Jan 13 20:32:53.114862 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:32:53.114926 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:32:53.114952 kernel: BIOS-provided physical RAM map:
Jan 13 20:32:53.114971 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:32:53.114990 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:32:53.115013 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:32:53.115035 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 13 20:32:53.115055 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 13 20:32:53.115074 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:32:53.115186 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:32:53.115207 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 13 20:32:53.115227 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:32:53.115256 kernel: NX (Execute Disable) protection: active
Jan 13 20:32:53.115276 kernel: APIC: Static calls initialized
Jan 13 20:32:53.115305 kernel: SMBIOS 3.0.0 present.
Jan 13 20:32:53.115326 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 13 20:32:53.115346 kernel: Hypervisor detected: KVM
Jan 13 20:32:53.115366 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:32:53.115387 kernel: kvm-clock: using sched offset of 5025855907 cycles
Jan 13 20:32:53.115412 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:32:53.115434 kernel: tsc: Detected 1996.249 MHz processor
Jan 13 20:32:53.115455 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:32:53.115477 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:32:53.115499 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 13 20:32:53.115520 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:32:53.117582 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:32:53.117605 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 13 20:32:53.117622 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:32:53.117645 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 13 20:32:53.117661 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:32:53.117676 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:32:53.117692 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:32:53.117708 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 13 20:32:53.117724 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:32:53.117740 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:32:53.117755 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 13 20:32:53.117771 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 13 20:32:53.117790 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 13 20:32:53.117806 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 13 20:32:53.117822 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 13 20:32:53.117843 kernel: No NUMA configuration found
Jan 13 20:32:53.117860 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 13 20:32:53.117876 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff]
Jan 13 20:32:53.117893 kernel: Zone ranges:
Jan 13 20:32:53.117912 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:32:53.117929 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 20:32:53.117945 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 20:32:53.117961 kernel: Movable zone start for each node
Jan 13 20:32:53.117977 kernel: Early memory node ranges
Jan 13 20:32:53.117994 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:32:53.118010 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 13 20:32:53.118026 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 20:32:53.118046 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 13 20:32:53.118062 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:32:53.118078 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:32:53.118095 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 13 20:32:53.118111 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 20:32:53.118128 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:32:53.118144 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:32:53.118162 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:32:53.118178 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:32:53.118198 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:32:53.118214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:32:53.118230 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:32:53.118247 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:32:53.118263 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 20:32:53.118280 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:32:53.118296 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 13 20:32:53.118312 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:32:53.118329 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:32:53.118350 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 20:32:53.118366 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 20:32:53.118383 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 20:32:53.118399 kernel: pcpu-alloc: [0] 0 1
Jan 13 20:32:53.118414 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 13 20:32:53.118434 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:32:53.118452 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:32:53.118472 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:32:53.118489 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:32:53.118505 kernel: Fallback order for Node 0: 0
Jan 13 20:32:53.118522 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 13 20:32:53.118563 kernel: Policy zone: Normal
Jan 13 20:32:53.118580 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:32:53.118597 kernel: software IO TLB: area num 2.
Jan 13 20:32:53.118614 kernel: Memory: 3964168K/4193772K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 229344K reserved, 0K cma-reserved)
Jan 13 20:32:53.118631 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:32:53.118652 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:32:53.118668 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:32:53.118685 kernel: Dynamic Preempt: voluntary
Jan 13 20:32:53.118701 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:32:53.118719 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:32:53.118736 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:32:53.118753 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:32:53.118770 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:32:53.118786 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:32:53.118803 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:32:53.118823 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:32:53.118840 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 20:32:53.118856 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:32:53.118873 kernel: Console: colour VGA+ 80x25
Jan 13 20:32:53.118889 kernel: printk: console [tty0] enabled
Jan 13 20:32:53.118905 kernel: printk: console [ttyS0] enabled
Jan 13 20:32:53.118922 kernel: ACPI: Core revision 20230628
Jan 13 20:32:53.118939 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:32:53.118955 kernel: x2apic enabled
Jan 13 20:32:53.118974 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:32:53.118991 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 20:32:53.119007 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 20:32:53.119024 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 13 20:32:53.119040 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 13 20:32:53.119056 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 13 20:32:53.119073 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:32:53.119089 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:32:53.119106 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:32:53.119125 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:32:53.119142 kernel: Speculative Store Bypass: Vulnerable
Jan 13 20:32:53.119158 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 13 20:32:53.119174 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:32:53.119201 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:32:53.119221 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:32:53.119238 kernel: landlock: Up and running.
Jan 13 20:32:53.119255 kernel: SELinux: Initializing.
Jan 13 20:32:53.119272 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:32:53.119290 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:32:53.119307 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 13 20:32:53.119328 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:32:53.119346 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:32:53.119363 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:32:53.119380 kernel: Performance Events: AMD PMU driver.
Jan 13 20:32:53.119397 kernel: ... version:                0
Jan 13 20:32:53.119417 kernel: ... bit width:              48
Jan 13 20:32:53.119434 kernel: ... generic registers:      4
Jan 13 20:32:53.119451 kernel: ... value mask:             0000ffffffffffff
Jan 13 20:32:53.119468 kernel: ... max period:             00007fffffffffff
Jan 13 20:32:53.119485 kernel: ... fixed-purpose events:   0
Jan 13 20:32:53.119502 kernel: ... event mask:             000000000000000f
Jan 13 20:32:53.119519 kernel: signal: max sigframe size: 1440
Jan 13 20:32:53.121554 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:32:53.121570 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:32:53.121583 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:32:53.121593 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:32:53.121604 kernel: .... node #0, CPUs:      #1
Jan 13 20:32:53.121613 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:32:53.121623 kernel: smpboot: Max logical packages: 2
Jan 13 20:32:53.121633 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 13 20:32:53.121643 kernel: devtmpfs: initialized
Jan 13 20:32:53.121652 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:32:53.121662 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:32:53.121672 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:32:53.121684 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:32:53.121694 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:32:53.121704 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:32:53.121714 kernel: audit: type=2000 audit(1736800371.917:1): state=initialized audit_enabled=0 res=1
Jan 13 20:32:53.121723 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:32:53.121733 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:32:53.121743 kernel: cpuidle: using governor menu
Jan 13 20:32:53.121753 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:32:53.121763 kernel: dca service started, version 1.12.1
Jan 13 20:32:53.121776 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:32:53.121785 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:32:53.121795 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:32:53.121804 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:32:53.121813 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:32:53.121822 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:32:53.121831 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:32:53.121840 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:32:53.121849 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:32:53.121860 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:32:53.121870 kernel: ACPI: Interpreter enabled
Jan 13 20:32:53.121879 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 20:32:53.121888 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:32:53.121897 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:32:53.121906 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:32:53.121915 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 13 20:32:53.121924 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:32:53.122070 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:32:53.122170 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 20:32:53.122270 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 20:32:53.122284 kernel: acpiphp: Slot [3] registered
Jan 13 20:32:53.122294 kernel: acpiphp: Slot [4] registered
Jan 13 20:32:53.122303 kernel: acpiphp: Slot [5] registered
Jan 13 20:32:53.122312 kernel: acpiphp: Slot [6] registered
Jan 13 20:32:53.122321 kernel: acpiphp: Slot [7] registered
Jan 13 20:32:53.122334 kernel: acpiphp: Slot [8] registered
Jan 13 20:32:53.122345 kernel: acpiphp: Slot [9] registered
Jan 13 20:32:53.122355 kernel: acpiphp: Slot [10] registered
Jan 13 20:32:53.122365 kernel: acpiphp: Slot [11] registered
Jan 13 20:32:53.122374 kernel: acpiphp: Slot [12] registered
Jan 13 20:32:53.122384 kernel: acpiphp: Slot [13] registered
Jan 13 20:32:53.122394 kernel: acpiphp: Slot [14] registered
Jan 13 20:32:53.122403 kernel: acpiphp: Slot [15] registered
Jan 13 20:32:53.122413 kernel: acpiphp: Slot [16] registered
Jan 13 20:32:53.122425 kernel: acpiphp: Slot [17] registered
Jan 13 20:32:53.122435 kernel: acpiphp: Slot [18] registered
Jan 13 20:32:53.122444 kernel: acpiphp: Slot [19] registered
Jan 13 20:32:53.122454 kernel: acpiphp: Slot [20] registered
Jan 13 20:32:53.122464 kernel: acpiphp: Slot [21] registered
Jan 13 20:32:53.122473 kernel: acpiphp: Slot [22] registered
Jan 13 20:32:53.122483 kernel: acpiphp: Slot [23] registered
Jan 13 20:32:53.122493 kernel: acpiphp: Slot [24] registered
Jan 13 20:32:53.122503 kernel: acpiphp: Slot [25] registered
Jan 13 20:32:53.122512 kernel: acpiphp: Slot [26] registered
Jan 13 20:32:53.122524 kernel: acpiphp: Slot [27] registered
Jan 13 20:32:53.122553 kernel: acpiphp: Slot [28] registered
Jan 13 20:32:53.122564 kernel: acpiphp: Slot [29] registered
Jan 13 20:32:53.122574 kernel: acpiphp: Slot [30] registered
Jan 13 20:32:53.122584 kernel: acpiphp: Slot [31] registered
Jan 13 20:32:53.122593 kernel: PCI host bridge to bus 0000:00
Jan 13 20:32:53.122696 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:32:53.122784 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:32:53.122871 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:32:53.122951 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 20:32:53.123031 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 13 20:32:53.123116 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:32:53.123217 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 20:32:53.123316 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 20:32:53.123414 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 13 20:32:53.123511 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 13 20:32:53.124785 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 13 20:32:53.124902 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 13 20:32:53.124993 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 13 20:32:53.125083 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 13 20:32:53.125184 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 13 20:32:53.125280 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 13 20:32:53.125371 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 13 20:32:53.125468 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 13 20:32:53.125584 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 13 20:32:53.125676 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 13 20:32:53.125765 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 13 20:32:53.125853 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 13 20:32:53.125947 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:32:53.126044 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:32:53.126134 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 13 20:32:53.126225 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 13 20:32:53.126316 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 13 20:32:53.126405 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 13 20:32:53.126503 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 13 20:32:53.128698 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 20:32:53.128813 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 13 20:32:53.128903 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 13 20:32:53.129002 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 13 20:32:53.129093 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 13 20:32:53.129182 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 13 20:32:53.129277 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:32:53.129374 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 13 20:32:53.129463 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 13 20:32:53.129571 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 13 20:32:53.129586 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:32:53.129595 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:32:53.129605 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:32:53.129614 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:32:53.129624 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 20:32:53.129637 kernel: iommu: Default domain type: Translated
Jan 13 20:32:53.129647 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:32:53.129656 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:32:53.129665 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:32:53.129675 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:32:53.129684 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 13 20:32:53.129774 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 13 20:32:53.129864 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 13 20:32:53.129958 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:32:53.129972 kernel: vgaarb: loaded
Jan 13 20:32:53.129981 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:32:53.129991 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:32:53.130000 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:32:53.130010 kernel: pnp: PnP ACPI init
Jan 13 20:32:53.130099 kernel: pnp 00:03: [dma 2]
Jan 13 20:32:53.130115 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 20:32:53.130124 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:32:53.130138 kernel: NET: Registered PF_INET protocol family
Jan 13 20:32:53.130147 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:32:53.130157 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:32:53.130166 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:32:53.130176 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:32:53.130185 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:32:53.130195 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:32:53.130204 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:32:53.130214 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:32:53.130225 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:32:53.130235 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:32:53.130317 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:32:53.130398 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:32:53.130478 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:32:53.132589 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 13 20:32:53.132675 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 13 20:32:53.132780 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 13 20:32:53.132879 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 20:32:53.132894 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:32:53.132904 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 20:32:53.132913 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 13 20:32:53.132922 kernel: Initialise system trusted keyrings
Jan 13 20:32:53.132932 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:32:53.132941 kernel: Key type asymmetric registered
Jan 13 20:32:53.132950 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:32:53.132963 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:32:53.132972 kernel: io scheduler mq-deadline registered
Jan 13 20:32:53.132981 kernel: io scheduler kyber registered
Jan 13 20:32:53.132991 kernel: io scheduler bfq registered
Jan 13 20:32:53.133000 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:32:53.133010 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 13 20:32:53.133019 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 13 20:32:53.133028 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 20:32:53.133038 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 13 20:32:53.133049 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:32:53.133059 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:32:53.133068 kernel: random: crng init done
Jan 13 20:32:53.133077 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:32:53.133086 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:32:53.133096 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:32:53.133190 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 20:32:53.133206 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:32:53.133284 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 20:32:53.133370 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:32:52 UTC (1736800372)
Jan 13 20:32:53.133452 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 13 20:32:53.133466 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 20:32:53.133475 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:32:53.133484 kernel: Segment Routing with IPv6
Jan 13 20:32:53.133494 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:32:53.133503 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:32:53.133512 kernel: Key type dns_resolver registered
Jan 13 20:32:53.133525 kernel: IPI shorthand broadcast: enabled
Jan 13 20:32:53.133548 kernel: sched_clock: Marking stable (1021007140, 168337220)->(1219978759, -30634399)
Jan 13 20:32:53.133558 kernel: registered taskstats version 1
Jan 13 20:32:53.133567 kernel: Loading compiled-in X.509 certificates
Jan 13 20:32:53.133576 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 13 20:32:53.133586 kernel: Key type .fscrypt registered
Jan 13 20:32:53.133595 kernel: Key type fscrypt-provisioning registered
Jan 13 20:32:53.133604 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:32:53.133613 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:32:53.133625 kernel: ima: No architecture policies found
Jan 13 20:32:53.133634 kernel: clk: Disabling unused clocks
Jan 13 20:32:53.133643 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 13 20:32:53.133653 kernel: Write protecting the kernel read-only data: 38912k
Jan 13 20:32:53.133662 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 13 20:32:53.133671 kernel: Run /init as init process
Jan 13 20:32:53.133681 kernel: with arguments:
Jan 13 20:32:53.133690 kernel: /init
Jan 13 20:32:53.133699 kernel: with environment:
Jan 13 20:32:53.133709 kernel: HOME=/
Jan 13 20:32:53.133719 kernel: TERM=linux
Jan 13 20:32:53.133728 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:32:53.133740 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:32:53.133753 systemd[1]: Detected virtualization kvm.
Jan 13 20:32:53.133763 systemd[1]: Detected architecture x86-64.
Jan 13 20:32:53.133773 systemd[1]: Running in initrd.
Jan 13 20:32:53.133785 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:32:53.133795 systemd[1]: Hostname set to <localhost>.
Jan 13 20:32:53.133805 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:32:53.133815 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:32:53.133825 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:32:53.133836 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:32:53.133846 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:32:53.133865 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:32:53.133877 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:32:53.133888 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:32:53.133900 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:32:53.133910 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:32:53.133921 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:32:53.133933 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:32:53.133944 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:32:53.133954 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:32:53.133964 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:32:53.133974 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:32:53.133984 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:32:53.133995 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:32:53.134005 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:32:53.134017 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:32:53.134028 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:32:53.134038 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:32:53.134048 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:32:53.134059 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:32:53.134069 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:32:53.134079 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:32:53.134090 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:32:53.134100 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:32:53.134112 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:32:53.134123 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:32:53.134133 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:32:53.134143 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:32:53.134153 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:32:53.134182 systemd-journald[184]: Collecting audit messages is disabled.
Jan 13 20:32:53.134211 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:32:53.134226 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:32:53.134237 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:32:53.134248 systemd-journald[184]: Journal started
Jan 13 20:32:53.134271 systemd-journald[184]: Runtime Journal (/run/log/journal/d93e6e6ca98c43619e5fdbbe84c6a54f) is 8.0M, max 78.3M, 70.3M free.
Jan 13 20:32:53.113871 systemd-modules-load[185]: Inserted module 'overlay'
Jan 13 20:32:53.179906 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:32:53.179929 kernel: Bridge firewalling registered
Jan 13 20:32:53.179942 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:32:53.147747 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 13 20:32:53.181487 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:32:53.182169 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:32:53.187715 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:32:53.190692 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:32:53.191839 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:32:53.194759 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:32:53.215951 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:32:53.220780 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:32:53.221784 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:32:53.225878 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:32:53.227358 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:32:53.236836 dracut-cmdline[215]: dracut-dracut-053
Jan 13 20:32:53.241178 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:32:53.238684 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:32:53.279639 systemd-resolved[225]: Positive Trust Anchors:
Jan 13 20:32:53.279652 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:32:53.279694 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:32:53.282678 systemd-resolved[225]: Defaulting to hostname 'linux'.
Jan 13 20:32:53.283614 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:32:53.285387 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:32:53.336683 kernel: SCSI subsystem initialized
Jan 13 20:32:53.347657 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:32:53.365814 kernel: iscsi: registered transport (tcp)
Jan 13 20:32:53.388772 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:32:53.388858 kernel: QLogic iSCSI HBA Driver
Jan 13 20:32:53.437788 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:32:53.446792 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:32:53.509246 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:32:53.509395 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:32:53.513326 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:32:53.568610 kernel: raid6: sse2x4 gen() 12707 MB/s
Jan 13 20:32:53.586578 kernel: raid6: sse2x2 gen() 14595 MB/s
Jan 13 20:32:53.604901 kernel: raid6: sse2x1 gen() 10142 MB/s
Jan 13 20:32:53.604981 kernel: raid6: using algorithm sse2x2 gen() 14595 MB/s
Jan 13 20:32:53.623951 kernel: raid6: .... xor() 9229 MB/s, rmw enabled
Jan 13 20:32:53.624049 kernel: raid6: using ssse3x2 recovery algorithm
Jan 13 20:32:53.647785 kernel: xor: measuring software checksum speed
Jan 13 20:32:53.647939 kernel: prefetch64-sse : 18483 MB/sec
Jan 13 20:32:53.651182 kernel: generic_sse : 14484 MB/sec
Jan 13 20:32:53.651283 kernel: xor: using function: prefetch64-sse (18483 MB/sec)
Jan 13 20:32:53.818621 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:32:53.837904 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:32:53.844939 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:32:53.857606 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Jan 13 20:32:53.862441 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:32:53.872840 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:32:53.899800 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Jan 13 20:32:53.945620 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:32:53.955881 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:32:54.028184 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:32:54.043943 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:32:54.097884 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:32:54.100765 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:32:54.102287 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:32:54.103675 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:32:54.109709 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:32:54.137670 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 13 20:32:54.170263 kernel: libata version 3.00 loaded.
Jan 13 20:32:54.170283 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 13 20:32:54.170435 kernel: scsi host0: ata_piix
Jan 13 20:32:54.170582 kernel: scsi host1: ata_piix
Jan 13 20:32:54.170698 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 13 20:32:54.170713 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 13 20:32:54.170725 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 13 20:32:54.170831 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:32:54.170845 kernel: GPT:17805311 != 20971519
Jan 13 20:32:54.170856 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:32:54.170872 kernel: GPT:17805311 != 20971519
Jan 13 20:32:54.170883 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:32:54.170894 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:32:54.138213 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:32:54.152170 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:32:54.152288 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:32:54.177775 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:32:54.178593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:32:54.178751 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:32:54.179728 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:32:54.190795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:32:54.246902 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:32:54.254795 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:32:54.278237 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:32:54.350819 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (450)
Jan 13 20:32:54.370573 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (457)
Jan 13 20:32:54.379413 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:32:54.386151 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:32:54.392846 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:32:54.398639 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:32:54.399233 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:32:54.405784 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:32:54.418891 disk-uuid[513]: Primary Header is updated.
Jan 13 20:32:54.418891 disk-uuid[513]: Secondary Entries is updated.
Jan 13 20:32:54.418891 disk-uuid[513]: Secondary Header is updated.
Jan 13 20:32:54.430573 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:32:55.450596 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:32:55.451666 disk-uuid[514]: The operation has completed successfully.
Jan 13 20:32:55.539299 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:32:55.539431 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:32:55.559690 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:32:55.564513 sh[525]: Success
Jan 13 20:32:55.582766 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 13 20:32:55.643331 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:32:55.644746 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:32:55.647299 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:32:55.667588 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 13 20:32:55.667621 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:32:55.671120 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:32:55.671142 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:32:55.672886 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:32:55.688904 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:32:55.691063 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:32:55.698841 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:32:55.703340 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:32:55.719771 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:32:55.719833 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:32:55.719847 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:32:55.726558 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:32:55.744412 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:32:55.750574 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:32:55.766946 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:32:55.773752 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:32:55.841882 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:32:55.848725 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:32:55.889819 systemd-networkd[709]: lo: Link UP
Jan 13 20:32:55.890585 systemd-networkd[709]: lo: Gained carrier
Jan 13 20:32:55.891996 systemd-networkd[709]: Enumeration completed
Jan 13 20:32:55.892716 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:32:55.893471 systemd[1]: Reached target network.target - Network.
Jan 13 20:32:55.894703 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:32:55.894707 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:32:55.899089 systemd-networkd[709]: eth0: Link UP
Jan 13 20:32:55.899097 systemd-networkd[709]: eth0: Gained carrier
Jan 13 20:32:55.899169 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:32:55.915594 systemd-networkd[709]: eth0: DHCPv4 address 172.24.4.95/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 20:32:55.932557 ignition[648]: Ignition 2.20.0
Jan 13 20:32:55.933284 ignition[648]: Stage: fetch-offline
Jan 13 20:32:55.933323 ignition[648]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:55.933333 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:32:55.934813 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:32:55.933427 ignition[648]: parsed url from cmdline: ""
Jan 13 20:32:55.933431 ignition[648]: no config URL provided
Jan 13 20:32:55.933437 ignition[648]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:32:55.933445 ignition[648]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:32:55.933450 ignition[648]: failed to fetch config: resource requires networking
Jan 13 20:32:55.933661 ignition[648]: Ignition finished successfully
Jan 13 20:32:55.941771 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:32:55.955875 ignition[717]: Ignition 2.20.0
Jan 13 20:32:55.955887 ignition[717]: Stage: fetch
Jan 13 20:32:55.956061 ignition[717]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:55.956073 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:32:55.956162 ignition[717]: parsed url from cmdline: ""
Jan 13 20:32:55.956166 ignition[717]: no config URL provided
Jan 13 20:32:55.956171 ignition[717]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:32:55.956180 ignition[717]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:32:55.956265 ignition[717]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 13 20:32:55.956324 ignition[717]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 13 20:32:55.956365 ignition[717]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 13 20:32:56.153445 ignition[717]: GET result: OK
Jan 13 20:32:56.154323 ignition[717]: parsing config with SHA512: f9ddfc3d0acf76e36f35796a4dc8e37bfc6c9a440782106c94894b82cbb740447076296799a4b01023ac71a6ab2ad46bdb90ca276ebd3a014eec39695efd9bec
Jan 13 20:32:56.161229 unknown[717]: fetched base config from "system"
Jan 13 20:32:56.161254 unknown[717]: fetched base config from "system"
Jan 13 20:32:56.161855 ignition[717]: fetch: fetch complete
Jan 13 20:32:56.161269 unknown[717]: fetched user config from "openstack"
Jan 13 20:32:56.161868 ignition[717]: fetch: fetch passed
Jan 13 20:32:56.165242 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:32:56.161957 ignition[717]: Ignition finished successfully
Jan 13 20:32:56.166160 systemd-resolved[225]: Detected conflict on linux IN A 172.24.4.95
Jan 13 20:32:56.166180 systemd-resolved[225]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Jan 13 20:32:56.181966 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:32:56.213830 ignition[723]: Ignition 2.20.0
Jan 13 20:32:56.213856 ignition[723]: Stage: kargs
Jan 13 20:32:56.214274 ignition[723]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:56.214303 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:32:56.219396 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:32:56.216253 ignition[723]: kargs: kargs passed
Jan 13 20:32:56.216352 ignition[723]: Ignition finished successfully
Jan 13 20:32:56.231858 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:32:56.271154 ignition[731]: Ignition 2.20.0
Jan 13 20:32:56.271175 ignition[731]: Stage: disks
Jan 13 20:32:56.271620 ignition[731]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:56.271648 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:32:56.275472 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:32:56.273467 ignition[731]: disks: disks passed
Jan 13 20:32:56.278270 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:32:56.273637 ignition[731]: Ignition finished successfully
Jan 13 20:32:56.280855 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:32:56.283469 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:32:56.285881 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:32:56.288813 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:32:56.297881 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:32:56.348109 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 20:32:56.363950 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:32:56.371742 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:32:56.549572 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 13 20:32:56.550331 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:32:56.552222 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:32:56.559775 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:32:56.563016 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:32:56.566222 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:32:56.571862 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 13 20:32:56.591603 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (747)
Jan 13 20:32:56.591654 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:32:56.591686 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:32:56.591716 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:32:56.572471 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:32:56.572502 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:32:56.619588 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:32:56.590018 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:32:56.609785 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:32:56.630904 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:32:56.762296 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:32:56.772936 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:32:56.783215 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:32:56.794807 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:32:56.937909 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:32:56.943784 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:32:56.950878 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:32:56.958292 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:32:56.960429 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:32:57.000000 ignition[864]: INFO : Ignition 2.20.0
Jan 13 20:32:57.000924 ignition[864]: INFO : Stage: mount
Jan 13 20:32:57.002282 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:32:57.002282 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:32:57.004882 ignition[864]: INFO : mount: mount passed
Jan 13 20:32:57.004882 ignition[864]: INFO : Ignition finished successfully
Jan 13 20:32:57.005235 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:32:57.011221 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:32:57.556142 systemd-networkd[709]: eth0: Gained IPv6LL
Jan 13 20:33:03.852718 coreos-metadata[749]: Jan 13 20:33:03.852 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 20:33:03.897314 coreos-metadata[749]: Jan 13 20:33:03.897 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 20:33:03.920058 coreos-metadata[749]: Jan 13 20:33:03.919 INFO Fetch successful
Jan 13 20:33:03.922121 coreos-metadata[749]: Jan 13 20:33:03.921 INFO wrote hostname ci-4186-1-0-8-4ccf0e7571.novalocal to /sysroot/etc/hostname
Jan 13 20:33:03.925532 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 13 20:33:03.925881 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 13 20:33:03.943862 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:33:03.970990 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:33:04.078650 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (882)
Jan 13 20:33:04.090604 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:33:04.090709 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:33:04.094454 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:33:04.105605 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:33:04.112976 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:33:04.162950 ignition[900]: INFO : Ignition 2.20.0
Jan 13 20:33:04.162950 ignition[900]: INFO : Stage: files
Jan 13 20:33:04.166019 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:33:04.166019 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:33:04.166019 ignition[900]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:33:04.171900 ignition[900]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:33:04.171900 ignition[900]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:33:04.176100 ignition[900]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:33:04.176100 ignition[900]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:33:04.179910 ignition[900]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:33:04.176801 unknown[900]: wrote ssh authorized keys file for user: core
Jan 13 20:33:04.183876 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:33:04.183876 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:33:04.183876 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:33:04.183876 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:33:04.183876 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:33:04.183876 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:33:04.183876 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:33:04.183876 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 13 20:33:04.679028 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 13 20:33:07.142676 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:33:07.144157 ignition[900]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:33:07.144157 ignition[900]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:33:07.144157 ignition[900]: INFO : files: files passed
Jan 13 20:33:07.144157 ignition[900]: INFO : Ignition finished successfully
Jan 13 20:33:07.146798 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:33:07.156813 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:33:07.161618 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:33:07.167856 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:33:07.167964 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:33:07.188895 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:33:07.190661 initrd-setup-root-after-ignition[928]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:33:07.190661 initrd-setup-root-after-ignition[928]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:33:07.193071 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:33:07.196706 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:33:07.204861 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:33:07.265406 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:33:07.265746 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:33:07.269315 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:33:07.271725 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:33:07.274711 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:33:07.281015 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:33:07.314427 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:33:07.327823 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:33:07.355470 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:33:07.357220 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:33:07.360336 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:33:07.363218 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:33:07.363501 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:33:07.366722 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:33:07.368698 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:33:07.371680 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:33:07.374071 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:33:07.386173 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:33:07.389107 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:33:07.392008 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:33:07.395043 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:33:07.397876 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:33:07.400825 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:33:07.403499 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:33:07.403855 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:33:07.407010 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:33:07.408994 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:33:07.411377 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:33:07.411691 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:33:07.414344 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:33:07.414675 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:33:07.418641 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:33:07.418987 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:33:07.421973 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:33:07.422234 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:33:07.433864 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:33:07.436307 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:33:07.437840 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:33:07.447029 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:33:07.448344 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:33:07.450680 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:33:07.454641 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:33:07.455242 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:33:07.468658 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:33:07.468822 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:33:07.476592 ignition[952]: INFO : Ignition 2.20.0
Jan 13 20:33:07.477398 ignition[952]: INFO : Stage: umount
Jan 13 20:33:07.478596 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:33:07.478596 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:33:07.480471 ignition[952]: INFO : umount: umount passed
Jan 13 20:33:07.480471 ignition[952]: INFO : Ignition finished successfully
Jan 13 20:33:07.482445 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:33:07.482582 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:33:07.484402 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:33:07.484459 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:33:07.485897 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:33:07.485937 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:33:07.487024 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:33:07.487063 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:33:07.489895 systemd[1]: Stopped target network.target - Network.
Jan 13 20:33:07.490839 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:33:07.490882 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:33:07.491882 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:33:07.492859 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:33:07.498649 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:33:07.499967 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:33:07.500745 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:33:07.501825 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:33:07.501870 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:33:07.502804 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:33:07.502836 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:33:07.503784 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:33:07.503834 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:33:07.504801 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:33:07.504841 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:33:07.505903 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:33:07.507091 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:33:07.509072 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:33:07.509664 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:33:07.509750 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:33:07.511188 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:33:07.511243 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:33:07.512611 systemd-networkd[709]: eth0: DHCPv6 lease lost
Jan 13 20:33:07.514134 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:33:07.514406 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:33:07.515387 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:33:07.515418 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:33:07.522766 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:33:07.523530 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:33:07.523600 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:33:07.524250 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:33:07.525139 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:33:07.525240 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:33:07.530354 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:33:07.530432 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:33:07.534773 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:33:07.534830 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:33:07.536060 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:33:07.536101 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:33:07.539093 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:33:07.539225 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:33:07.540676 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:33:07.540782 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:33:07.542482 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:33:07.542553 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:33:07.543773 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:33:07.543807 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:33:07.548698 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:33:07.548756 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:33:07.550360 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:33:07.550399 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:33:07.551506 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:33:07.551569 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:33:07.558714 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:33:07.560911 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:33:07.560987 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:33:07.561593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:33:07.561643 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:33:07.564846 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:33:07.564969 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:33:07.566701 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:33:07.571686 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:33:07.581401 systemd[1]: Switching root.
Jan 13 20:33:07.616403 systemd-journald[184]: Journal stopped
Jan 13 20:33:09.316867 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
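Each entry in this log follows the same console-journal shape: a timestamp, a source with an optional PID in brackets, and a message. As a minimal sketch (the regex and field names below are illustrative, not part of any systemd API; kernel lines, which carry no `[pid]`, would need a looser pattern), such an entry can be pulled apart like this:

```python
import re

# One console-journal entry, copied verbatim from the log above.
line = "Jan 13 20:33:07.482445 systemd[1]: ignition-mount.service: Deactivated successfully."

# Hypothetical parser for "MMM DD HH:MM:SS.micros source[pid]: message" entries.
pattern = re.compile(
    r"^(?P<month>\w{3}) (?P<day>\d{1,2}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<source>[\w.-]+)\[(?P<pid>\d+)\]: (?P<message>.*)$"
)
fields = pattern.match(line).groupdict()
print(fields["source"], fields["pid"], fields["message"])
```

The named groups make it easy to filter the transcript by unit or by emitting process.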
Jan 13 20:33:09.316939 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:33:09.316958 kernel: SELinux: policy capability open_perms=1
Jan 13 20:33:09.316970 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:33:09.316982 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:33:09.316995 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:33:09.317010 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:33:09.317022 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:33:09.317035 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:33:09.317048 systemd[1]: Successfully loaded SELinux policy in 77.859ms.
Jan 13 20:33:09.317069 kernel: audit: type=1403 audit(1736800388.225:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:33:09.317082 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.467ms.
Jan 13 20:33:09.317096 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:33:09.317110 systemd[1]: Detected virtualization kvm.
Jan 13 20:33:09.317123 systemd[1]: Detected architecture x86-64.
Jan 13 20:33:09.317138 systemd[1]: Detected first boot.
Jan 13 20:33:09.317152 systemd[1]: Hostname set to <ci-4186-1-0-8-4ccf0e7571.novalocal>.
Jan 13 20:33:09.317173 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:33:09.317186 zram_generator::config[995]: No configuration found.
Jan 13 20:33:09.317202 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:33:09.317215 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:33:09.317227 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:33:09.317240 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:33:09.317254 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:33:09.317266 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:33:09.317279 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:33:09.317291 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:33:09.317304 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:33:09.317322 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:33:09.317336 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:33:09.317349 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:33:09.317363 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:33:09.317376 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:33:09.317390 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:33:09.317404 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:33:09.317422 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:33:09.317436 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:33:09.317452 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:33:09.317465 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:33:09.317479 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:33:09.317493 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:33:09.317507 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:33:09.317520 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:33:09.321571 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:33:09.321609 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:33:09.321625 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:33:09.321639 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:33:09.321653 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:33:09.321667 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:33:09.321681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:33:09.321695 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:33:09.321708 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:33:09.321731 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:33:09.321745 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:33:09.321759 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:33:09.321773 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:33:09.321787 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:33:09.321801 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:33:09.321814 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:33:09.321828 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:33:09.321843 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:33:09.321860 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:33:09.321873 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:33:09.321887 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:33:09.321902 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:33:09.321915 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:33:09.321929 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:33:09.321943 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:33:09.321957 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:33:09.321973 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:33:09.321986 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:33:09.322001 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:33:09.322015 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:33:09.322028 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:33:09.322042 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:33:09.322055 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:33:09.322069 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:33:09.322084 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:33:09.322100 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:33:09.322114 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:33:09.322127 kernel: loop: module loaded
Jan 13 20:33:09.322142 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:33:09.322155 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:33:09.322169 systemd[1]: Stopped verity-setup.service.
Jan 13 20:33:09.322183 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:33:09.322198 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:33:09.322212 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:33:09.322230 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:33:09.322245 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:33:09.322258 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:33:09.322272 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:33:09.322287 kernel: fuse: init (API version 7.39)
Jan 13 20:33:09.322300 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:33:09.322315 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:33:09.322328 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:33:09.322342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:33:09.322356 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:33:09.322370 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:33:09.322385 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:33:09.322401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:33:09.322446 systemd-journald[1084]: Collecting audit messages is disabled.
Jan 13 20:33:09.322482 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:33:09.322499 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:33:09.322512 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:33:09.322525 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:33:09.324952 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:33:09.324977 systemd-journald[1084]: Journal started
Jan 13 20:33:09.325005 systemd-journald[1084]: Runtime Journal (/run/log/journal/d93e6e6ca98c43619e5fdbbe84c6a54f) is 8.0M, max 78.3M, 70.3M free.
Jan 13 20:33:08.929826 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:33:08.963204 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 20:33:08.963626 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:33:09.331577 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:33:09.331633 kernel: ACPI: bus type drm_connector registered
Jan 13 20:33:09.332465 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:33:09.332860 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:33:09.333677 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:33:09.334414 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:33:09.345414 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:33:09.352747 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:33:09.356353 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:33:09.356952 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:33:09.356993 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:33:09.358691 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:33:09.363785 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:33:09.367780 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:33:09.368371 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:33:09.374673 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:33:09.383688 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:33:09.384704 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:33:09.385856 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:33:09.387026 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:33:09.391659 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:33:09.393400 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:33:09.395484 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:33:09.397479 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:33:09.399143 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:33:09.400890 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:33:09.411871 systemd-journald[1084]: Time spent on flushing to /var/log/journal/d93e6e6ca98c43619e5fdbbe84c6a54f is 69.712ms for 929 entries.
Jan 13 20:33:09.411871 systemd-journald[1084]: System Journal (/var/log/journal/d93e6e6ca98c43619e5fdbbe84c6a54f) is 8.0M, max 584.8M, 576.8M free.
Jan 13 20:33:09.538703 systemd-journald[1084]: Received client request to flush runtime journal.
Jan 13 20:33:09.538776 kernel: loop0: detected capacity change from 0 to 138184
Jan 13 20:33:09.443640 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:33:09.445644 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:33:09.449447 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:33:09.456989 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:33:09.464673 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:33:09.483513 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:33:09.496797 udevadm[1138]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 20:33:09.541842 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:33:09.578820 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:33:09.587374 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:33:09.588988 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:33:09.604850 kernel: loop1: detected capacity change from 0 to 141000
Jan 13 20:33:09.605121 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:33:09.615953 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:33:09.655623 systemd-tmpfiles[1147]: ACLs are not supported, ignoring.
Jan 13 20:33:09.655652 systemd-tmpfiles[1147]: ACLs are not supported, ignoring.
Jan 13 20:33:09.661631 kernel: loop2: detected capacity change from 0 to 205544
Jan 13 20:33:09.662472 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:33:09.722633 kernel: loop3: detected capacity change from 0 to 8
Jan 13 20:33:09.742033 kernel: loop4: detected capacity change from 0 to 138184
Jan 13 20:33:09.803659 kernel: loop5: detected capacity change from 0 to 141000
Jan 13 20:33:09.912126 kernel: loop6: detected capacity change from 0 to 205544
Jan 13 20:33:09.967574 kernel: loop7: detected capacity change from 0 to 8
Jan 13 20:33:09.968434 (sd-merge)[1153]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 13 20:33:09.968948 (sd-merge)[1153]: Merged extensions into '/usr'.
Jan 13 20:33:09.977700 systemd[1]: Reloading requested from client PID 1128 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:33:09.977718 systemd[1]: Reloading...
Jan 13 20:33:10.070627 zram_generator::config[1175]: No configuration found.
Jan 13 20:33:10.263007 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:33:10.324267 systemd[1]: Reloading finished in 346 ms.
Jan 13 20:33:10.349765 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:33:10.359065 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:33:10.362704 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:33:10.377611 systemd[1]: Reloading requested from client PID 1234 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:33:10.377647 systemd[1]: Reloading...
Jan 13 20:33:10.402241 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:33:10.402985 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:33:10.404655 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:33:10.405096 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Jan 13 20:33:10.405232 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Jan 13 20:33:10.420080 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:33:10.420325 systemd-tmpfiles[1235]: Skipping /boot
Jan 13 20:33:10.436734 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:33:10.437084 systemd-tmpfiles[1235]: Skipping /boot
Jan 13 20:33:10.464845 zram_generator::config[1259]: No configuration found.
Jan 13 20:33:10.552608 ldconfig[1123]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:33:10.638888 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:33:10.700502 systemd[1]: Reloading finished in 322 ms.
Jan 13 20:33:10.716431 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:33:10.717481 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:33:10.718456 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:33:10.738203 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:33:10.743713 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:33:10.750782 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:33:10.762790 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:33:10.774737 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:33:10.786783 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:33:10.799621 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:33:10.799832 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:33:10.805871 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:33:10.807805 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:33:10.817857 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:33:10.818593 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:33:10.828283 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:33:10.829608 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:33:10.830760 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:33:10.830927 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:33:10.841968 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:33:10.842223 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:33:10.847609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:33:10.848681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:33:10.848929 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:33:10.856835 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:33:10.862400 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:33:10.862710 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:33:10.867757 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:33:10.869181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:33:10.869294 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:33:10.870432 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:33:10.872235 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:33:10.872388 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:33:10.887762 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:33:10.889717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:33:10.889917 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:33:10.898965 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:33:10.900640 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:33:10.901583 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:33:10.906087 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:33:10.906143 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:33:10.915693 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:33:10.917789 systemd-udevd[1331]: Using default interface naming scheme 'v255'.
Jan 13 20:33:10.923805 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:33:10.924598 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:33:10.925412 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:33:10.925986 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:33:10.927509 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:33:10.938983 augenrules[1367]: No rules
Jan 13 20:33:10.940417 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:33:10.940651 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:33:10.962732 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:33:10.991162 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:33:11.004640 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:33:11.027359 systemd-resolved[1325]: Positive Trust Anchors:
Jan 13 20:33:11.030570 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:33:11.030618 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:33:11.037336 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:33:11.038093 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:33:11.038458 systemd-resolved[1325]: Using system hostname 'ci-4186-1-0-8-4ccf0e7571.novalocal'.
Jan 13 20:33:11.041474 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:33:11.042650 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:33:11.085986 systemd-networkd[1380]: lo: Link UP
Jan 13 20:33:11.086281 systemd-networkd[1380]: lo: Gained carrier
Jan 13 20:33:11.086871 systemd-networkd[1380]: Enumeration completed
Jan 13 20:33:11.087772 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:33:11.089315 systemd[1]: Reached target network.target - Network.
Jan 13 20:33:11.100780 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:33:11.109188 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 20:33:11.119572 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1394)
Jan 13 20:33:11.147241 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:33:11.147254 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:33:11.149692 systemd-networkd[1380]: eth0: Link UP
Jan 13 20:33:11.149700 systemd-networkd[1380]: eth0: Gained carrier
Jan 13 20:33:11.149724 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:33:11.169696 systemd-networkd[1380]: eth0: DHCPv4 address 172.24.4.95/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 20:33:11.170387 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection.
Jan 13 20:33:11.170580 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection.
Jan 13 20:33:11.198576 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 20:33:11.198393 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:33:11.204573 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 13 20:33:11.207553 kernel: ACPI: button: Power Button [PWRF]
Jan 13 20:33:11.206704 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:33:11.224666 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:33:11.265572 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 20:33:11.273594 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:33:11.279470 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:33:11.286769 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 13 20:33:11.291667 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 13 20:33:11.294417 kernel: Console: switching to colour dummy device 80x25
Jan 13 20:33:11.299670 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:33:11.299982 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:33:11.300676 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 13 20:33:11.300761 kernel: [drm] features: -context_init
Jan 13 20:33:11.303574 kernel: [drm] number of scanouts: 1
Jan 13 20:33:11.303615 kernel: [drm] number of cap sets: 0
Jan 13 20:33:11.307578 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 13 20:33:11.308917 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:33:11.313727 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 13 20:33:11.313819 kernel: Console: switching to colour frame buffer device 160x50
Jan 13 20:33:11.330579 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 13 20:33:11.334524 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:33:11.334857 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:33:11.341689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:33:11.341965 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:33:11.347742 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:33:11.369370 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:33:11.399608 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:33:11.399926 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:33:11.405698 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:33:11.412743 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:33:11.436204 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:33:11.436504 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:33:11.436705 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:33:11.436830 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:33:11.437075 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:33:11.437263 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:33:11.437345 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:33:11.437424 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:33:11.437455 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:33:11.437516 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:33:11.441598 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:33:11.443174 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:33:11.450454 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:33:11.452769 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:33:11.454685 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:33:11.456449 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:33:11.457738 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:33:11.458689 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:33:11.458718 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:33:11.470113 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:33:11.477927 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 20:33:11.487805 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:33:11.498657 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:33:11.508814 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:33:11.511185 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:33:11.518607 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:33:11.523309 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:33:11.528758 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:33:11.531100 jq[1439]: false
Jan 13 20:33:11.539721 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:33:11.543195 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:33:11.548105 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:33:11.552241 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:33:11.558277 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:33:11.564862 dbus-daemon[1436]: [system] SELinux support is enabled
Jan 13 20:33:11.565093 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:33:11.565708 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:33:11.565898 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:33:11.577484 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:33:11.577526 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:33:11.579969 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:33:11.579990 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:33:11.588129 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:33:11.588304 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:33:11.619515 extend-filesystems[1440]: Found loop4
Jan 13 20:33:11.635770 extend-filesystems[1440]: Found loop5
Jan 13 20:33:11.635770 extend-filesystems[1440]: Found loop6
Jan 13 20:33:11.635770 extend-filesystems[1440]: Found loop7
Jan 13 20:33:11.635770 extend-filesystems[1440]: Found vda
Jan 13 20:33:11.635770 extend-filesystems[1440]: Found vda1
Jan 13 20:33:11.635770 extend-filesystems[1440]: Found vda2
Jan 13 20:33:11.635770 extend-filesystems[1440]: Found vda3
Jan 13 20:33:11.635770 extend-filesystems[1440]: Found usr
Jan 13 20:33:11.635770 extend-filesystems[1440]: Found vda4
Jan 13 20:33:11.635770 extend-filesystems[1440]: Found vda6
Jan 13 20:33:11.635770 extend-filesystems[1440]: Found vda7
Jan 13 20:33:11.635770 extend-filesystems[1440]: Found vda9
Jan 13 20:33:11.635770 extend-filesystems[1440]: Checking size of /dev/vda9
Jan 13 20:33:11.770834 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jan 13 20:33:11.770905 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1385)
Jan 13 20:33:11.770951 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jan 13 20:33:11.620703 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:33:11.771038 jq[1447]: true
Jan 13 20:33:11.771141 extend-filesystems[1440]: Resized partition /dev/vda9
Jan 13 20:33:11.774153 update_engine[1446]: I20250113 20:33:11.643271 1446 main.cc:92] Flatcar Update Engine starting
Jan 13 20:33:11.774153 update_engine[1446]: I20250113 20:33:11.648352 1446 update_check_scheduler.cc:74] Next update check in 11m28s
Jan 13 20:33:11.620904 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:33:11.778513 extend-filesystems[1471]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:33:11.778513 extend-filesystems[1471]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:33:11.778513 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:33:11.778513 extend-filesystems[1471]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jan 13 20:33:11.821620 jq[1464]: true
Jan 13 20:33:11.648207 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:33:11.821934 extend-filesystems[1440]: Resized filesystem in /dev/vda9
Jan 13 20:33:11.667753 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:33:11.675575 systemd-logind[1444]: New seat seat0.
Jan 13 20:33:11.683455 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:33:11.779500 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 20:33:11.779521 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 20:33:11.780758 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:33:11.784037 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:33:11.784233 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:33:11.879169 bash[1490]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:33:11.881592 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:33:11.897852 systemd[1]: Starting sshkeys.service...
Jan 13 20:33:11.920589 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 20:33:11.921282 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:33:11.949039 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 20:33:12.092597 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:33:12.131613 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:33:12.136920 containerd[1461]: time="2025-01-13T20:33:12.136820752Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:33:12.142731 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:33:12.154435 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:33:12.154662 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:33:12.165952 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:33:12.171934 containerd[1461]: time="2025-01-13T20:33:12.169981134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:33:12.171934 containerd[1461]: time="2025-01-13T20:33:12.171738370Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:33:12.171934 containerd[1461]: time="2025-01-13T20:33:12.171763748Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:33:12.171934 containerd[1461]: time="2025-01-13T20:33:12.171782863Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:33:12.172047 containerd[1461]: time="2025-01-13T20:33:12.171949907Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:33:12.172047 containerd[1461]: time="2025-01-13T20:33:12.171971127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:33:12.172047 containerd[1461]: time="2025-01-13T20:33:12.172038974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:33:12.172120 containerd[1461]: time="2025-01-13T20:33:12.172056306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:33:12.172506 containerd[1461]: time="2025-01-13T20:33:12.172279254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:33:12.172506 containerd[1461]: time="2025-01-13T20:33:12.172305965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:33:12.172506 containerd[1461]: time="2025-01-13T20:33:12.172322235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:33:12.172506 containerd[1461]: time="2025-01-13T20:33:12.172333947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:33:12.172506 containerd[1461]: time="2025-01-13T20:33:12.172417714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:33:12.172807 containerd[1461]: time="2025-01-13T20:33:12.172646954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:33:12.172807 containerd[1461]: time="2025-01-13T20:33:12.172782799Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:33:12.172807 containerd[1461]: time="2025-01-13T20:33:12.172804029Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:33:12.172932 containerd[1461]: time="2025-01-13T20:33:12.172894579Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:33:12.172982 containerd[1461]: time="2025-01-13T20:33:12.172961213Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:33:12.186209 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:33:12.191913 containerd[1461]: time="2025-01-13T20:33:12.191098129Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:33:12.191913 containerd[1461]: time="2025-01-13T20:33:12.191182628Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:33:12.191913 containerd[1461]: time="2025-01-13T20:33:12.191204068Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:33:12.191913 containerd[1461]: time="2025-01-13T20:33:12.191226119Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:33:12.191913 containerd[1461]: time="2025-01-13T20:33:12.191243823Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:33:12.191913 containerd[1461]: time="2025-01-13T20:33:12.191427707Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:33:12.191913 containerd[1461]: time="2025-01-13T20:33:12.191731788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:33:12.191913 containerd[1461]: time="2025-01-13T20:33:12.191850100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:33:12.191913 containerd[1461]: time="2025-01-13T20:33:12.191873393Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:33:12.192368 containerd[1461]: time="2025-01-13T20:33:12.191891457Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:33:12.192465 containerd[1461]: time="2025-01-13T20:33:12.192445055Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:33:12.192588 containerd[1461]: time="2025-01-13T20:33:12.192520647Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:33:12.192732 containerd[1461]: time="2025-01-13T20:33:12.192690436Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:33:12.192818 containerd[1461]: time="2025-01-13T20:33:12.192801424Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:33:12.192893 containerd[1461]: time="2025-01-13T20:33:12.192877286Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:33:12.192957 containerd[1461]: time="2025-01-13T20:33:12.192943120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:33:12.193022 containerd[1461]: time="2025-01-13T20:33:12.193007400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:33:12.193082 containerd[1461]: time="2025-01-13T20:33:12.193068996Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:33:12.193158 containerd[1461]: time="2025-01-13T20:33:12.193143365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.193224 containerd[1461]: time="2025-01-13T20:33:12.193210681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.193567 containerd[1461]: time="2025-01-13T20:33:12.193316720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.193567 containerd[1461]: time="2025-01-13T20:33:12.193340866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.193567 containerd[1461]: time="2025-01-13T20:33:12.193356465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.193567 containerd[1461]: time="2025-01-13T20:33:12.193374889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.193567 containerd[1461]: time="2025-01-13T20:33:12.193389587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.193567 containerd[1461]: time="2025-01-13T20:33:12.193404575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.193567 containerd[1461]: time="2025-01-13T20:33:12.193419413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.193567 containerd[1461]: time="2025-01-13T20:33:12.193436896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.193567 containerd[1461]: time="2025-01-13T20:33:12.193450261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.193567 containerd[1461]: time="2025-01-13T20:33:12.193464568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.193567 containerd[1461]: time="2025-01-13T20:33:12.193479976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.193567 containerd[1461]: time="2025-01-13T20:33:12.193497990Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:33:12.193567 containerd[1461]: time="2025-01-13T20:33:12.193527766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.194046 containerd[1461]: time="2025-01-13T20:33:12.193882321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.194046 containerd[1461]: time="2025-01-13T20:33:12.193904242Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:33:12.194706 containerd[1461]: time="2025-01-13T20:33:12.194686038Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:33:12.195568 containerd[1461]: time="2025-01-13T20:33:12.194779403Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:33:12.195568 containerd[1461]: time="2025-01-13T20:33:12.194799411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:33:12.195568 containerd[1461]: time="2025-01-13T20:33:12.194815371Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:33:12.195568 containerd[1461]: time="2025-01-13T20:33:12.194829487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.195568 containerd[1461]: time="2025-01-13T20:33:12.194844415Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:33:12.195568 containerd[1461]: time="2025-01-13T20:33:12.194856909Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:33:12.195568 containerd[1461]: time="2025-01-13T20:33:12.194867990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:33:12.195740 containerd[1461]: time="2025-01-13T20:33:12.195211704Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:33:12.195740 containerd[1461]: time="2025-01-13T20:33:12.195274031Z" level=info msg="Connect containerd service"
Jan 13 20:33:12.195740 containerd[1461]: time="2025-01-13T20:33:12.195307324Z" level=info msg="using legacy CRI server"
Jan 13 20:33:12.195740 containerd[1461]: time="2025-01-13T20:33:12.195315449Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:33:12.195740 containerd[1461]: time="2025-01-13T20:33:12.195442397Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:33:12.197056 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:33:12.201771 containerd[1461]: time="2025-01-13T20:33:12.200187987Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:33:12.201771 containerd[1461]: time="2025-01-13T20:33:12.200680511Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:33:12.201771 containerd[1461]: time="2025-01-13T20:33:12.200769277Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:33:12.201771 containerd[1461]: time="2025-01-13T20:33:12.201267141Z" level=info msg="Start subscribing containerd event"
Jan 13 20:33:12.201771 containerd[1461]: time="2025-01-13T20:33:12.201343905Z" level=info msg="Start recovering state"
Jan 13 20:33:12.201771 containerd[1461]: time="2025-01-13T20:33:12.201447740Z" level=info msg="Start event monitor"
Jan 13 20:33:12.201771 containerd[1461]: time="2025-01-13T20:33:12.201497173Z" level=info msg="Start snapshots syncer"
Jan 13 20:33:12.201771 containerd[1461]: time="2025-01-13T20:33:12.201511640Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:33:12.201771 containerd[1461]: time="2025-01-13T20:33:12.201520536Z" level=info msg="Start streaming server"
Jan 13 20:33:12.201771 containerd[1461]: time="2025-01-13T20:33:12.201599124Z" level=info msg="containerd successfully booted in 0.065751s"
Jan 13 20:33:12.209979 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 20:33:12.214638 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:33:12.217824 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:33:12.659888 systemd-networkd[1380]: eth0: Gained IPv6LL
Jan 13 20:33:12.662914 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection.
Jan 13 20:33:12.665224 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:33:12.672863 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:33:12.695128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:33:12.701329 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:33:12.770104 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:33:13.741441 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:33:13.757149 systemd[1]: Started sshd@0-172.24.4.95:22-172.24.4.1:39150.service - OpenSSH per-connection server daemon (172.24.4.1:39150).
Jan 13 20:33:15.088256 sshd[1538]: Accepted publickey for core from 172.24.4.1 port 39150 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:33:15.095784 sshd-session[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:15.123388 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:33:15.137325 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:33:15.150167 systemd-logind[1444]: New session 1 of user core.
Jan 13 20:33:15.174327 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:33:15.185972 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:33:15.197573 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:33:15.253434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:15.257406 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:33:15.337272 systemd[1544]: Queued start job for default target default.target.
Jan 13 20:33:15.348951 systemd[1544]: Created slice app.slice - User Application Slice.
Jan 13 20:33:15.348981 systemd[1544]: Reached target paths.target - Paths.
Jan 13 20:33:15.348998 systemd[1544]: Reached target timers.target - Timers.
Jan 13 20:33:15.352688 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:33:15.361964 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:33:15.362869 systemd[1544]: Reached target sockets.target - Sockets.
Jan 13 20:33:15.362887 systemd[1544]: Reached target basic.target - Basic System.
Jan 13 20:33:15.362927 systemd[1544]: Reached target default.target - Main User Target.
Jan 13 20:33:15.362955 systemd[1544]: Startup finished in 158ms.
Jan 13 20:33:15.363250 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:33:15.380864 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:33:15.855327 systemd[1]: Started sshd@1-172.24.4.95:22-172.24.4.1:39154.service - OpenSSH per-connection server daemon (172.24.4.1:39154).
Jan 13 20:33:16.514383 kubelet[1555]: E0113 20:33:16.514275 1555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:33:16.516044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:33:16.516389 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:33:16.517200 systemd[1]: kubelet.service: Consumed 2.216s CPU time.
Jan 13 20:33:17.178062 sshd[1565]: Accepted publickey for core from 172.24.4.1 port 39154 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:33:17.180843 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:17.193011 systemd-logind[1444]: New session 2 of user core.
Jan 13 20:33:17.202005 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:33:17.242681 agetty[1521]: failed to open credentials directory
Jan 13 20:33:17.243821 agetty[1523]: failed to open credentials directory
Jan 13 20:33:17.260003 login[1521]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 20:33:17.278245 systemd-logind[1444]: New session 3 of user core.
Jan 13 20:33:17.286459 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:33:17.287467 login[1523]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 20:33:17.303779 systemd-logind[1444]: New session 4 of user core.
Jan 13 20:33:17.311059 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:33:17.954596 sshd[1570]: Connection closed by 172.24.4.1 port 39154
Jan 13 20:33:17.955665 sshd-session[1565]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:17.965733 systemd[1]: sshd@1-172.24.4.95:22-172.24.4.1:39154.service: Deactivated successfully.
Jan 13 20:33:17.968202 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:33:17.970141 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:33:17.976345 systemd[1]: Started sshd@2-172.24.4.95:22-172.24.4.1:39164.service - OpenSSH per-connection server daemon (172.24.4.1:39164).
Jan 13 20:33:17.979065 systemd-logind[1444]: Removed session 2.
Jan 13 20:33:18.576041 coreos-metadata[1435]: Jan 13 20:33:18.575 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 20:33:18.624746 coreos-metadata[1435]: Jan 13 20:33:18.624 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 13 20:33:18.815955 coreos-metadata[1435]: Jan 13 20:33:18.815 INFO Fetch successful
Jan 13 20:33:18.815955 coreos-metadata[1435]: Jan 13 20:33:18.815 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 20:33:18.833283 coreos-metadata[1435]: Jan 13 20:33:18.833 INFO Fetch successful
Jan 13 20:33:18.833283 coreos-metadata[1435]: Jan 13 20:33:18.833 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 13 20:33:18.846964 coreos-metadata[1435]: Jan 13 20:33:18.846 INFO Fetch successful
Jan 13 20:33:18.846964 coreos-metadata[1435]: Jan 13 20:33:18.846 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 13 20:33:18.861389 coreos-metadata[1435]: Jan 13 20:33:18.861 INFO Fetch successful
Jan 13 20:33:18.861389 coreos-metadata[1435]: Jan 13 20:33:18.861 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 13 20:33:18.874112 coreos-metadata[1435]: Jan 13 20:33:18.874 INFO Fetch successful
Jan 13 20:33:18.874112 coreos-metadata[1435]: Jan 13 20:33:18.874 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 13 20:33:18.887688 coreos-metadata[1435]: Jan 13 20:33:18.887 INFO Fetch successful
Jan 13 20:33:18.943426 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 20:33:18.946130 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:33:19.065200 coreos-metadata[1497]: Jan 13 20:33:19.064 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 20:33:19.106687 coreos-metadata[1497]: Jan 13 20:33:19.106 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 13 20:33:19.119804 coreos-metadata[1497]: Jan 13 20:33:19.119 INFO Fetch successful
Jan 13 20:33:19.119804 coreos-metadata[1497]: Jan 13 20:33:19.119 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 13 20:33:19.137343 coreos-metadata[1497]: Jan 13 20:33:19.137 INFO Fetch successful
Jan 13 20:33:19.143317 unknown[1497]: wrote ssh authorized keys file for user: core
Jan 13 20:33:19.181200 update-ssh-keys[1611]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:33:19.183682 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 20:33:19.188117 systemd[1]: Finished sshkeys.service.
Jan 13 20:33:19.193310 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:33:19.194211 systemd[1]: Startup finished in 1.245s (kernel) + 15.399s (initrd) + 11.046s (userspace) = 27.691s.
Jan 13 20:33:19.394500 sshd[1600]: Accepted publickey for core from 172.24.4.1 port 39164 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:33:19.396654 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:19.408184 systemd-logind[1444]: New session 5 of user core.
Jan 13 20:33:19.414855 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:33:20.083792 sshd[1615]: Connection closed by 172.24.4.1 port 39164
Jan 13 20:33:20.083618 sshd-session[1600]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:20.089149 systemd[1]: sshd@2-172.24.4.95:22-172.24.4.1:39164.service: Deactivated successfully.
Jan 13 20:33:20.092668 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:33:20.096444 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:33:20.098979 systemd-logind[1444]: Removed session 5.
Jan 13 20:33:26.564163 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:33:26.571934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:33:26.918051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:26.934425 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:33:27.018873 kubelet[1627]: E0113 20:33:27.018777 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:33:27.025162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:33:27.025455 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:33:30.113103 systemd[1]: Started sshd@3-172.24.4.95:22-172.24.4.1:59102.service - OpenSSH per-connection server daemon (172.24.4.1:59102).
Jan 13 20:33:31.635406 sshd[1635]: Accepted publickey for core from 172.24.4.1 port 59102 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:33:31.638212 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:31.648310 systemd-logind[1444]: New session 6 of user core.
Jan 13 20:33:31.667994 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:33:32.204012 sshd[1637]: Connection closed by 172.24.4.1 port 59102
Jan 13 20:33:32.205929 sshd-session[1635]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:32.221917 systemd[1]: sshd@3-172.24.4.95:22-172.24.4.1:59102.service: Deactivated successfully.
Jan 13 20:33:32.226307 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:33:32.230272 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:33:32.237168 systemd[1]: Started sshd@4-172.24.4.95:22-172.24.4.1:59114.service - OpenSSH per-connection server daemon (172.24.4.1:59114).
Jan 13 20:33:32.240221 systemd-logind[1444]: Removed session 6.
Jan 13 20:33:33.638614 sshd[1642]: Accepted publickey for core from 172.24.4.1 port 59114 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:33:33.641346 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:33.651463 systemd-logind[1444]: New session 7 of user core.
Jan 13 20:33:33.660921 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:33:34.383590 sshd[1644]: Connection closed by 172.24.4.1 port 59114
Jan 13 20:33:34.384129 sshd-session[1642]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:34.394272 systemd[1]: sshd@4-172.24.4.95:22-172.24.4.1:59114.service: Deactivated successfully.
Jan 13 20:33:34.397868 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:33:34.399491 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:33:34.413229 systemd[1]: Started sshd@5-172.24.4.95:22-172.24.4.1:51958.service - OpenSSH per-connection server daemon (172.24.4.1:51958).
Jan 13 20:33:34.415140 systemd-logind[1444]: Removed session 7.
Jan 13 20:33:35.795873 sshd[1649]: Accepted publickey for core from 172.24.4.1 port 51958 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:33:35.798489 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:35.808034 systemd-logind[1444]: New session 8 of user core.
Jan 13 20:33:35.820846 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 20:33:36.465961 sshd[1651]: Connection closed by 172.24.4.1 port 51958
Jan 13 20:33:36.465766 sshd-session[1649]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:36.481911 systemd[1]: sshd@5-172.24.4.95:22-172.24.4.1:51958.service: Deactivated successfully.
Jan 13 20:33:36.485863 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 20:33:36.489930 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit.
Jan 13 20:33:36.510179 systemd[1]: Started sshd@6-172.24.4.95:22-172.24.4.1:51974.service - OpenSSH per-connection server daemon (172.24.4.1:51974).
Jan 13 20:33:36.514359 systemd-logind[1444]: Removed session 8.
Jan 13 20:33:37.064137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 20:33:37.073115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:33:37.437839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:37.448093 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:33:37.543881 kubelet[1666]: E0113 20:33:37.543820 1666 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:33:37.546054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:33:37.546388 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:33:37.698179 sshd[1656]: Accepted publickey for core from 172.24.4.1 port 51974 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:33:37.700337 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:37.712158 systemd-logind[1444]: New session 9 of user core.
Jan 13 20:33:37.721882 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 20:33:38.018896 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:33:38.020314 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:33:38.043370 sudo[1675]: pam_unix(sudo:session): session closed for user root
Jan 13 20:33:38.252591 sshd[1674]: Connection closed by 172.24.4.1 port 51974
Jan 13 20:33:38.253963 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:38.264618 systemd[1]: sshd@6-172.24.4.95:22-172.24.4.1:51974.service: Deactivated successfully.
Jan 13 20:33:38.268344 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 20:33:38.270470 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit.
Jan 13 20:33:38.285177 systemd[1]: Started sshd@7-172.24.4.95:22-172.24.4.1:51982.service - OpenSSH per-connection server daemon (172.24.4.1:51982).
Jan 13 20:33:38.286983 systemd-logind[1444]: Removed session 9.
Jan 13 20:33:39.429022 sshd[1680]: Accepted publickey for core from 172.24.4.1 port 51982 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:33:39.432771 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:39.452173 systemd-logind[1444]: New session 10 of user core.
Jan 13 20:33:39.462897 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 20:33:39.969820 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:33:39.971230 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:33:39.978927 sudo[1684]: pam_unix(sudo:session): session closed for user root
Jan 13 20:33:39.990933 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:33:39.991638 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:33:40.016253 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:33:40.090749 augenrules[1706]: No rules
Jan 13 20:33:40.092392 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:33:40.092873 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:33:40.094916 sudo[1683]: pam_unix(sudo:session): session closed for user root
Jan 13 20:33:40.342328 sshd[1682]: Connection closed by 172.24.4.1 port 51982
Jan 13 20:33:40.344882 sshd-session[1680]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:40.355265 systemd[1]: sshd@7-172.24.4.95:22-172.24.4.1:51982.service: Deactivated successfully.
Jan 13 20:33:40.358187 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 20:33:40.360995 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit.
Jan 13 20:33:40.368124 systemd[1]: Started sshd@8-172.24.4.95:22-172.24.4.1:51994.service - OpenSSH per-connection server daemon (172.24.4.1:51994).
Jan 13 20:33:40.372306 systemd-logind[1444]: Removed session 10.
Jan 13 20:33:41.860265 sshd[1714]: Accepted publickey for core from 172.24.4.1 port 51994 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:33:41.864241 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:41.878012 systemd-logind[1444]: New session 11 of user core.
Jan 13 20:33:41.885848 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 20:33:42.406520 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:33:42.407273 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:33:42.847710 systemd-timesyncd[1352]: Contacted time server 95.81.173.8:123 (2.flatcar.pool.ntp.org).
Jan 13 20:33:42.847848 systemd-timesyncd[1352]: Initial clock synchronization to Mon 2025-01-13 20:33:43.182696 UTC.
Jan 13 20:33:44.227563 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:44.242308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:33:44.302594 systemd[1]: Reloading requested from client PID 1749 ('systemctl') (unit session-11.scope)...
Jan 13 20:33:44.302611 systemd[1]: Reloading...
Jan 13 20:33:44.391598 zram_generator::config[1783]: No configuration found.
Jan 13 20:33:44.556396 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:33:44.642874 systemd[1]: Reloading finished in 339 ms.
Jan 13 20:33:44.690596 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 20:33:44.690670 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 20:33:44.691022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:44.693245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:33:44.804999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:44.808643 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:33:45.081238 kubelet[1852]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:33:45.081238 kubelet[1852]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:33:45.081238 kubelet[1852]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:33:45.082019 kubelet[1852]: I0113 20:33:45.081351 1852 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:33:45.836667 kubelet[1852]: I0113 20:33:45.836537 1852 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 13 20:33:45.836667 kubelet[1852]: I0113 20:33:45.836590 1852 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:33:45.836981 kubelet[1852]: I0113 20:33:45.836870 1852 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 13 20:33:45.866251 kubelet[1852]: I0113 20:33:45.866191 1852 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:33:45.904371 kubelet[1852]: E0113 20:33:45.904305 1852 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 13 20:33:45.904371 kubelet[1852]: I0113 20:33:45.904342 1852 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 13 20:33:45.909272 kubelet[1852]: I0113 20:33:45.909232 1852 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:33:45.909577 kubelet[1852]: I0113 20:33:45.909347 1852 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 13 20:33:45.909577 kubelet[1852]: I0113 20:33:45.909461 1852 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:33:45.909750 kubelet[1852]: I0113 20:33:45.909487 1852 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 13 20:33:45.909750 kubelet[1852]: I0113 20:33:45.909702 1852 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:33:45.909750 kubelet[1852]: I0113 20:33:45.909715 1852 container_manager_linux.go:300] "Creating device plugin manager"
Jan 13 20:33:45.910090 kubelet[1852]: I0113 20:33:45.909822 1852 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:33:45.914483 kubelet[1852]: I0113 20:33:45.914419 1852 kubelet.go:408] "Attempting to sync node with API server"
Jan 13 20:33:45.914483 kubelet[1852]: I0113 20:33:45.914450 1852 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:33:45.914483 kubelet[1852]: I0113 20:33:45.914483 1852 kubelet.go:314] "Adding apiserver pod source"
Jan 13 20:33:45.914751 kubelet[1852]: I0113 20:33:45.914504 1852 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:33:45.924623 kubelet[1852]: E0113 20:33:45.924230 1852 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:45.924623 kubelet[1852]: E0113 20:33:45.924336 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:45.926481 kubelet[1852]: I0113 20:33:45.926435 1852 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:33:45.931195 kubelet[1852]: I0113 20:33:45.931140 1852 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:33:45.931330 kubelet[1852]: W0113 20:33:45.931286 1852 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:33:45.932843 kubelet[1852]: I0113 20:33:45.932696 1852 server.go:1269] "Started kubelet"
Jan 13 20:33:45.936179 kubelet[1852]: I0113 20:33:45.935835 1852 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:33:45.942775 kubelet[1852]: I0113 20:33:45.942740 1852 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:33:45.944092 kubelet[1852]: I0113 20:33:45.943872 1852 server.go:460] "Adding debug handlers to kubelet server"
Jan 13 20:33:45.944941 kubelet[1852]: I0113 20:33:45.944894 1852 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:33:45.945200 kubelet[1852]: I0113 20:33:45.945185 1852 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:33:45.945937 kubelet[1852]: I0113 20:33:45.945455 1852 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 13 20:33:45.946553 kubelet[1852]: W0113 20:33:45.946535 1852 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.95" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 20:33:45.946688 kubelet[1852]: E0113 20:33:45.946671 1852 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.24.4.95\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 13 20:33:45.947252 kubelet[1852]: I0113 20:33:45.947102 1852 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 13 20:33:45.948360 kubelet[1852]: E0113 20:33:45.947830 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.95\" not found"
Jan 13 20:33:45.948541 kubelet[1852]: I0113 20:33:45.948500 1852 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 13 20:33:45.948812 kubelet[1852]: I0113 20:33:45.948775 1852 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 20:33:45.951186 kubelet[1852]: E0113 20:33:45.946775 1852 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.95.181a5ac911762648 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.95,UID:172.24.4.95,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.24.4.95,},FirstTimestamp:2025-01-13 20:33:45.932629576 +0000 UTC m=+1.115200224,LastTimestamp:2025-01-13 20:33:45.932629576 +0000 UTC m=+1.115200224,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.95,}"
Jan 13 20:33:45.951186 kubelet[1852]: W0113 20:33:45.950121 1852 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 20:33:45.951186 kubelet[1852]: E0113 20:33:45.950144 1852 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 13 20:33:45.952868 kubelet[1852]: I0113 20:33:45.952849 1852 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:33:45.953022 kubelet[1852]: I0113 20:33:45.953003 1852 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:33:45.954817 kubelet[1852]: I0113 20:33:45.954801 1852 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:33:45.973062 kubelet[1852]: W0113 20:33:45.972114 1852 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 13 20:33:45.973062 kubelet[1852]: E0113 20:33:45.972223 1852 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Jan 13 20:33:45.974233 kubelet[1852]: E0113 20:33:45.973824 1852 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.95\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 13 20:33:45.974609 kubelet[1852]: E0113 20:33:45.974488 1852 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:33:46.002024 kubelet[1852]: I0113 20:33:46.001710 1852 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:33:46.002024 kubelet[1852]: I0113 20:33:46.001733 1852 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:33:46.002024 kubelet[1852]: I0113 20:33:46.001751 1852 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:33:46.030852 kubelet[1852]: I0113 20:33:46.030730 1852 policy_none.go:49] "None policy: Start"
Jan 13 20:33:46.032275 kubelet[1852]: I0113 20:33:46.032248 1852 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:33:46.032559 kubelet[1852]: I0113 20:33:46.032402 1852 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:33:46.048219 kubelet[1852]: E0113 20:33:46.048144 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.95\" not found"
Jan 13 20:33:46.049182 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 20:33:46.068321 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 20:33:46.081362 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 20:33:46.090632 kubelet[1852]: I0113 20:33:46.090522 1852 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:33:46.095749 kubelet[1852]: I0113 20:33:46.094994 1852 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 13 20:33:46.095749 kubelet[1852]: I0113 20:33:46.095047 1852 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 20:33:46.095749 kubelet[1852]: I0113 20:33:46.095546 1852 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:33:46.097707 kubelet[1852]: E0113 20:33:46.097647 1852 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.95\" not found"
Jan 13 20:33:46.103858 kubelet[1852]: I0113 20:33:46.103813 1852 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:33:46.106826 kubelet[1852]: I0113 20:33:46.106803 1852 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:33:46.106997 kubelet[1852]: I0113 20:33:46.106981 1852 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:33:46.107141 kubelet[1852]: I0113 20:33:46.107126 1852 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 13 20:33:46.107321 kubelet[1852]: E0113 20:33:46.107284 1852 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 13 20:33:46.181443 kubelet[1852]: E0113 20:33:46.181343 1852 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.95\" not found" node="172.24.4.95"
Jan 13 20:33:46.197694 kubelet[1852]: I0113 20:33:46.197098 1852 kubelet_node_status.go:72] "Attempting to register node" node="172.24.4.95"
Jan 13 20:33:46.228126 kubelet[1852]: I0113 20:33:46.227988 1852 kubelet_node_status.go:75] "Successfully registered node" node="172.24.4.95"
Jan 13 20:33:46.228126 kubelet[1852]: E0113 20:33:46.228055 1852 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.24.4.95\": node \"172.24.4.95\" not found"
Jan 13 20:33:46.264132 kubelet[1852]: E0113 20:33:46.264072 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.95\" not found"
Jan 13 20:33:46.365214 kubelet[1852]: E0113 20:33:46.365014 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.95\" not found"
Jan 13 20:33:46.369873 sudo[1717]: pam_unix(sudo:session): session closed for user root
Jan 13 20:33:46.466305 kubelet[1852]: E0113 20:33:46.466175 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.95\" not found"
Jan 13 20:33:46.566827 kubelet[1852]: E0113 20:33:46.566729 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.95\" not found"
Jan 13 20:33:46.575629 sshd[1716]: Connection closed by 172.24.4.1 port 51994
Jan 13 20:33:46.576876 sshd-session[1714]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:46.585112 systemd[1]: sshd@8-172.24.4.95:22-172.24.4.1:51994.service: Deactivated successfully.
Jan 13 20:33:46.590151 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 20:33:46.590848 systemd[1]: session-11.scope: Consumed 1.190s CPU time, 75.5M memory peak, 0B memory swap peak.
Jan 13 20:33:46.593635 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit.
Jan 13 20:33:46.597119 systemd-logind[1444]: Removed session 11.
Jan 13 20:33:46.668099 kubelet[1852]: E0113 20:33:46.667940 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.95\" not found"
Jan 13 20:33:46.769103 kubelet[1852]: E0113 20:33:46.769008 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.95\" not found"
Jan 13 20:33:46.840745 kubelet[1852]: I0113 20:33:46.840350 1852 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 13 20:33:46.840745 kubelet[1852]: W0113 20:33:46.840679 1852 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 20:33:46.869743 kubelet[1852]: E0113 20:33:46.869644 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.95\" not found"
Jan 13 20:33:46.925178 kubelet[1852]: E0113 20:33:46.925010 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:46.970323 kubelet[1852]: E0113 20:33:46.970255 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.95\" not found"
Jan 13 20:33:47.070584 kubelet[1852]: E0113 20:33:47.070467 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.95\" not found"
Jan 13 20:33:47.171482 kubelet[1852]: E0113 20:33:47.171392 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.95\" not found"
Jan 13 20:33:47.272539 kubelet[1852]: E0113 20:33:47.272320 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.95\" not found"
Jan 13 20:33:47.373105 kubelet[1852]: E0113 20:33:47.373025 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.95\" not found"
Jan 13 20:33:47.474647 kubelet[1852]: I0113 20:33:47.474543 1852 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 13 20:33:47.475412 containerd[1461]: time="2025-01-13T20:33:47.475101270Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 20:33:47.476832 kubelet[1852]: I0113 20:33:47.476570 1852 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 13 20:33:47.926007 kubelet[1852]: I0113 20:33:47.925817 1852 apiserver.go:52] "Watching apiserver"
Jan 13 20:33:47.926587 kubelet[1852]: E0113 20:33:47.926471 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:47.951522 kubelet[1852]: I0113 20:33:47.951067 1852 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 13 20:33:47.956355 systemd[1]: Created slice kubepods-besteffort-pod4d7f0793_04bb_47a5_9aef_9c56bd6ad00a.slice - libcontainer container kubepods-besteffort-pod4d7f0793_04bb_47a5_9aef_9c56bd6ad00a.slice.
Jan 13 20:33:47.962605 kubelet[1852]: I0113 20:33:47.962299 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-host-proc-sys-net\") pod \"cilium-bxxmb\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " pod="kube-system/cilium-bxxmb"
Jan 13 20:33:47.962605 kubelet[1852]: I0113 20:33:47.962406 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-cni-path\") pod \"cilium-bxxmb\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " pod="kube-system/cilium-bxxmb"
Jan 13 20:33:47.962605 kubelet[1852]: I0113 20:33:47.962495 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-etc-cni-netd\") pod \"cilium-bxxmb\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " pod="kube-system/cilium-bxxmb"
Jan 13 20:33:47.963280 kubelet[1852]: I0113 20:33:47.962758 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-xtables-lock\") pod \"cilium-bxxmb\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " pod="kube-system/cilium-bxxmb"
Jan 13 20:33:47.963280 kubelet[1852]: I0113 20:33:47.963192 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-host-proc-sys-kernel\") pod \"cilium-bxxmb\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " pod="kube-system/cilium-bxxmb"
Jan 13 20:33:47.963692 kubelet[1852]: I0113 20:33:47.963243 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0be17581-ef17-4d81-94ed-1be5d323db9d-hubble-tls\") pod \"cilium-bxxmb\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " pod="kube-system/cilium-bxxmb"
Jan 13 20:33:47.964032 kubelet[1852]: I0113 20:33:47.963663 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d7f0793-04bb-47a5-9aef-9c56bd6ad00a-xtables-lock\") pod \"kube-proxy-bh5gk\" (UID: \"4d7f0793-04bb-47a5-9aef-9c56bd6ad00a\") " pod="kube-system/kube-proxy-bh5gk"
Jan 13 20:33:47.964032 kubelet[1852]: I0113 20:33:47.963950 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-cilium-run\") pod \"cilium-bxxmb\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " pod="kube-system/cilium-bxxmb"
Jan 13 20:33:47.964418 kubelet[1852]: I0113 20:33:47.964228 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-bpf-maps\") pod \"cilium-bxxmb\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " pod="kube-system/cilium-bxxmb"
Jan 13 20:33:47.964418 kubelet[1852]: I0113 20:33:47.964354 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-lib-modules\") pod \"cilium-bxxmb\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " pod="kube-system/cilium-bxxmb"
Jan 13 20:33:47.965624 kubelet[1852]: I0113 20:33:47.964690 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv5mq\" (UniqueName: \"kubernetes.io/projected/0be17581-ef17-4d81-94ed-1be5d323db9d-kube-api-access-tv5mq\") pod \"cilium-bxxmb\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " pod="kube-system/cilium-bxxmb"
Jan 13 20:33:47.965624 kubelet[1852]: I0113 20:33:47.964801 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4d7f0793-04bb-47a5-9aef-9c56bd6ad00a-kube-proxy\") pod \"kube-proxy-bh5gk\" (UID: \"4d7f0793-04bb-47a5-9aef-9c56bd6ad00a\") " pod="kube-system/kube-proxy-bh5gk"
Jan 13 20:33:47.965624 kubelet[1852]: I0113 20:33:47.964873 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4469\" (UniqueName: \"kubernetes.io/projected/4d7f0793-04bb-47a5-9aef-9c56bd6ad00a-kube-api-access-z4469\") pod \"kube-proxy-bh5gk\" (UID: \"4d7f0793-04bb-47a5-9aef-9c56bd6ad00a\") " pod="kube-system/kube-proxy-bh5gk"
Jan 13 20:33:47.965624 kubelet[1852]: I0113 20:33:47.964916 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-hostproc\") pod \"cilium-bxxmb\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " pod="kube-system/cilium-bxxmb"
Jan 13 20:33:47.965624 kubelet[1852]: I0113 20:33:47.964971 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-cilium-cgroup\") pod \"cilium-bxxmb\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " pod="kube-system/cilium-bxxmb"
Jan 13 20:33:47.966106 kubelet[1852]: I0113 20:33:47.965040 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0be17581-ef17-4d81-94ed-1be5d323db9d-cilium-config-path\") pod \"cilium-bxxmb\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " pod="kube-system/cilium-bxxmb"
Jan 13 20:33:47.966106 kubelet[1852]: I0113 20:33:47.965110 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0be17581-ef17-4d81-94ed-1be5d323db9d-clustermesh-secrets\") pod \"cilium-bxxmb\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " pod="kube-system/cilium-bxxmb"
Jan 13 20:33:47.966106 kubelet[1852]: I0113 20:33:47.965151 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d7f0793-04bb-47a5-9aef-9c56bd6ad00a-lib-modules\") pod \"kube-proxy-bh5gk\" (UID: \"4d7f0793-04bb-47a5-9aef-9c56bd6ad00a\") " pod="kube-system/kube-proxy-bh5gk"
Jan 13 20:33:47.983315 systemd[1]: Created slice kubepods-burstable-pod0be17581_ef17_4d81_94ed_1be5d323db9d.slice - libcontainer container kubepods-burstable-pod0be17581_ef17_4d81_94ed_1be5d323db9d.slice.
Jan 13 20:33:48.283009 containerd[1461]: time="2025-01-13T20:33:48.281491311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bh5gk,Uid:4d7f0793-04bb-47a5-9aef-9c56bd6ad00a,Namespace:kube-system,Attempt:0,}"
Jan 13 20:33:48.295651 containerd[1461]: time="2025-01-13T20:33:48.295495686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxxmb,Uid:0be17581-ef17-4d81-94ed-1be5d323db9d,Namespace:kube-system,Attempt:0,}"
Jan 13 20:33:48.927326 kubelet[1852]: E0113 20:33:48.927187 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:49.022088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2614628263.mount: Deactivated successfully.
Jan 13 20:33:49.034077 containerd[1461]: time="2025-01-13T20:33:49.033854521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:33:49.036626 containerd[1461]: time="2025-01-13T20:33:49.036583451Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 13 20:33:49.038946 containerd[1461]: time="2025-01-13T20:33:49.038896523Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:33:49.044121 containerd[1461]: time="2025-01-13T20:33:49.044083838Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:33:49.045077 containerd[1461]: time="2025-01-13T20:33:49.044837652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:33:49.046878 containerd[1461]: time="2025-01-13T20:33:49.046701028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:33:49.048720 containerd[1461]: time="2025-01-13T20:33:49.048432331Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 752.690198ms"
Jan 13 20:33:49.052248 containerd[1461]: time="2025-01-13T20:33:49.052169433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 770.393455ms"
Jan 13 20:33:49.265698 containerd[1461]: time="2025-01-13T20:33:49.265317968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:33:49.265698 containerd[1461]: time="2025-01-13T20:33:49.265386203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:33:49.265698 containerd[1461]: time="2025-01-13T20:33:49.265407333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:33:49.266162 containerd[1461]: time="2025-01-13T20:33:49.265493595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:33:49.284450 containerd[1461]: time="2025-01-13T20:33:49.283927525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:33:49.284450 containerd[1461]: time="2025-01-13T20:33:49.283991444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:33:49.284450 containerd[1461]: time="2025-01-13T20:33:49.284010164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:33:49.284450 containerd[1461]: time="2025-01-13T20:33:49.284099795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:33:49.382764 systemd[1]: Started cri-containerd-1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3.scope - libcontainer container 1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3.
Jan 13 20:33:49.388356 systemd[1]: Started cri-containerd-f8a6316698c8303920100f350f4453fc2458bdc9c40a7ef450574103d4e05249.scope - libcontainer container f8a6316698c8303920100f350f4453fc2458bdc9c40a7ef450574103d4e05249.
Jan 13 20:33:49.421403 containerd[1461]: time="2025-01-13T20:33:49.421356986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxxmb,Uid:0be17581-ef17-4d81-94ed-1be5d323db9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3\""
Jan 13 20:33:49.428922 containerd[1461]: time="2025-01-13T20:33:49.428606639Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 13 20:33:49.428922 containerd[1461]: time="2025-01-13T20:33:49.428883799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bh5gk,Uid:4d7f0793-04bb-47a5-9aef-9c56bd6ad00a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8a6316698c8303920100f350f4453fc2458bdc9c40a7ef450574103d4e05249\""
Jan 13 20:33:49.928412 kubelet[1852]: E0113 20:33:49.928201 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:50.082339 systemd[1]: run-containerd-runc-k8s.io-1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3-runc.Tqp68y.mount: Deactivated successfully.
Jan 13 20:33:50.928955 kubelet[1852]: E0113 20:33:50.928880 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:51.930573 kubelet[1852]: E0113 20:33:51.929999 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:52.930588 kubelet[1852]: E0113 20:33:52.930511 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:53.931062 kubelet[1852]: E0113 20:33:53.930965 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:54.800600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569863845.mount: Deactivated successfully.
Jan 13 20:33:54.931170 kubelet[1852]: E0113 20:33:54.931108 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:55.931482 kubelet[1852]: E0113 20:33:55.931429 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:56.510116 update_engine[1446]: I20250113 20:33:56.510051 1446 update_attempter.cc:509] Updating boot flags...
Jan 13 20:33:56.551732 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2012)
Jan 13 20:33:56.636667 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2010)
Jan 13 20:33:56.931908 kubelet[1852]: E0113 20:33:56.931857 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:57.176618 containerd[1461]: time="2025-01-13T20:33:57.176503748Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:33:57.178701 containerd[1461]: time="2025-01-13T20:33:57.178339163Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735327"
Jan 13 20:33:57.180226 containerd[1461]: time="2025-01-13T20:33:57.180128677Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:33:57.183346 containerd[1461]: time="2025-01-13T20:33:57.183215313Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.754562211s"
Jan 13 20:33:57.183346 containerd[1461]: time="2025-01-13T20:33:57.183258683Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 13 20:33:57.186923 containerd[1461]: time="2025-01-13T20:33:57.186628387Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Jan 13 20:33:57.187157 containerd[1461]: time="2025-01-13T20:33:57.187095398Z" level=info msg="CreateContainer within sandbox \"1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:33:57.216659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024248959.mount: Deactivated successfully.
Jan 13 20:33:57.222069 containerd[1461]: time="2025-01-13T20:33:57.222035899Z" level=info msg="CreateContainer within sandbox \"1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670\""
Jan 13 20:33:57.223680 containerd[1461]: time="2025-01-13T20:33:57.223367174Z" level=info msg="StartContainer for \"46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670\""
Jan 13 20:33:57.265721 systemd[1]: Started cri-containerd-46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670.scope - libcontainer container 46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670.
Jan 13 20:33:57.296505 containerd[1461]: time="2025-01-13T20:33:57.296370276Z" level=info msg="StartContainer for \"46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670\" returns successfully"
Jan 13 20:33:57.308201 systemd[1]: cri-containerd-46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670.scope: Deactivated successfully.
Jan 13 20:33:57.932630 kubelet[1852]: E0113 20:33:57.932559 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:58.209758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670-rootfs.mount: Deactivated successfully.
Jan 13 20:33:58.933692 kubelet[1852]: E0113 20:33:58.933581 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:33:58.964885 containerd[1461]: time="2025-01-13T20:33:58.964766064Z" level=info msg="shim disconnected" id=46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670 namespace=k8s.io
Jan 13 20:33:58.966445 containerd[1461]: time="2025-01-13T20:33:58.965403631Z" level=warning msg="cleaning up after shim disconnected" id=46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670 namespace=k8s.io
Jan 13 20:33:58.966445 containerd[1461]: time="2025-01-13T20:33:58.965522853Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:33:59.172358 containerd[1461]: time="2025-01-13T20:33:59.171352686Z" level=info msg="CreateContainer within sandbox \"1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:33:59.205080 containerd[1461]: time="2025-01-13T20:33:59.204917906Z" level=info msg="CreateContainer within sandbox \"1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb\""
Jan 13 20:33:59.206148 containerd[1461]: time="2025-01-13T20:33:59.206101254Z" level=info msg="StartContainer for \"8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb\""
Jan 13 20:33:59.281743 systemd[1]: Started cri-containerd-8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb.scope - libcontainer container 8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb.
Jan 13 20:33:59.317426 containerd[1461]: time="2025-01-13T20:33:59.317073321Z" level=info msg="StartContainer for \"8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb\" returns successfully"
Jan 13 20:33:59.329008 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:33:59.329822 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:33:59.329894 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:33:59.337986 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:33:59.338288 systemd[1]: cri-containerd-8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb.scope: Deactivated successfully.
Jan 13 20:33:59.371234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:33:59.396650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb-rootfs.mount: Deactivated successfully.
Jan 13 20:33:59.428349 containerd[1461]: time="2025-01-13T20:33:59.428286393Z" level=info msg="shim disconnected" id=8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb namespace=k8s.io
Jan 13 20:33:59.428349 containerd[1461]: time="2025-01-13T20:33:59.428339967Z" level=warning msg="cleaning up after shim disconnected" id=8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb namespace=k8s.io
Jan 13 20:33:59.428349 containerd[1461]: time="2025-01-13T20:33:59.428350882Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:33:59.933977 kubelet[1852]: E0113 20:33:59.933918 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:34:00.114703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1877672555.mount: Deactivated successfully.
Jan 13 20:34:00.174712 containerd[1461]: time="2025-01-13T20:34:00.174653782Z" level=info msg="CreateContainer within sandbox \"1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:34:00.214934 containerd[1461]: time="2025-01-13T20:34:00.213857180Z" level=info msg="CreateContainer within sandbox \"1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b\""
Jan 13 20:34:00.216298 containerd[1461]: time="2025-01-13T20:34:00.216248421Z" level=info msg="StartContainer for \"45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b\""
Jan 13 20:34:00.286834 systemd[1]: Started cri-containerd-45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b.scope - libcontainer container 45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b.
Jan 13 20:34:00.356375 containerd[1461]: time="2025-01-13T20:34:00.356010686Z" level=info msg="StartContainer for \"45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b\" returns successfully"
Jan 13 20:34:00.356678 systemd[1]: cri-containerd-45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b.scope: Deactivated successfully.
Jan 13 20:34:00.398233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b-rootfs.mount: Deactivated successfully.
Jan 13 20:34:00.620564 containerd[1461]: time="2025-01-13T20:34:00.620450058Z" level=info msg="shim disconnected" id=45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b namespace=k8s.io
Jan 13 20:34:00.620564 containerd[1461]: time="2025-01-13T20:34:00.620502904Z" level=warning msg="cleaning up after shim disconnected" id=45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b namespace=k8s.io
Jan 13 20:34:00.620564 containerd[1461]: time="2025-01-13T20:34:00.620513047Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:34:00.935260 kubelet[1852]: E0113 20:34:00.934601 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:34:00.962274 containerd[1461]: time="2025-01-13T20:34:00.962213853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:00.963510 containerd[1461]: time="2025-01-13T20:34:00.963344361Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230251"
Jan 13 20:34:00.964670 containerd[1461]: time="2025-01-13T20:34:00.964603475Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:00.967080 containerd[1461]: time="2025-01-13T20:34:00.967038768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:00.968024 containerd[1461]: time="2025-01-13T20:34:00.967712162Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 3.78104969s"
Jan 13 20:34:00.968024 containerd[1461]: time="2025-01-13T20:34:00.967749281Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Jan 13 20:34:00.970043 containerd[1461]: time="2025-01-13T20:34:00.969923501Z" level=info msg="CreateContainer within sandbox \"f8a6316698c8303920100f350f4453fc2458bdc9c40a7ef450574103d4e05249\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 20:34:00.990600 containerd[1461]: time="2025-01-13T20:34:00.990561243Z" level=info msg="CreateContainer within sandbox \"f8a6316698c8303920100f350f4453fc2458bdc9c40a7ef450574103d4e05249\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"151f4bf664153ff5f645ac96b49f6e545c736676b02de1a0b90d16a491ddd7b1\""
Jan 13 20:34:00.993046 containerd[1461]: time="2025-01-13T20:34:00.991428112Z" level=info msg="StartContainer for \"151f4bf664153ff5f645ac96b49f6e545c736676b02de1a0b90d16a491ddd7b1\""
Jan 13 20:34:01.022701 systemd[1]: Started cri-containerd-151f4bf664153ff5f645ac96b49f6e545c736676b02de1a0b90d16a491ddd7b1.scope - libcontainer container 151f4bf664153ff5f645ac96b49f6e545c736676b02de1a0b90d16a491ddd7b1.
Jan 13 20:34:01.056812 containerd[1461]: time="2025-01-13T20:34:01.056765636Z" level=info msg="StartContainer for \"151f4bf664153ff5f645ac96b49f6e545c736676b02de1a0b90d16a491ddd7b1\" returns successfully"
Jan 13 20:34:01.185146 containerd[1461]: time="2025-01-13T20:34:01.185083410Z" level=info msg="CreateContainer within sandbox \"1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:34:01.219764 containerd[1461]: time="2025-01-13T20:34:01.218987700Z" level=info msg="CreateContainer within sandbox \"1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb\""
Jan 13 20:34:01.221239 containerd[1461]: time="2025-01-13T20:34:01.220338777Z" level=info msg="StartContainer for \"090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb\""
Jan 13 20:34:01.240096 kubelet[1852]: I0113 20:34:01.234578 1852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bh5gk" podStartSLOduration=3.696379516 podStartE2EDuration="15.23451726s" podCreationTimestamp="2025-01-13 20:33:46 +0000 UTC" firstStartedPulling="2025-01-13 20:33:49.4303822 +0000 UTC m=+4.612952816" lastFinishedPulling="2025-01-13 20:34:00.968519954 +0000 UTC m=+16.151090560" observedRunningTime="2025-01-13 20:34:01.191135727 +0000 UTC m=+16.373706333" watchObservedRunningTime="2025-01-13 20:34:01.23451726 +0000 UTC m=+16.417087885"
Jan 13 20:34:01.240431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216276006.mount: Deactivated successfully.
Jan 13 20:34:01.273723 systemd[1]: Started cri-containerd-090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb.scope - libcontainer container 090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb.
Jan 13 20:34:01.298593 systemd[1]: cri-containerd-090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb.scope: Deactivated successfully. Jan 13 20:34:01.301942 containerd[1461]: time="2025-01-13T20:34:01.301915745Z" level=info msg="StartContainer for \"090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb\" returns successfully" Jan 13 20:34:01.318770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb-rootfs.mount: Deactivated successfully. Jan 13 20:34:01.531349 containerd[1461]: time="2025-01-13T20:34:01.529745093Z" level=info msg="shim disconnected" id=090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb namespace=k8s.io Jan 13 20:34:01.531349 containerd[1461]: time="2025-01-13T20:34:01.529878578Z" level=warning msg="cleaning up after shim disconnected" id=090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb namespace=k8s.io Jan 13 20:34:01.531349 containerd[1461]: time="2025-01-13T20:34:01.529912700Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:34:01.935718 kubelet[1852]: E0113 20:34:01.935616 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:02.196286 containerd[1461]: time="2025-01-13T20:34:02.195846848Z" level=info msg="CreateContainer within sandbox \"1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:34:02.237591 containerd[1461]: time="2025-01-13T20:34:02.237474197Z" level=info msg="CreateContainer within sandbox \"1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7\"" Jan 13 20:34:02.239078 containerd[1461]: time="2025-01-13T20:34:02.238940456Z" level=info msg="StartContainer for 
\"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7\"" Jan 13 20:34:02.295729 systemd[1]: Started cri-containerd-2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7.scope - libcontainer container 2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7. Jan 13 20:34:02.329245 containerd[1461]: time="2025-01-13T20:34:02.329193574Z" level=info msg="StartContainer for \"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7\" returns successfully" Jan 13 20:34:02.500693 kubelet[1852]: I0113 20:34:02.499927 1852 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:34:02.768583 kernel: Initializing XFRM netlink socket Jan 13 20:34:02.936781 kubelet[1852]: E0113 20:34:02.936677 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:03.246593 kubelet[1852]: I0113 20:34:03.246075 1852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bxxmb" podStartSLOduration=9.487082584 podStartE2EDuration="17.246044406s" podCreationTimestamp="2025-01-13 20:33:46 +0000 UTC" firstStartedPulling="2025-01-13 20:33:49.426321453 +0000 UTC m=+4.608892059" lastFinishedPulling="2025-01-13 20:33:57.185283286 +0000 UTC m=+12.367853881" observedRunningTime="2025-01-13 20:34:03.241072578 +0000 UTC m=+18.423643293" watchObservedRunningTime="2025-01-13 20:34:03.246044406 +0000 UTC m=+18.428615001" Jan 13 20:34:03.937742 kubelet[1852]: E0113 20:34:03.937621 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:04.502006 systemd-networkd[1380]: cilium_host: Link UP Jan 13 20:34:04.502630 systemd-networkd[1380]: cilium_net: Link UP Jan 13 20:34:04.504701 systemd-networkd[1380]: cilium_net: Gained carrier Jan 13 20:34:04.505239 systemd-networkd[1380]: cilium_host: Gained carrier Jan 13 20:34:04.646586 systemd-networkd[1380]: 
cilium_vxlan: Link UP Jan 13 20:34:04.646594 systemd-networkd[1380]: cilium_vxlan: Gained carrier Jan 13 20:34:04.938381 kubelet[1852]: E0113 20:34:04.938278 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:04.971704 kernel: NET: Registered PF_ALG protocol family Jan 13 20:34:05.075892 systemd-networkd[1380]: cilium_host: Gained IPv6LL Jan 13 20:34:05.139801 systemd-networkd[1380]: cilium_net: Gained IPv6LL Jan 13 20:34:05.674907 systemd[1]: Created slice kubepods-besteffort-pod4b8366cb_72c2_4662_9701_15f8f460c66b.slice - libcontainer container kubepods-besteffort-pod4b8366cb_72c2_4662_9701_15f8f460c66b.slice. Jan 13 20:34:05.689463 kubelet[1852]: I0113 20:34:05.689371 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2slw\" (UniqueName: \"kubernetes.io/projected/4b8366cb-72c2-4662-9701-15f8f460c66b-kube-api-access-m2slw\") pod \"nginx-deployment-8587fbcb89-n44x4\" (UID: \"4b8366cb-72c2-4662-9701-15f8f460c66b\") " pod="default/nginx-deployment-8587fbcb89-n44x4" Jan 13 20:34:05.867486 systemd-networkd[1380]: lxc_health: Link UP Jan 13 20:34:05.876184 systemd-networkd[1380]: lxc_health: Gained carrier Jan 13 20:34:05.915646 kubelet[1852]: E0113 20:34:05.915607 1852 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:05.938606 kubelet[1852]: E0113 20:34:05.938448 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:05.983882 containerd[1461]: time="2025-01-13T20:34:05.982964803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-n44x4,Uid:4b8366cb-72c2-4662-9701-15f8f460c66b,Namespace:default,Attempt:0,}" Jan 13 20:34:06.047231 systemd-networkd[1380]: lxc859d7be3b0e7: Link UP Jan 13 20:34:06.054611 kernel: eth0: renamed from 
tmp7094b Jan 13 20:34:06.060927 systemd-networkd[1380]: lxc859d7be3b0e7: Gained carrier Jan 13 20:34:06.355791 systemd-networkd[1380]: cilium_vxlan: Gained IPv6LL Jan 13 20:34:06.939268 kubelet[1852]: E0113 20:34:06.939200 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:07.252312 systemd-networkd[1380]: lxc_health: Gained IPv6LL Jan 13 20:34:07.940246 kubelet[1852]: E0113 20:34:07.940170 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:08.083906 systemd-networkd[1380]: lxc859d7be3b0e7: Gained IPv6LL Jan 13 20:34:08.940423 kubelet[1852]: E0113 20:34:08.940350 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:09.941366 kubelet[1852]: E0113 20:34:09.941225 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:10.445789 containerd[1461]: time="2025-01-13T20:34:10.445317303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:10.445789 containerd[1461]: time="2025-01-13T20:34:10.445383051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:10.445789 containerd[1461]: time="2025-01-13T20:34:10.445402409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:10.445789 containerd[1461]: time="2025-01-13T20:34:10.445483975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:10.478782 systemd[1]: Started cri-containerd-7094b0b920c6cdd00e36fb6d5af982302dd8c8d5ace8ac1b7c105802b352b2e0.scope - libcontainer container 7094b0b920c6cdd00e36fb6d5af982302dd8c8d5ace8ac1b7c105802b352b2e0. Jan 13 20:34:10.519442 containerd[1461]: time="2025-01-13T20:34:10.519409814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-n44x4,Uid:4b8366cb-72c2-4662-9701-15f8f460c66b,Namespace:default,Attempt:0,} returns sandbox id \"7094b0b920c6cdd00e36fb6d5af982302dd8c8d5ace8ac1b7c105802b352b2e0\"" Jan 13 20:34:10.521605 containerd[1461]: time="2025-01-13T20:34:10.521246636Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:34:10.941494 kubelet[1852]: E0113 20:34:10.941374 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:11.942409 kubelet[1852]: E0113 20:34:11.942331 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:12.943533 kubelet[1852]: E0113 20:34:12.943326 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:13.944051 kubelet[1852]: E0113 20:34:13.943977 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:14.802723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1040361074.mount: Deactivated successfully. 
Jan 13 20:34:14.945503 kubelet[1852]: E0113 20:34:14.945438 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:15.946635 kubelet[1852]: E0113 20:34:15.946578 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:16.068221 containerd[1461]: time="2025-01-13T20:34:16.068110066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:16.069490 containerd[1461]: time="2025-01-13T20:34:16.069422841Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 13 20:34:16.070618 containerd[1461]: time="2025-01-13T20:34:16.070582597Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:16.074335 containerd[1461]: time="2025-01-13T20:34:16.074298745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:16.075727 containerd[1461]: time="2025-01-13T20:34:16.075677770Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 5.55440189s" Jan 13 20:34:16.075878 containerd[1461]: time="2025-01-13T20:34:16.075805510Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 20:34:16.079203 containerd[1461]: 
time="2025-01-13T20:34:16.079043902Z" level=info msg="CreateContainer within sandbox \"7094b0b920c6cdd00e36fb6d5af982302dd8c8d5ace8ac1b7c105802b352b2e0\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 20:34:16.098654 containerd[1461]: time="2025-01-13T20:34:16.098598817Z" level=info msg="CreateContainer within sandbox \"7094b0b920c6cdd00e36fb6d5af982302dd8c8d5ace8ac1b7c105802b352b2e0\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"0332342cf664115db44f39036bb666f2178cddd9a9cb9aace687ac20ecd952d4\"" Jan 13 20:34:16.099427 containerd[1461]: time="2025-01-13T20:34:16.099256744Z" level=info msg="StartContainer for \"0332342cf664115db44f39036bb666f2178cddd9a9cb9aace687ac20ecd952d4\"" Jan 13 20:34:16.136697 systemd[1]: Started cri-containerd-0332342cf664115db44f39036bb666f2178cddd9a9cb9aace687ac20ecd952d4.scope - libcontainer container 0332342cf664115db44f39036bb666f2178cddd9a9cb9aace687ac20ecd952d4. Jan 13 20:34:16.189760 containerd[1461]: time="2025-01-13T20:34:16.189708767Z" level=info msg="StartContainer for \"0332342cf664115db44f39036bb666f2178cddd9a9cb9aace687ac20ecd952d4\" returns successfully" Jan 13 20:34:16.259703 kubelet[1852]: I0113 20:34:16.259316 1852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-n44x4" podStartSLOduration=5.702992291 podStartE2EDuration="11.259285269s" podCreationTimestamp="2025-01-13 20:34:05 +0000 UTC" firstStartedPulling="2025-01-13 20:34:10.520928668 +0000 UTC m=+25.703499263" lastFinishedPulling="2025-01-13 20:34:16.077221636 +0000 UTC m=+31.259792241" observedRunningTime="2025-01-13 20:34:16.258328356 +0000 UTC m=+31.440898951" watchObservedRunningTime="2025-01-13 20:34:16.259285269 +0000 UTC m=+31.441855924" Jan 13 20:34:16.947317 kubelet[1852]: E0113 20:34:16.947232 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:17.947902 kubelet[1852]: E0113 
20:34:17.947813 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:18.948200 kubelet[1852]: E0113 20:34:18.948045 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:19.949155 kubelet[1852]: E0113 20:34:19.949083 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:20.950286 kubelet[1852]: E0113 20:34:20.950201 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:21.951111 kubelet[1852]: E0113 20:34:21.951051 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:22.951310 kubelet[1852]: E0113 20:34:22.951228 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:23.952350 kubelet[1852]: E0113 20:34:23.952266 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:24.953710 kubelet[1852]: E0113 20:34:24.953593 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:25.915300 kubelet[1852]: E0113 20:34:25.915108 1852 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:25.954630 kubelet[1852]: E0113 20:34:25.954459 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:26.955656 kubelet[1852]: E0113 20:34:26.955488 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:27.956936 kubelet[1852]: E0113 
20:34:27.956783 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:28.837234 systemd[1]: Created slice kubepods-besteffort-pod96f83f2a_e7e1_4aba_ae51_7b03c089b1c0.slice - libcontainer container kubepods-besteffort-pod96f83f2a_e7e1_4aba_ae51_7b03c089b1c0.slice. Jan 13 20:34:28.862487 kubelet[1852]: I0113 20:34:28.862327 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/96f83f2a-e7e1-4aba-ae51-7b03c089b1c0-data\") pod \"nfs-server-provisioner-0\" (UID: \"96f83f2a-e7e1-4aba-ae51-7b03c089b1c0\") " pod="default/nfs-server-provisioner-0" Jan 13 20:34:28.862487 kubelet[1852]: I0113 20:34:28.862427 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hslgh\" (UniqueName: \"kubernetes.io/projected/96f83f2a-e7e1-4aba-ae51-7b03c089b1c0-kube-api-access-hslgh\") pod \"nfs-server-provisioner-0\" (UID: \"96f83f2a-e7e1-4aba-ae51-7b03c089b1c0\") " pod="default/nfs-server-provisioner-0" Jan 13 20:34:28.957814 kubelet[1852]: E0113 20:34:28.957516 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:29.145588 containerd[1461]: time="2025-01-13T20:34:29.145012538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:96f83f2a-e7e1-4aba-ae51-7b03c089b1c0,Namespace:default,Attempt:0,}" Jan 13 20:34:29.244313 systemd-networkd[1380]: lxcdc0db49a38fd: Link UP Jan 13 20:34:29.263621 kernel: eth0: renamed from tmpbbec5 Jan 13 20:34:29.279100 systemd-networkd[1380]: lxcdc0db49a38fd: Gained carrier Jan 13 20:34:29.569426 containerd[1461]: time="2025-01-13T20:34:29.563418369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:29.569426 containerd[1461]: time="2025-01-13T20:34:29.563507052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:29.569426 containerd[1461]: time="2025-01-13T20:34:29.563528381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:29.569426 containerd[1461]: time="2025-01-13T20:34:29.563655393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:29.599733 systemd[1]: Started cri-containerd-bbec5dda553d25c461b6a31dc4bf8cb76f6722f137eeec996d3eab76f1dbeabd.scope - libcontainer container bbec5dda553d25c461b6a31dc4bf8cb76f6722f137eeec996d3eab76f1dbeabd. Jan 13 20:34:29.641551 containerd[1461]: time="2025-01-13T20:34:29.641488863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:96f83f2a-e7e1-4aba-ae51-7b03c089b1c0,Namespace:default,Attempt:0,} returns sandbox id \"bbec5dda553d25c461b6a31dc4bf8cb76f6722f137eeec996d3eab76f1dbeabd\"" Jan 13 20:34:29.643631 containerd[1461]: time="2025-01-13T20:34:29.643496619Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 20:34:29.958867 kubelet[1852]: E0113 20:34:29.958638 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:30.959558 kubelet[1852]: E0113 20:34:30.959028 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:31.251688 systemd-networkd[1380]: lxcdc0db49a38fd: Gained IPv6LL Jan 13 20:34:31.960017 kubelet[1852]: E0113 20:34:31.959751 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 13 20:34:32.708343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222689294.mount: Deactivated successfully. Jan 13 20:34:32.960879 kubelet[1852]: E0113 20:34:32.960762 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:33.961569 kubelet[1852]: E0113 20:34:33.961518 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:34.962572 kubelet[1852]: E0113 20:34:34.962486 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:35.151330 containerd[1461]: time="2025-01-13T20:34:35.151252380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:35.152889 containerd[1461]: time="2025-01-13T20:34:35.152583650Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 13 20:34:35.154096 containerd[1461]: time="2025-01-13T20:34:35.154032037Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:35.157726 containerd[1461]: time="2025-01-13T20:34:35.157646837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:35.158948 containerd[1461]: time="2025-01-13T20:34:35.158818207Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.515282539s" Jan 13 20:34:35.158948 containerd[1461]: time="2025-01-13T20:34:35.158850548Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 13 20:34:35.161353 containerd[1461]: time="2025-01-13T20:34:35.161311377Z" level=info msg="CreateContainer within sandbox \"bbec5dda553d25c461b6a31dc4bf8cb76f6722f137eeec996d3eab76f1dbeabd\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 20:34:35.175375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657494234.mount: Deactivated successfully. Jan 13 20:34:35.184999 containerd[1461]: time="2025-01-13T20:34:35.184934728Z" level=info msg="CreateContainer within sandbox \"bbec5dda553d25c461b6a31dc4bf8cb76f6722f137eeec996d3eab76f1dbeabd\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"879c68b82732c695402416b5206388764c38f75f5a34abbdfa67e127ab486d23\"" Jan 13 20:34:35.185743 containerd[1461]: time="2025-01-13T20:34:35.185694805Z" level=info msg="StartContainer for \"879c68b82732c695402416b5206388764c38f75f5a34abbdfa67e127ab486d23\"" Jan 13 20:34:35.219693 systemd[1]: Started cri-containerd-879c68b82732c695402416b5206388764c38f75f5a34abbdfa67e127ab486d23.scope - libcontainer container 879c68b82732c695402416b5206388764c38f75f5a34abbdfa67e127ab486d23. 
Jan 13 20:34:35.251406 containerd[1461]: time="2025-01-13T20:34:35.251347936Z" level=info msg="StartContainer for \"879c68b82732c695402416b5206388764c38f75f5a34abbdfa67e127ab486d23\" returns successfully" Jan 13 20:34:35.334367 kubelet[1852]: I0113 20:34:35.334300 1852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.817673719 podStartE2EDuration="7.334281294s" podCreationTimestamp="2025-01-13 20:34:28 +0000 UTC" firstStartedPulling="2025-01-13 20:34:29.643147496 +0000 UTC m=+44.825718102" lastFinishedPulling="2025-01-13 20:34:35.159755082 +0000 UTC m=+50.342325677" observedRunningTime="2025-01-13 20:34:35.332863695 +0000 UTC m=+50.515434300" watchObservedRunningTime="2025-01-13 20:34:35.334281294 +0000 UTC m=+50.516851889" Jan 13 20:34:35.963528 kubelet[1852]: E0113 20:34:35.963423 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:36.964672 kubelet[1852]: E0113 20:34:36.964495 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:37.965570 kubelet[1852]: E0113 20:34:37.965428 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:38.966311 kubelet[1852]: E0113 20:34:38.966213 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:39.967861 kubelet[1852]: E0113 20:34:39.967382 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:40.968110 kubelet[1852]: E0113 20:34:40.967978 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:41.968979 kubelet[1852]: E0113 20:34:41.968885 1852 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:42.970045 kubelet[1852]: E0113 20:34:42.969928 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:43.970806 kubelet[1852]: E0113 20:34:43.970709 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:44.971873 kubelet[1852]: E0113 20:34:44.971806 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:45.381985 systemd[1]: Created slice kubepods-besteffort-pod8e5ee12c_65fd_4dc7_b923_e3de4aab30ea.slice - libcontainer container kubepods-besteffort-pod8e5ee12c_65fd_4dc7_b923_e3de4aab30ea.slice. Jan 13 20:34:45.481178 kubelet[1852]: I0113 20:34:45.480828 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54fnn\" (UniqueName: \"kubernetes.io/projected/8e5ee12c-65fd-4dc7-b923-e3de4aab30ea-kube-api-access-54fnn\") pod \"test-pod-1\" (UID: \"8e5ee12c-65fd-4dc7-b923-e3de4aab30ea\") " pod="default/test-pod-1" Jan 13 20:34:45.481178 kubelet[1852]: I0113 20:34:45.480923 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-69d389bd-09a2-4c8f-b070-2fe1f7e5c828\" (UniqueName: \"kubernetes.io/nfs/8e5ee12c-65fd-4dc7-b923-e3de4aab30ea-pvc-69d389bd-09a2-4c8f-b070-2fe1f7e5c828\") pod \"test-pod-1\" (UID: \"8e5ee12c-65fd-4dc7-b923-e3de4aab30ea\") " pod="default/test-pod-1" Jan 13 20:34:45.652882 kernel: FS-Cache: Loaded Jan 13 20:34:45.750354 kernel: RPC: Registered named UNIX socket transport module. Jan 13 20:34:45.750604 kernel: RPC: Registered udp transport module. Jan 13 20:34:45.750662 kernel: RPC: Registered tcp transport module. Jan 13 20:34:45.750729 kernel: RPC: Registered tcp-with-tls transport module. 
Jan 13 20:34:45.750772 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 13 20:34:45.916299 kubelet[1852]: E0113 20:34:45.916003 1852 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:45.972090 kubelet[1852]: E0113 20:34:45.971990 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:46.044072 kernel: NFS: Registering the id_resolver key type Jan 13 20:34:46.044283 kernel: Key type id_resolver registered Jan 13 20:34:46.044333 kernel: Key type id_legacy registered Jan 13 20:34:46.140212 nfsidmap[3240]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Jan 13 20:34:46.151958 nfsidmap[3243]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Jan 13 20:34:46.288630 containerd[1461]: time="2025-01-13T20:34:46.288455715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8e5ee12c-65fd-4dc7-b923-e3de4aab30ea,Namespace:default,Attempt:0,}" Jan 13 20:34:46.363302 systemd-networkd[1380]: lxcf32a05c7b42a: Link UP Jan 13 20:34:46.372686 kernel: eth0: renamed from tmp652d5 Jan 13 20:34:46.380495 systemd-networkd[1380]: lxcf32a05c7b42a: Gained carrier Jan 13 20:34:46.623016 containerd[1461]: time="2025-01-13T20:34:46.622815346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:46.623016 containerd[1461]: time="2025-01-13T20:34:46.622876774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:46.623016 containerd[1461]: time="2025-01-13T20:34:46.622895553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:46.623431 containerd[1461]: time="2025-01-13T20:34:46.623335227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:46.644888 systemd[1]: run-containerd-runc-k8s.io-652d58de1bacf545115163067943a9407df0b60877c3670ed18b65126422e0a3-runc.GfYaXG.mount: Deactivated successfully. Jan 13 20:34:46.655701 systemd[1]: Started cri-containerd-652d58de1bacf545115163067943a9407df0b60877c3670ed18b65126422e0a3.scope - libcontainer container 652d58de1bacf545115163067943a9407df0b60877c3670ed18b65126422e0a3. Jan 13 20:34:46.696981 containerd[1461]: time="2025-01-13T20:34:46.696909862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8e5ee12c-65fd-4dc7-b923-e3de4aab30ea,Namespace:default,Attempt:0,} returns sandbox id \"652d58de1bacf545115163067943a9407df0b60877c3670ed18b65126422e0a3\"" Jan 13 20:34:46.699613 containerd[1461]: time="2025-01-13T20:34:46.699449837Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:34:46.973454 kubelet[1852]: E0113 20:34:46.972609 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:47.093272 containerd[1461]: time="2025-01-13T20:34:47.093060996Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:47.095206 containerd[1461]: time="2025-01-13T20:34:47.095059150Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 13 20:34:47.103486 containerd[1461]: time="2025-01-13T20:34:47.103265054Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest 
\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 403.763149ms" Jan 13 20:34:47.103486 containerd[1461]: time="2025-01-13T20:34:47.103340701Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 20:34:47.109188 containerd[1461]: time="2025-01-13T20:34:47.109052894Z" level=info msg="CreateContainer within sandbox \"652d58de1bacf545115163067943a9407df0b60877c3670ed18b65126422e0a3\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 13 20:34:47.152293 containerd[1461]: time="2025-01-13T20:34:47.152165084Z" level=info msg="CreateContainer within sandbox \"652d58de1bacf545115163067943a9407df0b60877c3670ed18b65126422e0a3\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"abe9f6ca64befbcc070f7ce1127d7b4bffd8744f2013e67ea3d82f94a893636b\"" Jan 13 20:34:47.154621 containerd[1461]: time="2025-01-13T20:34:47.154386911Z" level=info msg="StartContainer for \"abe9f6ca64befbcc070f7ce1127d7b4bffd8744f2013e67ea3d82f94a893636b\"" Jan 13 20:34:47.204904 systemd[1]: Started cri-containerd-abe9f6ca64befbcc070f7ce1127d7b4bffd8744f2013e67ea3d82f94a893636b.scope - libcontainer container abe9f6ca64befbcc070f7ce1127d7b4bffd8744f2013e67ea3d82f94a893636b. 
Jan 13 20:34:47.244271 containerd[1461]: time="2025-01-13T20:34:47.244147793Z" level=info msg="StartContainer for \"abe9f6ca64befbcc070f7ce1127d7b4bffd8744f2013e67ea3d82f94a893636b\" returns successfully" Jan 13 20:34:47.372407 kubelet[1852]: I0113 20:34:47.372299 1852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.966422619 podStartE2EDuration="17.372263361s" podCreationTimestamp="2025-01-13 20:34:30 +0000 UTC" firstStartedPulling="2025-01-13 20:34:46.698675316 +0000 UTC m=+61.881245921" lastFinishedPulling="2025-01-13 20:34:47.104516018 +0000 UTC m=+62.287086663" observedRunningTime="2025-01-13 20:34:47.371457489 +0000 UTC m=+62.554028135" watchObservedRunningTime="2025-01-13 20:34:47.372263361 +0000 UTC m=+62.554834006" Jan 13 20:34:47.973329 kubelet[1852]: E0113 20:34:47.973221 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:48.278024 systemd-networkd[1380]: lxcf32a05c7b42a: Gained IPv6LL Jan 13 20:34:48.973980 kubelet[1852]: E0113 20:34:48.973854 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:49.975066 kubelet[1852]: E0113 20:34:49.974888 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:50.951500 containerd[1461]: time="2025-01-13T20:34:50.951391249Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:34:50.965050 containerd[1461]: time="2025-01-13T20:34:50.964947402Z" level=info msg="StopContainer for \"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7\" with timeout 2 (s)" Jan 13 20:34:50.965805 
containerd[1461]: time="2025-01-13T20:34:50.965589031Z" level=info msg="Stop container \"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7\" with signal terminated" Jan 13 20:34:50.975234 kubelet[1852]: E0113 20:34:50.975139 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:50.981684 systemd-networkd[1380]: lxc_health: Link DOWN Jan 13 20:34:50.981699 systemd-networkd[1380]: lxc_health: Lost carrier Jan 13 20:34:51.005982 systemd[1]: cri-containerd-2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7.scope: Deactivated successfully. Jan 13 20:34:51.006308 systemd[1]: cri-containerd-2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7.scope: Consumed 8.493s CPU time. Jan 13 20:34:51.035235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7-rootfs.mount: Deactivated successfully. Jan 13 20:34:51.167468 kubelet[1852]: E0113 20:34:51.167324 1852 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:34:51.570570 containerd[1461]: time="2025-01-13T20:34:51.570378326Z" level=info msg="shim disconnected" id=2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7 namespace=k8s.io Jan 13 20:34:51.570570 containerd[1461]: time="2025-01-13T20:34:51.570484333Z" level=warning msg="cleaning up after shim disconnected" id=2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7 namespace=k8s.io Jan 13 20:34:51.570570 containerd[1461]: time="2025-01-13T20:34:51.570505517Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:34:51.605041 containerd[1461]: time="2025-01-13T20:34:51.604792065Z" level=info msg="StopContainer for \"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7\" returns successfully" Jan 
13 20:34:51.606385 containerd[1461]: time="2025-01-13T20:34:51.605948646Z" level=info msg="StopPodSandbox for \"1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3\"" Jan 13 20:34:51.606385 containerd[1461]: time="2025-01-13T20:34:51.606017006Z" level=info msg="Container to stop \"090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:34:51.606385 containerd[1461]: time="2025-01-13T20:34:51.606096569Z" level=info msg="Container to stop \"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:34:51.606385 containerd[1461]: time="2025-01-13T20:34:51.606119796Z" level=info msg="Container to stop \"8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:34:51.606385 containerd[1461]: time="2025-01-13T20:34:51.606142313Z" level=info msg="Container to stop \"45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:34:51.606385 containerd[1461]: time="2025-01-13T20:34:51.606167284Z" level=info msg="Container to stop \"46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:34:51.615036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3-shm.mount: Deactivated successfully. Jan 13 20:34:51.625396 systemd[1]: cri-containerd-1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3.scope: Deactivated successfully. Jan 13 20:34:51.675108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3-rootfs.mount: Deactivated successfully. 
Jan 13 20:34:51.684242 containerd[1461]: time="2025-01-13T20:34:51.684016858Z" level=info msg="shim disconnected" id=1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3 namespace=k8s.io Jan 13 20:34:51.684242 containerd[1461]: time="2025-01-13T20:34:51.684085178Z" level=warning msg="cleaning up after shim disconnected" id=1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3 namespace=k8s.io Jan 13 20:34:51.684242 containerd[1461]: time="2025-01-13T20:34:51.684107905Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:34:51.698470 containerd[1461]: time="2025-01-13T20:34:51.698387407Z" level=info msg="TearDown network for sandbox \"1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3\" successfully" Jan 13 20:34:51.698470 containerd[1461]: time="2025-01-13T20:34:51.698437189Z" level=info msg="StopPodSandbox for \"1a412cb16de6a76d3db488aad77de3c871b1924aee1d1b7ac433b6d7d85b2be3\" returns successfully" Jan 13 20:34:51.832837 kubelet[1852]: I0113 20:34:51.831856 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv5mq\" (UniqueName: \"kubernetes.io/projected/0be17581-ef17-4d81-94ed-1be5d323db9d-kube-api-access-tv5mq\") pod \"0be17581-ef17-4d81-94ed-1be5d323db9d\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " Jan 13 20:34:51.832837 kubelet[1852]: I0113 20:34:51.831956 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0be17581-ef17-4d81-94ed-1be5d323db9d-cilium-config-path\") pod \"0be17581-ef17-4d81-94ed-1be5d323db9d\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " Jan 13 20:34:51.832837 kubelet[1852]: I0113 20:34:51.832005 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-host-proc-sys-kernel\") pod 
\"0be17581-ef17-4d81-94ed-1be5d323db9d\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " Jan 13 20:34:51.832837 kubelet[1852]: I0113 20:34:51.832051 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-bpf-maps\") pod \"0be17581-ef17-4d81-94ed-1be5d323db9d\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " Jan 13 20:34:51.832837 kubelet[1852]: I0113 20:34:51.832091 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-cilium-cgroup\") pod \"0be17581-ef17-4d81-94ed-1be5d323db9d\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " Jan 13 20:34:51.832837 kubelet[1852]: I0113 20:34:51.832133 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-host-proc-sys-net\") pod \"0be17581-ef17-4d81-94ed-1be5d323db9d\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " Jan 13 20:34:51.833408 kubelet[1852]: I0113 20:34:51.832172 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-xtables-lock\") pod \"0be17581-ef17-4d81-94ed-1be5d323db9d\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " Jan 13 20:34:51.833408 kubelet[1852]: I0113 20:34:51.832215 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-lib-modules\") pod \"0be17581-ef17-4d81-94ed-1be5d323db9d\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " Jan 13 20:34:51.833408 kubelet[1852]: I0113 20:34:51.832254 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-cni-path\") pod \"0be17581-ef17-4d81-94ed-1be5d323db9d\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " Jan 13 20:34:51.833408 kubelet[1852]: I0113 20:34:51.832297 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0be17581-ef17-4d81-94ed-1be5d323db9d-hubble-tls\") pod \"0be17581-ef17-4d81-94ed-1be5d323db9d\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " Jan 13 20:34:51.833408 kubelet[1852]: I0113 20:34:51.832334 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-cilium-run\") pod \"0be17581-ef17-4d81-94ed-1be5d323db9d\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " Jan 13 20:34:51.833408 kubelet[1852]: I0113 20:34:51.832386 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0be17581-ef17-4d81-94ed-1be5d323db9d-clustermesh-secrets\") pod \"0be17581-ef17-4d81-94ed-1be5d323db9d\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " Jan 13 20:34:51.835420 kubelet[1852]: I0113 20:34:51.832428 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-etc-cni-netd\") pod \"0be17581-ef17-4d81-94ed-1be5d323db9d\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " Jan 13 20:34:51.835420 kubelet[1852]: I0113 20:34:51.832467 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-hostproc\") pod \"0be17581-ef17-4d81-94ed-1be5d323db9d\" (UID: \"0be17581-ef17-4d81-94ed-1be5d323db9d\") " Jan 13 20:34:51.835420 kubelet[1852]: I0113 20:34:51.832606 1852 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-hostproc" (OuterVolumeSpecName: "hostproc") pod "0be17581-ef17-4d81-94ed-1be5d323db9d" (UID: "0be17581-ef17-4d81-94ed-1be5d323db9d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:34:51.838805 kubelet[1852]: I0113 20:34:51.838088 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0be17581-ef17-4d81-94ed-1be5d323db9d" (UID: "0be17581-ef17-4d81-94ed-1be5d323db9d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:34:51.838805 kubelet[1852]: I0113 20:34:51.838239 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0be17581-ef17-4d81-94ed-1be5d323db9d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0be17581-ef17-4d81-94ed-1be5d323db9d" (UID: "0be17581-ef17-4d81-94ed-1be5d323db9d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:34:51.838805 kubelet[1852]: I0113 20:34:51.838235 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0be17581-ef17-4d81-94ed-1be5d323db9d" (UID: "0be17581-ef17-4d81-94ed-1be5d323db9d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:34:51.838805 kubelet[1852]: I0113 20:34:51.838334 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-cni-path" (OuterVolumeSpecName: "cni-path") pod "0be17581-ef17-4d81-94ed-1be5d323db9d" (UID: "0be17581-ef17-4d81-94ed-1be5d323db9d"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:34:51.838805 kubelet[1852]: I0113 20:34:51.838335 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0be17581-ef17-4d81-94ed-1be5d323db9d" (UID: "0be17581-ef17-4d81-94ed-1be5d323db9d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:34:51.839189 kubelet[1852]: I0113 20:34:51.838426 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0be17581-ef17-4d81-94ed-1be5d323db9d" (UID: "0be17581-ef17-4d81-94ed-1be5d323db9d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:34:51.839189 kubelet[1852]: I0113 20:34:51.838504 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0be17581-ef17-4d81-94ed-1be5d323db9d" (UID: "0be17581-ef17-4d81-94ed-1be5d323db9d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:34:51.840290 kubelet[1852]: I0113 20:34:51.839534 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0be17581-ef17-4d81-94ed-1be5d323db9d" (UID: "0be17581-ef17-4d81-94ed-1be5d323db9d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:34:51.840290 kubelet[1852]: I0113 20:34:51.840042 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0be17581-ef17-4d81-94ed-1be5d323db9d" (UID: "0be17581-ef17-4d81-94ed-1be5d323db9d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:34:51.840290 kubelet[1852]: I0113 20:34:51.840124 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0be17581-ef17-4d81-94ed-1be5d323db9d" (UID: "0be17581-ef17-4d81-94ed-1be5d323db9d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:34:51.850208 systemd[1]: var-lib-kubelet-pods-0be17581\x2def17\x2d4d81\x2d94ed\x2d1be5d323db9d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 20:34:51.854325 kubelet[1852]: I0113 20:34:51.853833 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be17581-ef17-4d81-94ed-1be5d323db9d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0be17581-ef17-4d81-94ed-1be5d323db9d" (UID: "0be17581-ef17-4d81-94ed-1be5d323db9d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:34:51.854325 kubelet[1852]: I0113 20:34:51.854031 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be17581-ef17-4d81-94ed-1be5d323db9d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0be17581-ef17-4d81-94ed-1be5d323db9d" (UID: "0be17581-ef17-4d81-94ed-1be5d323db9d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:34:51.854986 kubelet[1852]: I0113 20:34:51.854883 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be17581-ef17-4d81-94ed-1be5d323db9d-kube-api-access-tv5mq" (OuterVolumeSpecName: "kube-api-access-tv5mq") pod "0be17581-ef17-4d81-94ed-1be5d323db9d" (UID: "0be17581-ef17-4d81-94ed-1be5d323db9d"). InnerVolumeSpecName "kube-api-access-tv5mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:34:51.924711 systemd[1]: var-lib-kubelet-pods-0be17581\x2def17\x2d4d81\x2d94ed\x2d1be5d323db9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtv5mq.mount: Deactivated successfully. Jan 13 20:34:51.924959 systemd[1]: var-lib-kubelet-pods-0be17581\x2def17\x2d4d81\x2d94ed\x2d1be5d323db9d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:34:51.933648 kubelet[1852]: I0113 20:34:51.933252 1852 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0be17581-ef17-4d81-94ed-1be5d323db9d-clustermesh-secrets\") on node \"172.24.4.95\" DevicePath \"\"" Jan 13 20:34:51.933648 kubelet[1852]: I0113 20:34:51.933321 1852 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-etc-cni-netd\") on node \"172.24.4.95\" DevicePath \"\"" Jan 13 20:34:51.933648 kubelet[1852]: I0113 20:34:51.933348 1852 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-hostproc\") on node \"172.24.4.95\" DevicePath \"\"" Jan 13 20:34:51.933648 kubelet[1852]: I0113 20:34:51.933371 1852 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tv5mq\" (UniqueName: \"kubernetes.io/projected/0be17581-ef17-4d81-94ed-1be5d323db9d-kube-api-access-tv5mq\") on node \"172.24.4.95\" 
DevicePath \"\"" Jan 13 20:34:51.933648 kubelet[1852]: I0113 20:34:51.933396 1852 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0be17581-ef17-4d81-94ed-1be5d323db9d-cilium-config-path\") on node \"172.24.4.95\" DevicePath \"\"" Jan 13 20:34:51.933648 kubelet[1852]: I0113 20:34:51.933418 1852 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-host-proc-sys-kernel\") on node \"172.24.4.95\" DevicePath \"\"" Jan 13 20:34:51.933648 kubelet[1852]: I0113 20:34:51.933439 1852 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-bpf-maps\") on node \"172.24.4.95\" DevicePath \"\"" Jan 13 20:34:51.933648 kubelet[1852]: I0113 20:34:51.933463 1852 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-cilium-cgroup\") on node \"172.24.4.95\" DevicePath \"\"" Jan 13 20:34:51.934273 kubelet[1852]: I0113 20:34:51.933484 1852 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-host-proc-sys-net\") on node \"172.24.4.95\" DevicePath \"\"" Jan 13 20:34:51.934273 kubelet[1852]: I0113 20:34:51.933505 1852 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-xtables-lock\") on node \"172.24.4.95\" DevicePath \"\"" Jan 13 20:34:51.934273 kubelet[1852]: I0113 20:34:51.933527 1852 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-lib-modules\") on node \"172.24.4.95\" DevicePath \"\"" Jan 13 20:34:51.934273 kubelet[1852]: I0113 20:34:51.933603 1852 
reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-cni-path\") on node \"172.24.4.95\" DevicePath \"\"" Jan 13 20:34:51.934273 kubelet[1852]: I0113 20:34:51.933625 1852 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0be17581-ef17-4d81-94ed-1be5d323db9d-hubble-tls\") on node \"172.24.4.95\" DevicePath \"\"" Jan 13 20:34:51.934273 kubelet[1852]: I0113 20:34:51.933644 1852 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0be17581-ef17-4d81-94ed-1be5d323db9d-cilium-run\") on node \"172.24.4.95\" DevicePath \"\"" Jan 13 20:34:51.976326 kubelet[1852]: E0113 20:34:51.976196 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:34:52.121038 systemd[1]: Removed slice kubepods-burstable-pod0be17581_ef17_4d81_94ed_1be5d323db9d.slice - libcontainer container kubepods-burstable-pod0be17581_ef17_4d81_94ed_1be5d323db9d.slice. Jan 13 20:34:52.122961 systemd[1]: kubepods-burstable-pod0be17581_ef17_4d81_94ed_1be5d323db9d.slice: Consumed 8.582s CPU time. 
Jan 13 20:34:52.377426 kubelet[1852]: I0113 20:34:52.376968 1852 scope.go:117] "RemoveContainer" containerID="2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7" Jan 13 20:34:52.382776 containerd[1461]: time="2025-01-13T20:34:52.381896609Z" level=info msg="RemoveContainer for \"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7\"" Jan 13 20:34:52.390034 containerd[1461]: time="2025-01-13T20:34:52.389898723Z" level=info msg="RemoveContainer for \"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7\" returns successfully" Jan 13 20:34:52.391175 kubelet[1852]: I0113 20:34:52.390418 1852 scope.go:117] "RemoveContainer" containerID="090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb" Jan 13 20:34:52.395210 containerd[1461]: time="2025-01-13T20:34:52.394014411Z" level=info msg="RemoveContainer for \"090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb\"" Jan 13 20:34:52.402470 containerd[1461]: time="2025-01-13T20:34:52.402360518Z" level=info msg="RemoveContainer for \"090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb\" returns successfully" Jan 13 20:34:52.403874 kubelet[1852]: I0113 20:34:52.402966 1852 scope.go:117] "RemoveContainer" containerID="45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b" Jan 13 20:34:52.406211 containerd[1461]: time="2025-01-13T20:34:52.406131350Z" level=info msg="RemoveContainer for \"45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b\"" Jan 13 20:34:52.412403 containerd[1461]: time="2025-01-13T20:34:52.412269804Z" level=info msg="RemoveContainer for \"45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b\" returns successfully" Jan 13 20:34:52.412814 kubelet[1852]: I0113 20:34:52.412673 1852 scope.go:117] "RemoveContainer" containerID="8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb" Jan 13 20:34:52.416051 containerd[1461]: time="2025-01-13T20:34:52.415478255Z" level=info msg="RemoveContainer for 
\"8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb\"" Jan 13 20:34:52.424596 containerd[1461]: time="2025-01-13T20:34:52.424491918Z" level=info msg="RemoveContainer for \"8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb\" returns successfully" Jan 13 20:34:52.425582 kubelet[1852]: I0113 20:34:52.425289 1852 scope.go:117] "RemoveContainer" containerID="46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670" Jan 13 20:34:52.430582 containerd[1461]: time="2025-01-13T20:34:52.430368336Z" level=info msg="RemoveContainer for \"46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670\"" Jan 13 20:34:52.440701 containerd[1461]: time="2025-01-13T20:34:52.439032635Z" level=info msg="RemoveContainer for \"46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670\" returns successfully" Jan 13 20:34:52.441417 kubelet[1852]: I0113 20:34:52.441361 1852 scope.go:117] "RemoveContainer" containerID="2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7" Jan 13 20:34:52.442301 containerd[1461]: time="2025-01-13T20:34:52.442034564Z" level=error msg="ContainerStatus for \"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7\": not found" Jan 13 20:34:52.442623 kubelet[1852]: E0113 20:34:52.442482 1852 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7\": not found" containerID="2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7" Jan 13 20:34:52.442843 kubelet[1852]: I0113 20:34:52.442598 1852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7"} err="failed to get 
container status \"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b651335bb293e91edd8c51bd8a22fa445918c22c31a61adeab90f6850f516b7\": not found" Jan 13 20:34:52.442843 kubelet[1852]: I0113 20:34:52.442772 1852 scope.go:117] "RemoveContainer" containerID="090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb" Jan 13 20:34:52.443347 containerd[1461]: time="2025-01-13T20:34:52.443256173Z" level=error msg="ContainerStatus for \"090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb\": not found" Jan 13 20:34:52.443984 kubelet[1852]: E0113 20:34:52.443802 1852 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb\": not found" containerID="090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb" Jan 13 20:34:52.444172 kubelet[1852]: I0113 20:34:52.444017 1852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb"} err="failed to get container status \"090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb\": rpc error: code = NotFound desc = an error occurred when try to find container \"090853a99d36d2b6ef44cb41ce924f8e432d455102b3d2b8a6cc0199dfc75cdb\": not found" Jan 13 20:34:52.444172 kubelet[1852]: I0113 20:34:52.444119 1852 scope.go:117] "RemoveContainer" containerID="45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b" Jan 13 20:34:52.445233 containerd[1461]: time="2025-01-13T20:34:52.444892589Z" level=error msg="ContainerStatus for 
\"45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b\": not found"
Jan 13 20:34:52.446231 kubelet[1852]: E0113 20:34:52.445607 1852 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b\": not found" containerID="45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b"
Jan 13 20:34:52.446231 kubelet[1852]: I0113 20:34:52.445714 1852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b"} err="failed to get container status \"45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b\": rpc error: code = NotFound desc = an error occurred when try to find container \"45e8afe5f6f42c02acee6ca310c77d0a190997f1b8b8f05ad02b27e3a1e7294b\": not found"
Jan 13 20:34:52.446231 kubelet[1852]: I0113 20:34:52.445794 1852 scope.go:117] "RemoveContainer" containerID="8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb"
Jan 13 20:34:52.446589 containerd[1461]: time="2025-01-13T20:34:52.446421435Z" level=error msg="ContainerStatus for \"8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb\": not found"
Jan 13 20:34:52.447065 kubelet[1852]: E0113 20:34:52.446828 1852 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb\": not found" containerID="8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb"
Jan 13 20:34:52.447065 kubelet[1852]: I0113 20:34:52.446885 1852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb"} err="failed to get container status \"8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"8fb43f63a2fe08bf6cfba77aecc70945be0f61b0d3c15759d431c57528fd94cb\": not found"
Jan 13 20:34:52.447065 kubelet[1852]: I0113 20:34:52.446926 1852 scope.go:117] "RemoveContainer" containerID="46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670"
Jan 13 20:34:52.447803 containerd[1461]: time="2025-01-13T20:34:52.447655620Z" level=error msg="ContainerStatus for \"46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670\": not found"
Jan 13 20:34:52.448031 kubelet[1852]: E0113 20:34:52.447971 1852 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670\": not found" containerID="46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670"
Jan 13 20:34:52.448250 kubelet[1852]: I0113 20:34:52.448032 1852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670"} err="failed to get container status \"46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670\": rpc error: code = NotFound desc = an error occurred when try to find container \"46c0a7f346bb110c4dde8ba5bfae454260e817f4064aa4b67512308625b73670\": not found"
Jan 13 20:34:52.977484 kubelet[1852]: E0113 20:34:52.977401 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:34:53.978837 kubelet[1852]: E0113 20:34:53.978657 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:34:54.113827 kubelet[1852]: I0113 20:34:54.112970 1852 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be17581-ef17-4d81-94ed-1be5d323db9d" path="/var/lib/kubelet/pods/0be17581-ef17-4d81-94ed-1be5d323db9d/volumes"
Jan 13 20:34:54.979859 kubelet[1852]: E0113 20:34:54.979747 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:34:55.624064 kubelet[1852]: E0113 20:34:55.623834 1852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0be17581-ef17-4d81-94ed-1be5d323db9d" containerName="mount-cgroup"
Jan 13 20:34:55.624064 kubelet[1852]: E0113 20:34:55.623899 1852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0be17581-ef17-4d81-94ed-1be5d323db9d" containerName="apply-sysctl-overwrites"
Jan 13 20:34:55.624064 kubelet[1852]: E0113 20:34:55.623961 1852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0be17581-ef17-4d81-94ed-1be5d323db9d" containerName="cilium-agent"
Jan 13 20:34:55.624064 kubelet[1852]: E0113 20:34:55.623980 1852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0be17581-ef17-4d81-94ed-1be5d323db9d" containerName="mount-bpf-fs"
Jan 13 20:34:55.624064 kubelet[1852]: E0113 20:34:55.623994 1852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0be17581-ef17-4d81-94ed-1be5d323db9d" containerName="clean-cilium-state"
Jan 13 20:34:55.624064 kubelet[1852]: I0113 20:34:55.624041 1852 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be17581-ef17-4d81-94ed-1be5d323db9d" containerName="cilium-agent"
Jan 13 20:34:55.642902 systemd[1]: Created slice kubepods-besteffort-podaa976bd1_a3ec_48e9_b154_632815656a48.slice - libcontainer container kubepods-besteffort-podaa976bd1_a3ec_48e9_b154_632815656a48.slice.
Jan 13 20:34:55.660688 systemd[1]: Created slice kubepods-burstable-pod9aa6e1a0_17fa_46be_b75d_7a5e3200943b.slice - libcontainer container kubepods-burstable-pod9aa6e1a0_17fa_46be_b75d_7a5e3200943b.slice.
Jan 13 20:34:55.761332 kubelet[1852]: I0113 20:34:55.761015 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-etc-cni-netd\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.761332 kubelet[1852]: I0113 20:34:55.761118 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-cilium-config-path\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.761332 kubelet[1852]: I0113 20:34:55.761175 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-cilium-ipsec-secrets\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.761332 kubelet[1852]: I0113 20:34:55.761223 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-host-proc-sys-kernel\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.761332 kubelet[1852]: I0113 20:34:55.761288 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-cilium-run\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.761332 kubelet[1852]: I0113 20:34:55.761339 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-xtables-lock\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.762124 kubelet[1852]: I0113 20:34:55.761383 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7smp\" (UniqueName: \"kubernetes.io/projected/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-kube-api-access-p7smp\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.762124 kubelet[1852]: I0113 20:34:55.761427 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-lib-modules\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.762124 kubelet[1852]: I0113 20:34:55.761469 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-bpf-maps\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.762124 kubelet[1852]: I0113 20:34:55.761634 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-hubble-tls\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.762124 kubelet[1852]: I0113 20:34:55.761709 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wclw\" (UniqueName: \"kubernetes.io/projected/aa976bd1-a3ec-48e9-b154-632815656a48-kube-api-access-7wclw\") pod \"cilium-operator-5d85765b45-7vtsr\" (UID: \"aa976bd1-a3ec-48e9-b154-632815656a48\") " pod="kube-system/cilium-operator-5d85765b45-7vtsr"
Jan 13 20:34:55.762439 kubelet[1852]: I0113 20:34:55.761775 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-hostproc\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.762439 kubelet[1852]: I0113 20:34:55.761819 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-cilium-cgroup\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.762439 kubelet[1852]: I0113 20:34:55.761858 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-cni-path\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.762439 kubelet[1852]: I0113 20:34:55.761916 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-clustermesh-secrets\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.762439 kubelet[1852]: I0113 20:34:55.761959 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9aa6e1a0-17fa-46be-b75d-7a5e3200943b-host-proc-sys-net\") pod \"cilium-6mmf7\" (UID: \"9aa6e1a0-17fa-46be-b75d-7a5e3200943b\") " pod="kube-system/cilium-6mmf7"
Jan 13 20:34:55.762807 kubelet[1852]: I0113 20:34:55.762003 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa976bd1-a3ec-48e9-b154-632815656a48-cilium-config-path\") pod \"cilium-operator-5d85765b45-7vtsr\" (UID: \"aa976bd1-a3ec-48e9-b154-632815656a48\") " pod="kube-system/cilium-operator-5d85765b45-7vtsr"
Jan 13 20:34:55.976780 containerd[1461]: time="2025-01-13T20:34:55.976650222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6mmf7,Uid:9aa6e1a0-17fa-46be-b75d-7a5e3200943b,Namespace:kube-system,Attempt:0,}"
Jan 13 20:34:55.979964 kubelet[1852]: E0113 20:34:55.979904 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:34:56.001562 containerd[1461]: time="2025-01-13T20:34:56.001381962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:34:56.001562 containerd[1461]: time="2025-01-13T20:34:56.001435260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:34:56.001562 containerd[1461]: time="2025-01-13T20:34:56.001448877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:34:56.002001 containerd[1461]: time="2025-01-13T20:34:56.001565334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:34:56.022713 systemd[1]: Started cri-containerd-dd86148e9f4e57773e753f0df7d52d21a78fb2fa42cc31c47c6d4761de0ed1cb.scope - libcontainer container dd86148e9f4e57773e753f0df7d52d21a78fb2fa42cc31c47c6d4761de0ed1cb.
Jan 13 20:34:56.046749 containerd[1461]: time="2025-01-13T20:34:56.046598506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6mmf7,Uid:9aa6e1a0-17fa-46be-b75d-7a5e3200943b,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd86148e9f4e57773e753f0df7d52d21a78fb2fa42cc31c47c6d4761de0ed1cb\""
Jan 13 20:34:56.050006 containerd[1461]: time="2025-01-13T20:34:56.049962038Z" level=info msg="CreateContainer within sandbox \"dd86148e9f4e57773e753f0df7d52d21a78fb2fa42cc31c47c6d4761de0ed1cb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:34:56.067595 containerd[1461]: time="2025-01-13T20:34:56.067528424Z" level=info msg="CreateContainer within sandbox \"dd86148e9f4e57773e753f0df7d52d21a78fb2fa42cc31c47c6d4761de0ed1cb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e27d02ffd89616d47e34eef897785f6bb682c60faf23884ae8a08e2d4ac9ec26\""
Jan 13 20:34:56.068237 containerd[1461]: time="2025-01-13T20:34:56.068191819Z" level=info msg="StartContainer for \"e27d02ffd89616d47e34eef897785f6bb682c60faf23884ae8a08e2d4ac9ec26\""
Jan 13 20:34:56.097692 systemd[1]: Started cri-containerd-e27d02ffd89616d47e34eef897785f6bb682c60faf23884ae8a08e2d4ac9ec26.scope - libcontainer container e27d02ffd89616d47e34eef897785f6bb682c60faf23884ae8a08e2d4ac9ec26.
Jan 13 20:34:56.134587 containerd[1461]: time="2025-01-13T20:34:56.134427433Z" level=info msg="StartContainer for \"e27d02ffd89616d47e34eef897785f6bb682c60faf23884ae8a08e2d4ac9ec26\" returns successfully"
Jan 13 20:34:56.138746 systemd[1]: cri-containerd-e27d02ffd89616d47e34eef897785f6bb682c60faf23884ae8a08e2d4ac9ec26.scope: Deactivated successfully.
Jan 13 20:34:56.168603 kubelet[1852]: E0113 20:34:56.168463 1852 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:34:56.188922 containerd[1461]: time="2025-01-13T20:34:56.188700655Z" level=info msg="shim disconnected" id=e27d02ffd89616d47e34eef897785f6bb682c60faf23884ae8a08e2d4ac9ec26 namespace=k8s.io
Jan 13 20:34:56.188922 containerd[1461]: time="2025-01-13T20:34:56.188891452Z" level=warning msg="cleaning up after shim disconnected" id=e27d02ffd89616d47e34eef897785f6bb682c60faf23884ae8a08e2d4ac9ec26 namespace=k8s.io
Jan 13 20:34:56.188922 containerd[1461]: time="2025-01-13T20:34:56.188923206Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:34:56.254089 containerd[1461]: time="2025-01-13T20:34:56.253262374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7vtsr,Uid:aa976bd1-a3ec-48e9-b154-632815656a48,Namespace:kube-system,Attempt:0,}"
Jan 13 20:34:56.306867 containerd[1461]: time="2025-01-13T20:34:56.305655093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:34:56.306867 containerd[1461]: time="2025-01-13T20:34:56.305809275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:34:56.306867 containerd[1461]: time="2025-01-13T20:34:56.305867904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:34:56.306867 containerd[1461]: time="2025-01-13T20:34:56.306055464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:34:56.338902 systemd[1]: Started cri-containerd-d4545cb2cc0cc3ea415fd9c28c54c8c24e2e76e34f81a089618f24cab9b0ee03.scope - libcontainer container d4545cb2cc0cc3ea415fd9c28c54c8c24e2e76e34f81a089618f24cab9b0ee03.
Jan 13 20:34:56.382611 containerd[1461]: time="2025-01-13T20:34:56.382565255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7vtsr,Uid:aa976bd1-a3ec-48e9-b154-632815656a48,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4545cb2cc0cc3ea415fd9c28c54c8c24e2e76e34f81a089618f24cab9b0ee03\""
Jan 13 20:34:56.384338 containerd[1461]: time="2025-01-13T20:34:56.384308941Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 13 20:34:56.396441 containerd[1461]: time="2025-01-13T20:34:56.395666356Z" level=info msg="CreateContainer within sandbox \"dd86148e9f4e57773e753f0df7d52d21a78fb2fa42cc31c47c6d4761de0ed1cb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:34:56.414621 containerd[1461]: time="2025-01-13T20:34:56.414510333Z" level=info msg="CreateContainer within sandbox \"dd86148e9f4e57773e753f0df7d52d21a78fb2fa42cc31c47c6d4761de0ed1cb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"39985d090f4ec58bd287068e6226b7d0ca3543f06889d74ed0f873d2028f3c2b\""
Jan 13 20:34:56.415180 containerd[1461]: time="2025-01-13T20:34:56.415141032Z" level=info msg="StartContainer for \"39985d090f4ec58bd287068e6226b7d0ca3543f06889d74ed0f873d2028f3c2b\""
Jan 13 20:34:56.446684 systemd[1]: Started cri-containerd-39985d090f4ec58bd287068e6226b7d0ca3543f06889d74ed0f873d2028f3c2b.scope - libcontainer container 39985d090f4ec58bd287068e6226b7d0ca3543f06889d74ed0f873d2028f3c2b.
Jan 13 20:34:56.478963 containerd[1461]: time="2025-01-13T20:34:56.478921717Z" level=info msg="StartContainer for \"39985d090f4ec58bd287068e6226b7d0ca3543f06889d74ed0f873d2028f3c2b\" returns successfully"
Jan 13 20:34:56.484377 systemd[1]: cri-containerd-39985d090f4ec58bd287068e6226b7d0ca3543f06889d74ed0f873d2028f3c2b.scope: Deactivated successfully.
Jan 13 20:34:56.514641 containerd[1461]: time="2025-01-13T20:34:56.514278986Z" level=info msg="shim disconnected" id=39985d090f4ec58bd287068e6226b7d0ca3543f06889d74ed0f873d2028f3c2b namespace=k8s.io
Jan 13 20:34:56.514641 containerd[1461]: time="2025-01-13T20:34:56.514389290Z" level=warning msg="cleaning up after shim disconnected" id=39985d090f4ec58bd287068e6226b7d0ca3543f06889d74ed0f873d2028f3c2b namespace=k8s.io
Jan 13 20:34:56.514641 containerd[1461]: time="2025-01-13T20:34:56.514400231Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:34:56.528694 containerd[1461]: time="2025-01-13T20:34:56.528448583Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:34:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:34:56.982760 kubelet[1852]: E0113 20:34:56.982670 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:34:57.403589 containerd[1461]: time="2025-01-13T20:34:57.403487595Z" level=info msg="CreateContainer within sandbox \"dd86148e9f4e57773e753f0df7d52d21a78fb2fa42cc31c47c6d4761de0ed1cb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:34:57.459065 containerd[1461]: time="2025-01-13T20:34:57.458992346Z" level=info msg="CreateContainer within sandbox \"dd86148e9f4e57773e753f0df7d52d21a78fb2fa42cc31c47c6d4761de0ed1cb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0b7f70fccff79756ac3a2e2a5fb32c0a0598db26b73b34d509936cb0c0fd479f\""
Jan 13 20:34:57.460180 containerd[1461]: time="2025-01-13T20:34:57.460131752Z" level=info msg="StartContainer for \"0b7f70fccff79756ac3a2e2a5fb32c0a0598db26b73b34d509936cb0c0fd479f\""
Jan 13 20:34:57.543727 systemd[1]: Started cri-containerd-0b7f70fccff79756ac3a2e2a5fb32c0a0598db26b73b34d509936cb0c0fd479f.scope - libcontainer container 0b7f70fccff79756ac3a2e2a5fb32c0a0598db26b73b34d509936cb0c0fd479f.
Jan 13 20:34:57.582889 systemd[1]: cri-containerd-0b7f70fccff79756ac3a2e2a5fb32c0a0598db26b73b34d509936cb0c0fd479f.scope: Deactivated successfully.
Jan 13 20:34:57.583139 containerd[1461]: time="2025-01-13T20:34:57.583109102Z" level=info msg="StartContainer for \"0b7f70fccff79756ac3a2e2a5fb32c0a0598db26b73b34d509936cb0c0fd479f\" returns successfully"
Jan 13 20:34:57.619262 containerd[1461]: time="2025-01-13T20:34:57.619075356Z" level=info msg="shim disconnected" id=0b7f70fccff79756ac3a2e2a5fb32c0a0598db26b73b34d509936cb0c0fd479f namespace=k8s.io
Jan 13 20:34:57.619473 containerd[1461]: time="2025-01-13T20:34:57.619455386Z" level=warning msg="cleaning up after shim disconnected" id=0b7f70fccff79756ac3a2e2a5fb32c0a0598db26b73b34d509936cb0c0fd479f namespace=k8s.io
Jan 13 20:34:57.619704 containerd[1461]: time="2025-01-13T20:34:57.619687696Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:34:57.882997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b7f70fccff79756ac3a2e2a5fb32c0a0598db26b73b34d509936cb0c0fd479f-rootfs.mount: Deactivated successfully.
Jan 13 20:34:57.986754 kubelet[1852]: E0113 20:34:57.986605 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:34:58.162371 kubelet[1852]: I0113 20:34:58.162071 1852 setters.go:600] "Node became not ready" node="172.24.4.95" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:34:58Z","lastTransitionTime":"2025-01-13T20:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:34:58.431891 containerd[1461]: time="2025-01-13T20:34:58.430612863Z" level=info msg="CreateContainer within sandbox \"dd86148e9f4e57773e753f0df7d52d21a78fb2fa42cc31c47c6d4761de0ed1cb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:34:58.483571 containerd[1461]: time="2025-01-13T20:34:58.482036418Z" level=info msg="CreateContainer within sandbox \"dd86148e9f4e57773e753f0df7d52d21a78fb2fa42cc31c47c6d4761de0ed1cb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"856f434896a26fa0b7b2c4c5a535429f7e0fdef46ede3a91ec48ae6dcd1c001c\""
Jan 13 20:34:58.487409 containerd[1461]: time="2025-01-13T20:34:58.485686355Z" level=info msg="StartContainer for \"856f434896a26fa0b7b2c4c5a535429f7e0fdef46ede3a91ec48ae6dcd1c001c\""
Jan 13 20:34:58.501976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286554146.mount: Deactivated successfully.
Jan 13 20:34:58.532951 systemd[1]: Started cri-containerd-856f434896a26fa0b7b2c4c5a535429f7e0fdef46ede3a91ec48ae6dcd1c001c.scope - libcontainer container 856f434896a26fa0b7b2c4c5a535429f7e0fdef46ede3a91ec48ae6dcd1c001c.
Jan 13 20:34:58.567920 systemd[1]: cri-containerd-856f434896a26fa0b7b2c4c5a535429f7e0fdef46ede3a91ec48ae6dcd1c001c.scope: Deactivated successfully.
Jan 13 20:34:58.572404 containerd[1461]: time="2025-01-13T20:34:58.572334984Z" level=info msg="StartContainer for \"856f434896a26fa0b7b2c4c5a535429f7e0fdef46ede3a91ec48ae6dcd1c001c\" returns successfully"
Jan 13 20:34:58.613979 containerd[1461]: time="2025-01-13T20:34:58.613913751Z" level=info msg="shim disconnected" id=856f434896a26fa0b7b2c4c5a535429f7e0fdef46ede3a91ec48ae6dcd1c001c namespace=k8s.io
Jan 13 20:34:58.613979 containerd[1461]: time="2025-01-13T20:34:58.613969834Z" level=warning msg="cleaning up after shim disconnected" id=856f434896a26fa0b7b2c4c5a535429f7e0fdef46ede3a91ec48ae6dcd1c001c namespace=k8s.io
Jan 13 20:34:58.613979 containerd[1461]: time="2025-01-13T20:34:58.613980035Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:34:58.631155 containerd[1461]: time="2025-01-13T20:34:58.631115348Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:34:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:34:58.989801 kubelet[1852]: E0113 20:34:58.989681 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:34:59.095014 containerd[1461]: time="2025-01-13T20:34:59.094925782Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:59.096588 containerd[1461]: time="2025-01-13T20:34:59.096146716Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907217"
Jan 13 20:34:59.098045 containerd[1461]: time="2025-01-13T20:34:59.097991849Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:59.100500 containerd[1461]: time="2025-01-13T20:34:59.100407494Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.716052339s"
Jan 13 20:34:59.100593 containerd[1461]: time="2025-01-13T20:34:59.100511534Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 13 20:34:59.105466 containerd[1461]: time="2025-01-13T20:34:59.104411013Z" level=info msg="CreateContainer within sandbox \"d4545cb2cc0cc3ea415fd9c28c54c8c24e2e76e34f81a089618f24cab9b0ee03\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 20:34:59.127963 containerd[1461]: time="2025-01-13T20:34:59.127818720Z" level=info msg="CreateContainer within sandbox \"d4545cb2cc0cc3ea415fd9c28c54c8c24e2e76e34f81a089618f24cab9b0ee03\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0eed0aa5a0a89b2166203b3925d4bda0af3eed4fb9a7388b101fce906dd9d6ac\""
Jan 13 20:34:59.128929 containerd[1461]: time="2025-01-13T20:34:59.128823748Z" level=info msg="StartContainer for \"0eed0aa5a0a89b2166203b3925d4bda0af3eed4fb9a7388b101fce906dd9d6ac\""
Jan 13 20:34:59.175697 systemd[1]: Started cri-containerd-0eed0aa5a0a89b2166203b3925d4bda0af3eed4fb9a7388b101fce906dd9d6ac.scope - libcontainer container 0eed0aa5a0a89b2166203b3925d4bda0af3eed4fb9a7388b101fce906dd9d6ac.
Jan 13 20:34:59.203302 containerd[1461]: time="2025-01-13T20:34:59.203227894Z" level=info msg="StartContainer for \"0eed0aa5a0a89b2166203b3925d4bda0af3eed4fb9a7388b101fce906dd9d6ac\" returns successfully"
Jan 13 20:34:59.432780 containerd[1461]: time="2025-01-13T20:34:59.432504773Z" level=info msg="CreateContainer within sandbox \"dd86148e9f4e57773e753f0df7d52d21a78fb2fa42cc31c47c6d4761de0ed1cb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:34:59.474573 containerd[1461]: time="2025-01-13T20:34:59.470025529Z" level=info msg="CreateContainer within sandbox \"dd86148e9f4e57773e753f0df7d52d21a78fb2fa42cc31c47c6d4761de0ed1cb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e2e832f15d7c59a96f2f7e5158df47950230f349982ef87702981e6958a3b601\""
Jan 13 20:34:59.474573 containerd[1461]: time="2025-01-13T20:34:59.470753127Z" level=info msg="StartContainer for \"e2e832f15d7c59a96f2f7e5158df47950230f349982ef87702981e6958a3b601\""
Jan 13 20:34:59.503861 kubelet[1852]: I0113 20:34:59.503503 1852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-7vtsr" podStartSLOduration=1.785827466 podStartE2EDuration="4.503483286s" podCreationTimestamp="2025-01-13 20:34:55 +0000 UTC" firstStartedPulling="2025-01-13 20:34:56.383832545 +0000 UTC m=+71.566403140" lastFinishedPulling="2025-01-13 20:34:59.101488325 +0000 UTC m=+74.284058960" observedRunningTime="2025-01-13 20:34:59.456044686 +0000 UTC m=+74.638615311" watchObservedRunningTime="2025-01-13 20:34:59.503483286 +0000 UTC m=+74.686053891"
Jan 13 20:34:59.508706 systemd[1]: Started cri-containerd-e2e832f15d7c59a96f2f7e5158df47950230f349982ef87702981e6958a3b601.scope - libcontainer container e2e832f15d7c59a96f2f7e5158df47950230f349982ef87702981e6958a3b601.
Jan 13 20:34:59.542325 containerd[1461]: time="2025-01-13T20:34:59.542221509Z" level=info msg="StartContainer for \"e2e832f15d7c59a96f2f7e5158df47950230f349982ef87702981e6958a3b601\" returns successfully"
Jan 13 20:34:59.884249 systemd[1]: run-containerd-runc-k8s.io-0eed0aa5a0a89b2166203b3925d4bda0af3eed4fb9a7388b101fce906dd9d6ac-runc.IiS2BW.mount: Deactivated successfully.
Jan 13 20:34:59.926609 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:34:59.988585 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jan 13 20:34:59.989893 kubelet[1852]: E0113 20:34:59.989845 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:00.476909 kubelet[1852]: I0113 20:35:00.476616 1852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6mmf7" podStartSLOduration=5.476571274 podStartE2EDuration="5.476571274s" podCreationTimestamp="2025-01-13 20:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:35:00.476260529 +0000 UTC m=+75.658831194" watchObservedRunningTime="2025-01-13 20:35:00.476571274 +0000 UTC m=+75.659141939"
Jan 13 20:35:00.990110 kubelet[1852]: E0113 20:35:00.990020 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:01.990838 kubelet[1852]: E0113 20:35:01.990784 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:02.991602 kubelet[1852]: E0113 20:35:02.991473 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:03.686151 systemd-networkd[1380]: lxc_health: Link UP
Jan 13 20:35:03.695001 systemd-networkd[1380]: lxc_health: Gained carrier
Jan 13 20:35:03.992532 kubelet[1852]: E0113 20:35:03.992400 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:04.274752 systemd[1]: run-containerd-runc-k8s.io-e2e832f15d7c59a96f2f7e5158df47950230f349982ef87702981e6958a3b601-runc.X2Qwzb.mount: Deactivated successfully.
Jan 13 20:35:04.993820 kubelet[1852]: E0113 20:35:04.993746 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:05.235999 systemd-networkd[1380]: lxc_health: Gained IPv6LL
Jan 13 20:35:05.915813 kubelet[1852]: E0113 20:35:05.915680 1852 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:05.994834 kubelet[1852]: E0113 20:35:05.994730 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:06.465634 systemd[1]: run-containerd-runc-k8s.io-e2e832f15d7c59a96f2f7e5158df47950230f349982ef87702981e6958a3b601-runc.jZv68r.mount: Deactivated successfully.
Jan 13 20:35:06.995032 kubelet[1852]: E0113 20:35:06.994949 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:07.995907 kubelet[1852]: E0113 20:35:07.995815 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:08.640348 systemd[1]: run-containerd-runc-k8s.io-e2e832f15d7c59a96f2f7e5158df47950230f349982ef87702981e6958a3b601-runc.6anygJ.mount: Deactivated successfully.
Jan 13 20:35:08.997158 kubelet[1852]: E0113 20:35:08.996936 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:09.997648 kubelet[1852]: E0113 20:35:09.997512 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:10.998254 kubelet[1852]: E0113 20:35:10.998143 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:11.998373 kubelet[1852]: E0113 20:35:11.998286 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:12.998585 kubelet[1852]: E0113 20:35:12.998468 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:13.999507 kubelet[1852]: E0113 20:35:13.999412 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:15.000476 kubelet[1852]: E0113 20:35:15.000360 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:16.000863 kubelet[1852]: E0113 20:35:16.000778 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:17.002088 kubelet[1852]: E0113 20:35:17.002006 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:18.003329 kubelet[1852]: E0113 20:35:18.003241 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:19.003821 kubelet[1852]: E0113 20:35:19.003727 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:20.004028 kubelet[1852]: E0113 20:35:20.003942 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:21.004808 kubelet[1852]: E0113 20:35:21.004691 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:22.005369 kubelet[1852]: E0113 20:35:22.005241 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:23.005951 kubelet[1852]: E0113 20:35:23.005876 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:24.006447 kubelet[1852]: E0113 20:35:24.006371 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:25.007409 kubelet[1852]: E0113 20:35:25.007366 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:25.914894 kubelet[1852]: E0113 20:35:25.914808 1852 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:25.961915 kubelet[1852]: E0113 20:35:25.961440 1852 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 172.24.4.206:49650->172.24.4.231:2379: read: connection timed out"
Jan 13 20:35:26.008095 kubelet[1852]: E0113 20:35:26.007984 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:27.008317 kubelet[1852]: E0113 20:35:27.008227 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:28.009334 kubelet[1852]: E0113 20:35:28.009233 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:35:28.598124 kubelet[1852]: E0113 20:35:28.597860 1852 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:35:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:35:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:35:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:35:18Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":71035896},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\\\",\\\"registry.k8s.io/kube-proxy:v1.31.4\\\"],\\\"sizeBytes\\\":30229262},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286}]}}\" for node \"172.24.4.95\": Patch
\"https://172.24.4.206:6443/api/v1/nodes/172.24.4.95/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 13 20:35:28.839389 kubelet[1852]: E0113 20:35:28.839304 1852 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.24.4.95\": rpc error: code = Unavailable desc = error reading from server: read tcp 172.24.4.206:49552->172.24.4.231:2379: read: connection timed out" Jan 13 20:35:29.009799 kubelet[1852]: E0113 20:35:29.009577 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:35:30.010404 kubelet[1852]: E0113 20:35:30.010322 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:35:31.010676 kubelet[1852]: E0113 20:35:31.010574 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:35:32.011594 kubelet[1852]: E0113 20:35:32.011473 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"