Jan 13 21:06:27.963821 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 21:06:27.963845 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 21:06:27.963855 kernel: BIOS-provided physical RAM map:
Jan 13 21:06:27.963863 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:06:27.963870 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:06:27.963879 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:06:27.963888 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 13 21:06:27.963895 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 13 21:06:27.963902 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 21:06:27.963910 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:06:27.963917 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 13 21:06:27.963924 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 21:06:27.963931 kernel: NX (Execute Disable) protection: active
Jan 13 21:06:27.963941 kernel: APIC: Static calls initialized
Jan 13 21:06:27.963950 kernel: SMBIOS 3.0.0 present.
Jan 13 21:06:27.963958 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 13 21:06:27.963965 kernel: Hypervisor detected: KVM
Jan 13 21:06:27.963973 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:06:27.963981 kernel: kvm-clock: using sched offset of 3596840909 cycles
Jan 13 21:06:27.964015 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:06:27.964023 kernel: tsc: Detected 1996.249 MHz processor
Jan 13 21:06:27.964048 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:06:27.964056 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:06:27.964064 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 13 21:06:27.964072 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:06:27.964080 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:06:27.964088 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 13 21:06:27.964095 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:06:27.964105 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 13 21:06:27.964113 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:06:27.964121 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:06:27.964129 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:06:27.964137 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 13 21:06:27.964145 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:06:27.964152 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:06:27.964160 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 13 21:06:27.964170 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 13 21:06:27.964178 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 13 21:06:27.964185 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 13 21:06:27.964193 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 13 21:06:27.964204 kernel: No NUMA configuration found
Jan 13 21:06:27.964212 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 13 21:06:27.964220 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jan 13 21:06:27.964230 kernel: Zone ranges:
Jan 13 21:06:27.964239 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:06:27.964247 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 21:06:27.964255 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 21:06:27.964263 kernel: Movable zone start for each node
Jan 13 21:06:27.964271 kernel: Early memory node ranges
Jan 13 21:06:27.964279 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:06:27.964287 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 13 21:06:27.964298 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 21:06:27.964306 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 13 21:06:27.964314 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:06:27.964322 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:06:27.964330 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 13 21:06:27.964339 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:06:27.964347 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:06:27.964355 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:06:27.964363 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:06:27.964373 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:06:27.964381 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:06:27.964389 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:06:27.964397 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:06:27.964405 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:06:27.964413 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 21:06:27.964421 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:06:27.964430 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 13 21:06:27.964438 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:06:27.964448 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:06:27.964456 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 21:06:27.964464 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 21:06:27.964473 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 21:06:27.964481 kernel: pcpu-alloc: [0] 0 1
Jan 13 21:06:27.964489 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 13 21:06:27.964498 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 21:06:27.964507 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:06:27.964517 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:06:27.964526 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:06:27.964534 kernel: Fallback order for Node 0: 0
Jan 13 21:06:27.964542 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 13 21:06:27.964550 kernel: Policy zone: Normal
Jan 13 21:06:27.964559 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:06:27.964567 kernel: software IO TLB: area num 2.
Jan 13 21:06:27.964575 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 227308K reserved, 0K cma-reserved)
Jan 13 21:06:27.964584 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:06:27.964594 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 21:06:27.964602 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:06:27.964610 kernel: Dynamic Preempt: voluntary
Jan 13 21:06:27.964618 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:06:27.964627 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:06:27.964636 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:06:27.964645 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:06:27.964653 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:06:27.964661 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:06:27.964672 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:06:27.964680 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:06:27.964688 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 21:06:27.964696 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:06:27.964704 kernel: Console: colour VGA+ 80x25
Jan 13 21:06:27.964713 kernel: printk: console [tty0] enabled
Jan 13 21:06:27.964721 kernel: printk: console [ttyS0] enabled
Jan 13 21:06:27.964729 kernel: ACPI: Core revision 20230628
Jan 13 21:06:27.964737 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:06:27.964745 kernel: x2apic enabled
Jan 13 21:06:27.964755 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:06:27.964764 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:06:27.964772 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:06:27.964781 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 13 21:06:27.964789 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 13 21:06:27.964820 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 13 21:06:27.964835 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:06:27.964848 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:06:27.964865 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:06:27.964885 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:06:27.964901 kernel: Speculative Store Bypass: Vulnerable
Jan 13 21:06:27.964920 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 13 21:06:27.964938 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:06:27.964964 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:06:27.965015 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:06:27.965046 kernel: landlock: Up and running.
Jan 13 21:06:27.965054 kernel: SELinux: Initializing.
Jan 13 21:06:27.965063 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:06:27.965072 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:06:27.965081 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 13 21:06:27.965093 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:06:27.965102 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:06:27.965111 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:06:27.965119 kernel: Performance Events: AMD PMU driver.
Jan 13 21:06:27.965128 kernel: ... version: 0
Jan 13 21:06:27.965139 kernel: ... bit width: 48
Jan 13 21:06:27.965148 kernel: ... generic registers: 4
Jan 13 21:06:27.965156 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:06:27.965165 kernel: ... max period: 00007fffffffffff
Jan 13 21:06:27.965174 kernel: ... fixed-purpose events: 0
Jan 13 21:06:27.965182 kernel: ... event mask: 000000000000000f
Jan 13 21:06:27.965191 kernel: signal: max sigframe size: 1440
Jan 13 21:06:27.965200 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:06:27.965209 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:06:27.965219 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:06:27.965227 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:06:27.965236 kernel: .... node #0, CPUs: #1
Jan 13 21:06:27.965245 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:06:27.965253 kernel: smpboot: Max logical packages: 2
Jan 13 21:06:27.965262 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 13 21:06:27.965271 kernel: devtmpfs: initialized
Jan 13 21:06:27.965280 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:06:27.965289 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:06:27.965299 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:06:27.965308 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:06:27.965316 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:06:27.965325 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:06:27.965334 kernel: audit: type=2000 audit(1736802386.744:1): state=initialized audit_enabled=0 res=1
Jan 13 21:06:27.965343 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:06:27.965351 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:06:27.965360 kernel: cpuidle: using governor menu
Jan 13 21:06:27.965369 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:06:27.965379 kernel: dca service started, version 1.12.1
Jan 13 21:06:27.965387 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:06:27.965396 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:06:27.965405 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:06:27.965414 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:06:27.965422 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:06:27.965431 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:06:27.965440 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:06:27.965448 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:06:27.965459 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:06:27.965468 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:06:27.965476 kernel: ACPI: Interpreter enabled
Jan 13 21:06:27.965485 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:06:27.965494 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:06:27.965503 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:06:27.965513 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:06:27.965522 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 13 21:06:27.965530 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:06:27.965688 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:06:27.965799 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 21:06:27.965891 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 21:06:27.965904 kernel: acpiphp: Slot [3] registered
Jan 13 21:06:27.965913 kernel: acpiphp: Slot [4] registered
Jan 13 21:06:27.965922 kernel: acpiphp: Slot [5] registered
Jan 13 21:06:27.965930 kernel: acpiphp: Slot [6] registered
Jan 13 21:06:27.965939 kernel: acpiphp: Slot [7] registered
Jan 13 21:06:27.965951 kernel: acpiphp: Slot [8] registered
Jan 13 21:06:27.965959 kernel: acpiphp: Slot [9] registered
Jan 13 21:06:27.965968 kernel: acpiphp: Slot [10] registered
Jan 13 21:06:27.965976 kernel: acpiphp: Slot [11] registered
Jan 13 21:06:27.966002 kernel: acpiphp: Slot [12] registered
Jan 13 21:06:27.966011 kernel: acpiphp: Slot [13] registered
Jan 13 21:06:27.966020 kernel: acpiphp: Slot [14] registered
Jan 13 21:06:27.966028 kernel: acpiphp: Slot [15] registered
Jan 13 21:06:27.966037 kernel: acpiphp: Slot [16] registered
Jan 13 21:06:27.966048 kernel: acpiphp: Slot [17] registered
Jan 13 21:06:27.966056 kernel: acpiphp: Slot [18] registered
Jan 13 21:06:27.966065 kernel: acpiphp: Slot [19] registered
Jan 13 21:06:27.966073 kernel: acpiphp: Slot [20] registered
Jan 13 21:06:27.966081 kernel: acpiphp: Slot [21] registered
Jan 13 21:06:27.966090 kernel: acpiphp: Slot [22] registered
Jan 13 21:06:27.966099 kernel: acpiphp: Slot [23] registered
Jan 13 21:06:27.966107 kernel: acpiphp: Slot [24] registered
Jan 13 21:06:27.966116 kernel: acpiphp: Slot [25] registered
Jan 13 21:06:27.966124 kernel: acpiphp: Slot [26] registered
Jan 13 21:06:27.966135 kernel: acpiphp: Slot [27] registered
Jan 13 21:06:27.966143 kernel: acpiphp: Slot [28] registered
Jan 13 21:06:27.966152 kernel: acpiphp: Slot [29] registered
Jan 13 21:06:27.966161 kernel: acpiphp: Slot [30] registered
Jan 13 21:06:27.966170 kernel: acpiphp: Slot [31] registered
Jan 13 21:06:27.966178 kernel: PCI host bridge to bus 0000:00
Jan 13 21:06:27.966281 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:06:27.966366 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:06:27.966452 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:06:27.966533 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:06:27.966613 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 13 21:06:27.966695 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:06:27.966801 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 21:06:27.966902 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 21:06:27.967028 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 13 21:06:27.967146 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 13 21:06:27.967240 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 13 21:06:27.967332 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 13 21:06:27.967426 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 13 21:06:27.967519 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 13 21:06:27.967625 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 13 21:06:27.967767 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 13 21:06:27.967863 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 13 21:06:27.967965 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 13 21:06:27.968083 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 13 21:06:27.968177 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 13 21:06:27.968269 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 13 21:06:27.968361 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 13 21:06:27.968460 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:06:27.968559 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:06:27.968653 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 13 21:06:27.968747 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 13 21:06:27.968881 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 13 21:06:27.969029 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 13 21:06:27.969146 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:06:27.969256 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 21:06:27.969354 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 13 21:06:27.969451 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 13 21:06:27.969556 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 13 21:06:27.969655 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 13 21:06:27.969759 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 13 21:06:27.969864 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:06:27.969969 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 13 21:06:27.970170 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 13 21:06:27.970268 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 13 21:06:27.970281 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:06:27.970291 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:06:27.970300 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:06:27.970308 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:06:27.970321 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 21:06:27.970330 kernel: iommu: Default domain type: Translated
Jan 13 21:06:27.970338 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:06:27.970347 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:06:27.970356 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:06:27.970365 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:06:27.970373 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 13 21:06:27.970465 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 13 21:06:27.970557 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 13 21:06:27.970653 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:06:27.970667 kernel: vgaarb: loaded
Jan 13 21:06:27.970676 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:06:27.970685 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:06:27.970694 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:06:27.970702 kernel: pnp: PnP ACPI init
Jan 13 21:06:27.970793 kernel: pnp 00:03: [dma 2]
Jan 13 21:06:27.970807 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 21:06:27.970816 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:06:27.970829 kernel: NET: Registered PF_INET protocol family
Jan 13 21:06:27.970837 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:06:27.970846 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:06:27.970855 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:06:27.970864 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:06:27.970872 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:06:27.970882 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:06:27.970891 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:06:27.970902 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:06:27.970911 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:06:27.970920 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:06:27.971044 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:06:27.971131 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:06:27.971212 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:06:27.971292 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:06:27.971371 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 13 21:06:27.971463 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 13 21:06:27.971561 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 21:06:27.971574 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:06:27.971583 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 21:06:27.971593 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 13 21:06:27.971601 kernel: Initialise system trusted keyrings
Jan 13 21:06:27.971611 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:06:27.971619 kernel: Key type asymmetric registered
Jan 13 21:06:27.971628 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:06:27.971640 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:06:27.971649 kernel: io scheduler mq-deadline registered
Jan 13 21:06:27.971658 kernel: io scheduler kyber registered
Jan 13 21:06:27.971667 kernel: io scheduler bfq registered
Jan 13 21:06:27.971675 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:06:27.971685 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 13 21:06:27.971694 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 13 21:06:27.971703 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 21:06:27.971712 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 13 21:06:27.971722 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:06:27.971731 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:06:27.971740 kernel: random: crng init done
Jan 13 21:06:27.971749 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:06:27.971757 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:06:27.971766 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:06:27.971860 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:06:27.971874 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:06:27.971962 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:06:27.972097 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:06:27 UTC (1736802387)
Jan 13 21:06:27.972181 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 13 21:06:27.972193 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:06:27.972202 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:06:27.972211 kernel: Segment Routing with IPv6
Jan 13 21:06:27.972220 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:06:27.972229 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:06:27.972237 kernel: Key type dns_resolver registered
Jan 13 21:06:27.972250 kernel: IPI shorthand broadcast: enabled
Jan 13 21:06:27.972259 kernel: sched_clock: Marking stable (1032007916, 170640800)->(1244968198, -42319482)
Jan 13 21:06:27.972268 kernel: registered taskstats version 1
Jan 13 21:06:27.972277 kernel: Loading compiled-in X.509 certificates
Jan 13 21:06:27.972286 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 21:06:27.972294 kernel: Key type .fscrypt registered
Jan 13 21:06:27.972303 kernel: Key type fscrypt-provisioning registered
Jan 13 21:06:27.972312 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:06:27.972323 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:06:27.972332 kernel: ima: No architecture policies found
Jan 13 21:06:27.972340 kernel: clk: Disabling unused clocks
Jan 13 21:06:27.972349 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 13 21:06:27.972358 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:06:27.972367 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 13 21:06:27.972376 kernel: Run /init as init process
Jan 13 21:06:27.972384 kernel: with arguments:
Jan 13 21:06:27.972393 kernel: /init
Jan 13 21:06:27.972401 kernel: with environment:
Jan 13 21:06:27.972412 kernel: HOME=/
Jan 13 21:06:27.972420 kernel: TERM=linux
Jan 13 21:06:27.972429 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:06:27.972440 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:06:27.972452 systemd[1]: Detected virtualization kvm.
Jan 13 21:06:27.972461 systemd[1]: Detected architecture x86-64.
Jan 13 21:06:27.972471 systemd[1]: Running in initrd.
Jan 13 21:06:27.972482 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:06:27.972491 systemd[1]: Hostname set to .
Jan 13 21:06:27.972501 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:06:27.972511 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:06:27.972520 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:06:27.972530 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:06:27.972540 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:06:27.972560 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:06:27.972572 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:06:27.972582 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:06:27.972595 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:06:27.972606 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:06:27.972619 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:06:27.972630 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:06:27.972640 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:06:27.972650 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:06:27.972661 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:06:27.972671 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:06:27.972682 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:06:27.972692 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:06:27.972703 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:06:27.972716 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:06:27.972726 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:06:27.972737 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:06:27.972747 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:06:27.972758 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:06:27.972768 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:06:27.972779 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:06:27.972789 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:06:27.972817 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:06:27.972838 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:06:27.972856 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:06:27.972896 systemd-journald[185]: Collecting audit messages is disabled.
Jan 13 21:06:27.972939 systemd-journald[185]: Journal started
Jan 13 21:06:27.972979 systemd-journald[185]: Runtime Journal (/run/log/journal/b2d56632b0294690b7924ace2ed6b536) is 8.0M, max 78.3M, 70.3M free.
Jan 13 21:06:27.976032 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:06:27.995710 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:06:27.996373 systemd-modules-load[186]: Inserted module 'overlay'
Jan 13 21:06:27.997773 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:06:27.999844 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:06:28.003503 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:06:28.019184 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:06:28.061219 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:06:28.061244 kernel: Bridge firewalling registered
Jan 13 21:06:28.039173 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jan 13 21:06:28.069142 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:06:28.070008 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:06:28.070717 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:06:28.072623 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:06:28.080186 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:06:28.085445 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:06:28.089367 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:06:28.098167 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:06:28.104199 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:06:28.109341 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:06:28.110719 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:06:28.113417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:06:28.123712 dracut-cmdline[215]: dracut-dracut-053
Jan 13 21:06:28.124168 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:06:28.128528 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 21:06:28.165066 systemd-resolved[225]: Positive Trust Anchors:
Jan 13 21:06:28.165081 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:06:28.165125 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:06:28.172654 systemd-resolved[225]: Defaulting to hostname 'linux'.
Jan 13 21:06:28.174500 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:06:28.175978 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:06:28.217046 kernel: SCSI subsystem initialized
Jan 13 21:06:28.229080 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:06:28.241493 kernel: iscsi: registered transport (tcp)
Jan 13 21:06:28.265723 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:06:28.265792 kernel: QLogic iSCSI HBA Driver
Jan 13 21:06:28.327114 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:06:28.338381 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:06:28.392377 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:06:28.392459 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:06:28.395383 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:06:28.456290 kernel: raid6: sse2x4 gen() 5034 MB/s
Jan 13 21:06:28.475104 kernel: raid6: sse2x2 gen() 5882 MB/s
Jan 13 21:06:28.493386 kernel: raid6: sse2x1 gen() 8664 MB/s
Jan 13 21:06:28.493455 kernel: raid6: using algorithm sse2x1 gen() 8664 MB/s
Jan 13 21:06:28.512604 kernel: raid6: .... xor() 7292 MB/s, rmw enabled
Jan 13 21:06:28.512673 kernel: raid6: using ssse3x2 recovery algorithm
Jan 13 21:06:28.535718 kernel: xor: measuring software checksum speed
Jan 13 21:06:28.535781 kernel: prefetch64-sse : 16355 MB/sec
Jan 13 21:06:28.536274 kernel: generic_sse : 15720 MB/sec
Jan 13 21:06:28.538485 kernel: xor: using function: prefetch64-sse (16355 MB/sec)
Jan 13 21:06:28.730050 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:06:28.748468 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:06:28.756134 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:06:28.801402 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Jan 13 21:06:28.812081 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:06:28.823283 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:06:28.852840 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Jan 13 21:06:28.899476 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:06:28.908243 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:06:28.954117 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:06:28.967169 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:06:29.012250 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:06:29.015791 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:06:29.018082 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:06:29.018603 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:06:29.028149 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:06:29.040505 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 13 21:06:29.089188 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 13 21:06:29.089324 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:06:29.089347 kernel: GPT:17805311 != 20971519
Jan 13 21:06:29.089360 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:06:29.089371 kernel: GPT:17805311 != 20971519
Jan 13 21:06:29.089382 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:06:29.089393 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:06:29.040008 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:06:29.068284 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:06:29.068424 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:06:29.071133 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:06:29.071645 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:06:29.071775 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:06:29.072325 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:06:29.086970 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:06:29.104056 kernel: libata version 3.00 loaded.
Jan 13 21:06:29.111270 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 13 21:06:29.113594 kernel: scsi host0: ata_piix
Jan 13 21:06:29.113738 kernel: scsi host1: ata_piix
Jan 13 21:06:29.113864 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 13 21:06:29.113889 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 13 21:06:29.136012 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (459)
Jan 13 21:06:29.141012 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (465)
Jan 13 21:06:29.150174 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:06:29.186277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:06:29.192751 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:06:29.198751 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:06:29.203696 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:06:29.204381 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:06:29.215131 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:06:29.219847 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:06:29.229148 disk-uuid[503]: Primary Header is updated.
Jan 13 21:06:29.229148 disk-uuid[503]: Secondary Entries is updated.
Jan 13 21:06:29.229148 disk-uuid[503]: Secondary Header is updated.
Jan 13 21:06:29.241816 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:06:29.263116 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:06:30.258367 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:06:30.258487 disk-uuid[505]: The operation has completed successfully.
Jan 13 21:06:30.345026 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:06:30.345147 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:06:30.369203 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:06:30.372385 sh[524]: Success
Jan 13 21:06:30.393018 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 13 21:06:30.460135 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:06:30.461601 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:06:30.463614 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:06:30.516649 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 13 21:06:30.516758 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:06:30.521670 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:06:30.526682 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:06:30.530355 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:06:30.551088 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:06:30.553597 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:06:30.560334 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:06:30.571387 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:06:30.597453 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:06:30.597573 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:06:30.601439 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:06:30.611571 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:06:30.664090 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:06:30.664235 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:06:30.740699 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:06:30.748161 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:06:30.753424 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:06:30.764203 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:06:30.792649 systemd-networkd[706]: lo: Link UP
Jan 13 21:06:30.792662 systemd-networkd[706]: lo: Gained carrier
Jan 13 21:06:30.793947 systemd-networkd[706]: Enumeration completed
Jan 13 21:06:30.794589 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:06:30.795304 systemd[1]: Reached target network.target - Network.
Jan 13 21:06:30.795424 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:06:30.795427 systemd-networkd[706]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:06:30.797729 systemd-networkd[706]: eth0: Link UP
Jan 13 21:06:30.797733 systemd-networkd[706]: eth0: Gained carrier
Jan 13 21:06:30.797745 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:06:30.808037 systemd-networkd[706]: eth0: DHCPv4 address 172.24.4.134/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 21:06:31.218908 ignition[701]: Ignition 2.20.0
Jan 13 21:06:31.218923 ignition[701]: Stage: fetch-offline
Jan 13 21:06:31.223042 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:06:31.218969 ignition[701]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:06:31.218981 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:06:31.219116 ignition[701]: parsed url from cmdline: ""
Jan 13 21:06:31.219121 ignition[701]: no config URL provided
Jan 13 21:06:31.219128 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:06:31.219138 ignition[701]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:06:31.231427 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:06:31.219144 ignition[701]: failed to fetch config: resource requires networking
Jan 13 21:06:31.220399 ignition[701]: Ignition finished successfully
Jan 13 21:06:31.257142 ignition[719]: Ignition 2.20.0
Jan 13 21:06:31.257173 ignition[719]: Stage: fetch
Jan 13 21:06:31.257549 ignition[719]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:06:31.257575 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:06:31.257759 ignition[719]: parsed url from cmdline: ""
Jan 13 21:06:31.257768 ignition[719]: no config URL provided
Jan 13 21:06:31.257781 ignition[719]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:06:31.257799 ignition[719]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:06:31.257965 ignition[719]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 13 21:06:31.258546 ignition[719]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 13 21:06:31.258585 ignition[719]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 13 21:06:31.498962 ignition[719]: GET result: OK
Jan 13 21:06:31.499208 ignition[719]: parsing config with SHA512: 49d121c126102cd90920f17f0773f6de0e5fe5dec4836bf04cd7d6e7e29bd6e65d5ccf1ef7e5420528470242e48d41db41fc17c356952a564c1fa2658d6f58eb
Jan 13 21:06:31.513567 unknown[719]: fetched base config from "system"
Jan 13 21:06:31.513612 unknown[719]: fetched base config from "system"
Jan 13 21:06:31.516500 ignition[719]: fetch: fetch complete
Jan 13 21:06:31.513634 unknown[719]: fetched user config from "openstack"
Jan 13 21:06:31.516516 ignition[719]: fetch: fetch passed
Jan 13 21:06:31.522519 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:06:31.516650 ignition[719]: Ignition finished successfully
Jan 13 21:06:31.532436 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:06:31.566531 ignition[725]: Ignition 2.20.0
Jan 13 21:06:31.567089 ignition[725]: Stage: kargs
Jan 13 21:06:31.567498 ignition[725]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:06:31.567524 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:06:31.575041 ignition[725]: kargs: kargs passed
Jan 13 21:06:31.575151 ignition[725]: Ignition finished successfully
Jan 13 21:06:31.578120 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:06:31.595341 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:06:31.624852 ignition[731]: Ignition 2.20.0
Jan 13 21:06:31.624881 ignition[731]: Stage: disks
Jan 13 21:06:31.625401 ignition[731]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:06:31.625428 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:06:31.630419 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:06:31.628079 ignition[731]: disks: disks passed
Jan 13 21:06:31.634179 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:06:31.628192 ignition[731]: Ignition finished successfully
Jan 13 21:06:31.636174 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:06:31.638762 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:06:31.641799 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:06:31.644351 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:06:31.654260 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:06:31.694466 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 21:06:31.706424 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:06:31.715289 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:06:31.863078 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 13 21:06:31.863744 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:06:31.864408 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:06:31.872205 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:06:31.875629 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:06:31.879394 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:06:31.884379 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 13 21:06:31.893822 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (747)
Jan 13 21:06:31.893876 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:06:31.893907 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:06:31.893948 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:06:31.891219 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:06:31.891251 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:06:31.899202 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:06:31.907046 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:06:31.908181 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:06:31.919373 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:06:32.055238 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:06:32.062894 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:06:32.071347 initrd-setup-root[790]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:06:32.082515 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:06:32.208297 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:06:32.218161 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:06:32.223666 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:06:32.233064 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:06:32.233074 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:06:32.262932 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:06:32.265171 ignition[864]: INFO : Ignition 2.20.0
Jan 13 21:06:32.265171 ignition[864]: INFO : Stage: mount
Jan 13 21:06:32.266495 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:06:32.266495 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:06:32.266495 ignition[864]: INFO : mount: mount passed
Jan 13 21:06:32.269140 ignition[864]: INFO : Ignition finished successfully
Jan 13 21:06:32.269323 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:06:32.408518 systemd-networkd[706]: eth0: Gained IPv6LL
Jan 13 21:06:39.163648 coreos-metadata[749]: Jan 13 21:06:39.163 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 21:06:39.207176 coreos-metadata[749]: Jan 13 21:06:39.207 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 21:06:39.222713 coreos-metadata[749]: Jan 13 21:06:39.222 INFO Fetch successful
Jan 13 21:06:39.222713 coreos-metadata[749]: Jan 13 21:06:39.222 INFO wrote hostname ci-4152-2-0-e-56a5643f90.novalocal to /sysroot/etc/hostname
Jan 13 21:06:39.226506 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 13 21:06:39.226712 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 13 21:06:39.238245 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:06:39.271435 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:06:39.289062 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (882)
Jan 13 21:06:39.296708 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:06:39.296824 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:06:39.303303 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:06:39.312123 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:06:39.316964 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:06:39.355102 ignition[900]: INFO : Ignition 2.20.0
Jan 13 21:06:39.355102 ignition[900]: INFO : Stage: files
Jan 13 21:06:39.357767 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:06:39.357767 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:06:39.357767 ignition[900]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:06:39.362438 ignition[900]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:06:39.362438 ignition[900]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:06:39.366523 ignition[900]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:06:39.366523 ignition[900]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:06:39.366523 ignition[900]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:06:39.364669 unknown[900]: wrote ssh authorized keys file for user: core
Jan 13 21:06:39.370775 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 13 21:06:39.370775 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 13 21:06:39.370775 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:06:39.370775 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 21:06:39.421794 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 21:06:39.738338 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:06:39.738338 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:06:39.738338 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 13 21:06:40.292467 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jan 13 21:06:40.754700 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:06:40.754700 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:06:40.759861 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:06:40.759861 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:06:40.759861 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:06:40.759861 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:06:40.759861 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:06:40.759861 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:06:40.759861 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:06:40.759861 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:06:40.759861 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:06:40.759861 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:06:40.759861 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:06:40.759861 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:06:40.759861 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 13 21:06:41.256367 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jan 13 21:06:42.895865 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:06:42.895865 ignition[900]: INFO : files: op(d): [started] processing unit "containerd.service"
Jan 13 21:06:42.901040 ignition[900]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 13 21:06:42.901040 ignition[900]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 13 21:06:42.901040 ignition[900]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jan 13 21:06:42.901040 ignition[900]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jan 13 21:06:42.901040 ignition[900]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:06:42.901040 ignition[900]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:06:42.901040 ignition[900]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jan 13 21:06:42.901040 ignition[900]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:06:42.901040 ignition[900]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:06:42.901040 ignition[900]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:06:42.901040 ignition[900]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:06:42.901040 ignition[900]: INFO : files: files passed
Jan 13 21:06:42.901040 ignition[900]: INFO : Ignition finished successfully
Jan 13 21:06:42.899366 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:06:42.912629 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:06:42.916144 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:06:42.918656 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:06:42.918752 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:06:42.945186 initrd-setup-root-after-ignition[929]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:06:42.945186 initrd-setup-root-after-ignition[929]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:06:42.948289 initrd-setup-root-after-ignition[933]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:06:42.950755 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:06:42.954212 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:06:42.970334 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:06:43.017897 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:06:43.018186 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:06:43.021357 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:06:43.022769 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:06:43.025421 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:06:43.031348 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:06:43.052667 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:06:43.061286 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:06:43.079682 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:06:43.080440 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:06:43.082828 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:06:43.085847 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:06:43.086205 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:06:43.089510 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:06:43.091561 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:06:43.094505 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:06:43.097208 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:06:43.099641 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:06:43.102647 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:06:43.105678 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:06:43.108975 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:06:43.111860 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:06:43.114912 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:06:43.117773 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:06:43.118089 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:06:43.121326 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:06:43.123347 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:06:43.126063 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:06:43.126341 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:06:43.129344 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:06:43.129720 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:06:43.133648 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:06:43.134111 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:06:43.135938 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:06:43.136377 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:06:43.148242 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:06:43.162894 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:06:43.166161 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:06:43.166474 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:06:43.169306 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:06:43.169598 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:06:43.177387 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:06:43.178029 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:06:43.197021 ignition[953]: INFO : Ignition 2.20.0
Jan 13 21:06:43.197021 ignition[953]: INFO : Stage: umount
Jan 13 21:06:43.197021 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:06:43.197021 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:06:43.201186 ignition[953]: INFO : umount: umount passed
Jan 13 21:06:43.201186 ignition[953]: INFO : Ignition finished successfully
Jan 13 21:06:43.200570 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:06:43.200673 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:06:43.203545 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:06:43.203608 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:06:43.204981 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:06:43.205042 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:06:43.205521 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 21:06:43.205559 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 21:06:43.207202 systemd[1]: Stopped target network.target - Network.
Jan 13 21:06:43.208882 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:06:43.208965 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:06:43.210144 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:06:43.211121 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:06:43.213319 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:06:43.214182 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:06:43.215144 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:06:43.215636 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:06:43.215674 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:06:43.216757 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:06:43.216807 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:06:43.218045 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:06:43.218092 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:06:43.219063 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:06:43.219106 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:06:43.220212 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:06:43.221252 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:06:43.223699 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:06:43.225939 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:06:43.226079 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:06:43.226322 systemd-networkd[706]: eth0: DHCPv6 lease lost
Jan 13 21:06:43.228307 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:06:43.228413 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:06:43.231279 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:06:43.231391 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:06:43.236793 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:06:43.236844 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:06:43.238164 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:06:43.238212 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:06:43.244134 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:06:43.244711 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:06:43.244787 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:06:43.248337 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:06:43.248388 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:06:43.249686 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:06:43.249732 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:06:43.250833 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:06:43.250876 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:06:43.252196 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:06:43.261497 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:06:43.261646 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:06:43.263475 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:06:43.263521 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:06:43.264795 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:06:43.264827 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:06:43.266013 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:06:43.266075 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:06:43.267722 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:06:43.267763 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:06:43.268955 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:06:43.269043 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:06:43.278166 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:06:43.278945 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:06:43.279024 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:06:43.279570 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 21:06:43.279611 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:06:43.280150 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:06:43.280189 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:06:43.280813 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:06:43.280853 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:06:43.281643 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:06:43.281740 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:06:43.287486 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:06:43.287596 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:06:43.288542 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:06:43.295188 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:06:43.304522 systemd[1]: Switching root.
Jan 13 21:06:43.353981 systemd-journald[185]: Journal stopped
Jan 13 21:06:45.248260 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:06:45.248350 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:06:45.248374 kernel: SELinux: policy capability open_perms=1
Jan 13 21:06:45.248385 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:06:45.248401 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:06:45.248412 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:06:45.248423 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:06:45.248434 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:06:45.248445 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:06:45.248456 kernel: audit: type=1403 audit(1736802404.199:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:06:45.248473 systemd[1]: Successfully loaded SELinux policy in 83.298ms.
Jan 13 21:06:45.248493 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.077ms.
Jan 13 21:06:45.248507 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:06:45.248519 systemd[1]: Detected virtualization kvm.
Jan 13 21:06:45.248534 systemd[1]: Detected architecture x86-64.
Jan 13 21:06:45.248546 systemd[1]: Detected first boot.
Jan 13 21:06:45.248558 systemd[1]: Hostname set to <ci-4152-2-0-e-56a5643f90.novalocal>.
Jan 13 21:06:45.248573 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:06:45.248585 zram_generator::config[1015]: No configuration found.
Jan 13 21:06:45.248597 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:06:45.248609 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:06:45.248620 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 21:06:45.248633 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:06:45.248645 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:06:45.248656 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:06:45.248670 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:06:45.248682 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:06:45.248694 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:06:45.248706 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:06:45.248717 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:06:45.248729 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:06:45.248741 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:06:45.248753 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:06:45.248786 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:06:45.248802 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:06:45.248815 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:06:45.248827 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:06:45.248839 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:06:45.248851 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:06:45.248863 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:06:45.248875 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:06:45.248889 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:06:45.248901 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:06:45.248913 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:06:45.248925 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:06:45.248937 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:06:45.248949 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:06:45.248961 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:06:45.248972 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:06:45.248997 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:06:45.249013 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:06:45.249026 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:06:45.249038 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:06:45.249049 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:06:45.249061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:06:45.249072 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:06:45.249086 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:06:45.249097 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:06:45.249112 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:06:45.249124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:06:45.249136 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:06:45.249148 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:06:45.249160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:06:45.249171 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:06:45.249183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:06:45.249194 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:06:45.249206 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:06:45.252728 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:06:45.252771 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 13 21:06:45.252786 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 13 21:06:45.252798 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:06:45.252810 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:06:45.252822 kernel: fuse: init (API version 7.39)
Jan 13 21:06:45.252835 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:06:45.252847 kernel: loop: module loaded
Jan 13 21:06:45.252858 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:06:45.252875 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:06:45.252887 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:06:45.252899 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:06:45.252944 systemd-journald[1119]: Collecting audit messages is disabled.
Jan 13 21:06:45.252973 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:06:45.253001 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:06:45.253013 kernel: ACPI: bus type drm_connector registered
Jan 13 21:06:45.253028 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:06:45.253040 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:06:45.253053 systemd-journald[1119]: Journal started
Jan 13 21:06:45.253078 systemd-journald[1119]: Runtime Journal (/run/log/journal/b2d56632b0294690b7924ace2ed6b536) is 8.0M, max 78.3M, 70.3M free.
Jan 13 21:06:45.258081 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:06:45.259693 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:06:45.260530 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:06:45.262348 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:06:45.262526 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:06:45.263347 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:06:45.263507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:06:45.265335 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:06:45.265508 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:06:45.266298 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:06:45.266454 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:06:45.267365 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:06:45.267531 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:06:45.269458 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:06:45.269665 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:06:45.270481 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:06:45.273376 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:06:45.274255 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:06:45.290631 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:06:45.298088 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:06:45.306084 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:06:45.306734 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:06:45.319206 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:06:45.324225 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:06:45.325137 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:06:45.326522 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:06:45.330120 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:06:45.339225 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:06:45.349912 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:06:45.357362 systemd-journald[1119]: Time spent on flushing to /var/log/journal/b2d56632b0294690b7924ace2ed6b536 is 35.007ms for 934 entries.
Jan 13 21:06:45.357362 systemd-journald[1119]: System Journal (/var/log/journal/b2d56632b0294690b7924ace2ed6b536) is 8.0M, max 584.8M, 576.8M free.
Jan 13 21:06:45.416591 systemd-journald[1119]: Received client request to flush runtime journal.
Jan 13 21:06:45.360073 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:06:45.361953 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:06:45.362713 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:06:45.365538 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:06:45.371162 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:06:45.375639 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:06:45.387149 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:06:45.397402 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:06:45.401924 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 21:06:45.419468 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:06:45.423074 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Jan 13 21:06:45.423095 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Jan 13 21:06:45.433614 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:06:45.440242 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:06:45.681704 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:06:45.692215 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:06:45.709264 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Jan 13 21:06:45.709289 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Jan 13 21:06:45.713975 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:06:46.404308 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:06:46.419421 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:06:46.442844 systemd-udevd[1199]: Using default interface naming scheme 'v255'.
Jan 13 21:06:46.498283 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:06:46.515749 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:06:46.539421 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:06:46.580193 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 13 21:06:46.628024 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 21:06:46.635855 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 13 21:06:46.642085 kernel: ACPI: button: Power Button [PWRF]
Jan 13 21:06:46.658235 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:06:46.661025 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 21:06:46.694338 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1215)
Jan 13 21:06:46.784032 systemd-networkd[1209]: lo: Link UP
Jan 13 21:06:46.784042 systemd-networkd[1209]: lo: Gained carrier
Jan 13 21:06:46.786317 systemd-networkd[1209]: Enumeration completed
Jan 13 21:06:46.786467 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:06:46.790817 systemd-networkd[1209]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:06:46.790828 systemd-networkd[1209]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:06:46.793872 systemd-networkd[1209]: eth0: Link UP
Jan 13 21:06:46.793884 systemd-networkd[1209]: eth0: Gained carrier
Jan 13 21:06:46.793908 systemd-networkd[1209]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:06:46.794276 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:06:46.811023 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 21:06:46.813755 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:06:46.814446 systemd-networkd[1209]: eth0: DHCPv4 address 172.24.4.134/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 21:06:46.831073 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 13 21:06:46.831162 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 13 21:06:46.832926 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:06:46.833129 kernel: Console: switching to colour dummy device 80x25
Jan 13 21:06:46.836028 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 13 21:06:46.836089 kernel: [drm] features: -context_init
Jan 13 21:06:46.838079 kernel: [drm] number of scanouts: 1
Jan 13 21:06:46.838115 kernel: [drm] number of cap sets: 0
Jan 13 21:06:46.839023 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 13 21:06:46.843035 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 13 21:06:46.850188 kernel: Console: switching to colour frame buffer device 160x50
Jan 13 21:06:46.853024 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 13 21:06:46.862411 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:06:46.862654 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:06:46.867381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:06:46.878489 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:06:46.881190 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:06:46.908703 lvm[1242]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:06:46.939418 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:06:46.940575 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:06:46.946192 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:06:46.954928 lvm[1248]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:06:46.967467 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:06:46.974288 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:06:46.975039 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:06:46.975173 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:06:46.975209 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:06:46.975299 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:06:46.977211 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:06:46.984328 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:06:46.988195 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:06:46.991067 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:06:46.993174 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:06:46.998176 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:06:47.003151 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:06:47.005445 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:06:47.031758 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:06:47.033012 kernel: loop0: detected capacity change from 0 to 211296
Jan 13 21:06:47.060357 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:06:47.064537 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:06:47.102128 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:06:47.130029 kernel: loop1: detected capacity change from 0 to 140992
Jan 13 21:06:47.199368 kernel: loop2: detected capacity change from 0 to 8
Jan 13 21:06:47.225327 kernel: loop3: detected capacity change from 0 to 138184
Jan 13 21:06:47.327105 kernel: loop4: detected capacity change from 0 to 211296
Jan 13 21:06:47.375897 kernel: loop5: detected capacity change from 0 to 140992
Jan 13 21:06:47.441521 kernel: loop6: detected capacity change from 0 to 8
Jan 13 21:06:47.447040 kernel: loop7: detected capacity change from 0 to 138184
Jan 13 21:06:47.479076 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 13 21:06:47.479560 (sd-merge)[1272]: Merged extensions into '/usr'.
Jan 13 21:06:47.484735 systemd[1]: Reloading requested from client PID 1259 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:06:47.484773 systemd[1]: Reloading...
Jan 13 21:06:47.556036 zram_generator::config[1296]: No configuration found.
Jan 13 21:06:47.795032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:06:47.869967 systemd[1]: Reloading finished in 384 ms.
Jan 13 21:06:47.888592 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:06:47.903163 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:06:47.909147 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:06:47.926320 systemd[1]: Reloading requested from client PID 1361 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:06:47.926342 systemd[1]: Reloading...
Jan 13 21:06:47.952482 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:06:47.952895 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:06:47.953884 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:06:47.954956 systemd-tmpfiles[1362]: ACLs are not supported, ignoring.
Jan 13 21:06:47.955145 systemd-tmpfiles[1362]: ACLs are not supported, ignoring.
Jan 13 21:06:47.959927 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:06:47.960199 systemd-tmpfiles[1362]: Skipping /boot
Jan 13 21:06:47.970574 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:06:47.970747 systemd-tmpfiles[1362]: Skipping /boot
Jan 13 21:06:48.024814 zram_generator::config[1396]: No configuration found.
Jan 13 21:06:48.034710 ldconfig[1256]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:06:48.190674 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:06:48.216079 systemd-networkd[1209]: eth0: Gained IPv6LL
Jan 13 21:06:48.259956 systemd[1]: Reloading finished in 333 ms.
Jan 13 21:06:48.274331 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 21:06:48.277872 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:06:48.290498 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:06:48.306184 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 21:06:48.322130 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:06:48.329219 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:06:48.343144 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:06:48.352118 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:06:48.368403 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:06:48.368810 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:06:48.371830 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:06:48.385787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:06:48.399594 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:06:48.401419 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:06:48.401557 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:06:48.413114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:06:48.413287 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:06:48.418190 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:06:48.418386 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:06:48.420825 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:06:48.430944 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:06:48.431426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:06:48.441409 augenrules[1493]: No rules
Jan 13 21:06:48.442738 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 21:06:48.443207 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 21:06:48.453262 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:06:48.453783 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:06:48.459160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:06:48.474880 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:06:48.494309 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:06:48.495386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:06:48.500839 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:06:48.504945 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:06:48.508237 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:06:48.511608 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:06:48.511772 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:06:48.513522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:06:48.513871 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:06:48.516022 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:06:48.518298 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:06:48.531287 systemd-resolved[1469]: Positive Trust Anchors:
Jan 13 21:06:48.531303 systemd-resolved[1469]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:06:48.531344 systemd-resolved[1469]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:06:48.534264 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:06:48.542748 systemd-resolved[1469]: Using system hostname 'ci-4152-2-0-e-56a5643f90.novalocal'.
Jan 13 21:06:48.545208 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:06:48.550504 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:06:48.558137 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:06:48.562037 systemd[1]: Reached target network.target - Network.
Jan 13 21:06:48.563827 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 21:06:48.564596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:06:48.565472 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:06:48.572210 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 21:06:48.574181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:06:48.576115 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:06:48.591125 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:06:48.595407 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:06:48.601665 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:06:48.604805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:06:48.612024 augenrules[1523]: /sbin/augenrules: No change
Jan 13 21:06:48.615596 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:06:48.620938 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:06:48.620979 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:06:48.621632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:06:48.621837 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:06:48.628183 augenrules[1548]: No rules
Jan 13 21:06:48.633856 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 21:06:48.634157 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 21:06:48.635002 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:06:48.635185 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:06:48.635834 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:06:48.635977 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:06:48.636669 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:06:48.636845 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:06:48.650670 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:06:48.650751 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:06:48.708724 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 21:06:48.710035 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:06:48.711363 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:06:48.713084 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:06:48.714320 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:06:48.716461 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:06:48.716596 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:06:48.718886 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:06:48.722176 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:06:48.725599 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:06:48.728858 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:06:48.732181 systemd-timesyncd[1538]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org).
Jan 13 21:06:48.732221 systemd-timesyncd[1538]: Initial clock synchronization to Mon 2025-01-13 21:06:48.938149 UTC.
Jan 13 21:06:48.733680 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:06:48.739659 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:06:48.753265 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:06:48.758372 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:06:48.764417 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:06:48.766446 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:06:48.769175 systemd[1]: System is tainted: cgroupsv1
Jan 13 21:06:48.769306 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:06:48.769405 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:06:48.787235 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:06:48.795644 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 21:06:48.811375 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:06:48.818876 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:06:48.835320 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:06:48.838051 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:06:48.849287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:06:48.856676 jq[1570]: false
Jan 13 21:06:48.859770 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:06:48.868196 dbus-daemon[1569]: [system] SELinux support is enabled
Jan 13 21:06:48.873378 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 21:06:48.882308 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 21:06:48.893043 extend-filesystems[1571]: Found loop4
Jan 13 21:06:48.893043 extend-filesystems[1571]: Found loop5
Jan 13 21:06:48.893043 extend-filesystems[1571]: Found loop6
Jan 13 21:06:48.893043 extend-filesystems[1571]: Found loop7
Jan 13 21:06:48.893043 extend-filesystems[1571]: Found vda
Jan 13 21:06:48.893043 extend-filesystems[1571]: Found vda1
Jan 13 21:06:48.893043 extend-filesystems[1571]: Found vda2
Jan 13 21:06:48.893043 extend-filesystems[1571]: Found vda3
Jan 13 21:06:48.893043 extend-filesystems[1571]: Found usr
Jan 13 21:06:48.893043 extend-filesystems[1571]: Found vda4
Jan 13 21:06:48.893043 extend-filesystems[1571]: Found vda6
Jan 13 21:06:48.893043 extend-filesystems[1571]: Found vda7
Jan 13 21:06:48.893043 extend-filesystems[1571]: Found vda9
Jan 13 21:06:48.895144 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:06:48.970782 extend-filesystems[1571]: Checking size of /dev/vda9
Jan 13 21:06:48.970782 extend-filesystems[1571]: Resized partition /dev/vda9
Jan 13 21:06:48.909669 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:06:48.928154 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:06:48.946803 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:06:48.953179 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:06:48.980626 extend-filesystems[1603]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:06:48.994072 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jan 13 21:06:48.984148 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:06:48.985369 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:06:49.003926 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:06:49.004213 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:06:49.016438 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jan 13 21:06:49.109544 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1211)
Jan 13 21:06:49.109683 update_engine[1601]: I20250113 21:06:49.066291 1601 main.cc:92] Flatcar Update Engine starting
Jan 13 21:06:49.109683 update_engine[1601]: I20250113 21:06:49.073073 1601 update_check_scheduler.cc:74] Next update check in 6m28s
Jan 13 21:06:49.110116 jq[1604]: true
Jan 13 21:06:49.110259 extend-filesystems[1603]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 21:06:49.110259 extend-filesystems[1603]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 21:06:49.110259 extend-filesystems[1603]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jan 13 21:06:49.017089 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:06:49.145575 extend-filesystems[1571]: Resized filesystem in /dev/vda9
Jan 13 21:06:49.017387 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:06:49.146899 jq[1612]: true
Jan 13 21:06:49.021611 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 21:06:49.028586 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:06:49.028859 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:06:49.069368 (ntainerd)[1611]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:06:49.096248 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:06:49.179522 tar[1610]: linux-amd64/helm
Jan 13 21:06:49.115700 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:06:49.115767 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:06:49.117497 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:06:49.117518 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:06:49.118887 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:06:49.124972 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:06:49.125881 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 21:06:49.126158 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 21:06:49.206473 systemd-logind[1590]: New seat seat0.
Jan 13 21:06:49.210081 systemd-logind[1590]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 21:06:49.210103 systemd-logind[1590]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 21:06:49.210332 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:06:49.237729 bash[1645]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:06:49.241952 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 21:06:49.257323 systemd[1]: Starting sshkeys.service...
Jan 13 21:06:49.298689 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:06:49.311688 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:06:49.451773 locksmithd[1629]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:06:49.645780 sshd_keygen[1596]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:06:49.684854 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:06:49.701617 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:06:49.712159 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:06:49.712392 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:06:49.728399 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:06:49.747373 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:06:49.760436 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:06:49.777431 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:06:49.779143 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:06:49.869126 containerd[1611]: time="2025-01-13T21:06:49.868851087Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 21:06:49.917272 containerd[1611]: time="2025-01-13T21:06:49.917150810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:06:49.921207 containerd[1611]: time="2025-01-13T21:06:49.921168048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:06:49.921207 containerd[1611]: time="2025-01-13T21:06:49.921205527Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:06:49.921296 containerd[1611]: time="2025-01-13T21:06:49.921230099Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:06:49.922080 containerd[1611]: time="2025-01-13T21:06:49.921425801Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:06:49.922080 containerd[1611]: time="2025-01-13T21:06:49.921453929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:06:49.922080 containerd[1611]: time="2025-01-13T21:06:49.921528754Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:06:49.922080 containerd[1611]: time="2025-01-13T21:06:49.921546800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:06:49.922080 containerd[1611]: time="2025-01-13T21:06:49.921780619Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:06:49.922080 containerd[1611]: time="2025-01-13T21:06:49.921799087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:06:49.922080 containerd[1611]: time="2025-01-13T21:06:49.921815272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:06:49.922080 containerd[1611]: time="2025-01-13T21:06:49.921827358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:06:49.922080 containerd[1611]: time="2025-01-13T21:06:49.921914156Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:06:49.922312 containerd[1611]: time="2025-01-13T21:06:49.922180594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:06:49.922336 containerd[1611]: time="2025-01-13T21:06:49.922319167Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:06:49.922364 containerd[1611]: time="2025-01-13T21:06:49.922335569Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:06:49.922449 containerd[1611]: time="2025-01-13T21:06:49.922426210Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:06:49.922507 containerd[1611]: time="2025-01-13T21:06:49.922485015Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:06:49.933709 containerd[1611]: time="2025-01-13T21:06:49.933651542Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:06:49.933774 containerd[1611]: time="2025-01-13T21:06:49.933720201Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:06:49.933774 containerd[1611]: time="2025-01-13T21:06:49.933742646Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:06:49.933774 containerd[1611]: time="2025-01-13T21:06:49.933762450Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:06:49.933854 containerd[1611]: time="2025-01-13T21:06:49.933784123Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:06:49.934040 containerd[1611]: time="2025-01-13T21:06:49.933968932Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:06:49.935035 containerd[1611]: time="2025-01-13T21:06:49.934358866Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:06:49.935035 containerd[1611]: time="2025-01-13T21:06:49.934472559Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 13 21:06:49.935035 containerd[1611]: time="2025-01-13T21:06:49.934493369Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:06:49.935035 containerd[1611]: time="2025-01-13T21:06:49.934510439Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:06:49.935035 containerd[1611]: time="2025-01-13T21:06:49.934526861Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:06:49.935035 containerd[1611]: time="2025-01-13T21:06:49.934549707Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:06:49.935035 containerd[1611]: time="2025-01-13T21:06:49.934565503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:06:49.935035 containerd[1611]: time="2025-01-13T21:06:49.934581688Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:06:49.935035 containerd[1611]: time="2025-01-13T21:06:49.934599107Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:06:49.935035 containerd[1611]: time="2025-01-13T21:06:49.934614358Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:06:49.935035 containerd[1611]: time="2025-01-13T21:06:49.934630924Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:06:49.935035 containerd[1611]: time="2025-01-13T21:06:49.934647090Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:06:49.935035 containerd[1611]: time="2025-01-13T21:06:49.934670049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935035 containerd[1611]: time="2025-01-13T21:06:49.934685022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934699040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934716243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934730949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934745820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934760156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934774533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934797451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934816144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934829473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934842854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934856758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934873890Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934895729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934910814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935343 containerd[1611]: time="2025-01-13T21:06:49.934923681Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:06:49.935661 containerd[1611]: time="2025-01-13T21:06:49.934970400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:06:49.935661 containerd[1611]: time="2025-01-13T21:06:49.934988405Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:06:49.935661 containerd[1611]: time="2025-01-13T21:06:49.935042420Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:06:49.935661 containerd[1611]: time="2025-01-13T21:06:49.935060671Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:06:49.935661 containerd[1611]: time="2025-01-13T21:06:49.935072326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:06:49.935661 containerd[1611]: time="2025-01-13T21:06:49.935086015Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:06:49.935661 containerd[1611]: time="2025-01-13T21:06:49.935097277Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:06:49.935661 containerd[1611]: time="2025-01-13T21:06:49.935108161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:06:49.935832 containerd[1611]: time="2025-01-13T21:06:49.935424523Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:06:49.935832 containerd[1611]: time="2025-01-13T21:06:49.935483450Z" level=info msg="Connect containerd service" Jan 13 21:06:49.935832 containerd[1611]: time="2025-01-13T21:06:49.935519614Z" level=info msg="using legacy CRI server" Jan 13 21:06:49.935832 containerd[1611]: time="2025-01-13T21:06:49.935527538Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:06:49.935832 containerd[1611]: time="2025-01-13T21:06:49.935644993Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:06:49.940610 containerd[1611]: time="2025-01-13T21:06:49.939207490Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 
21:06:49.940924 containerd[1611]: time="2025-01-13T21:06:49.940901463Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:06:49.940977 containerd[1611]: time="2025-01-13T21:06:49.940952518Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:06:49.941075 containerd[1611]: time="2025-01-13T21:06:49.941043735Z" level=info msg="Start subscribing containerd event" Jan 13 21:06:49.941122 containerd[1611]: time="2025-01-13T21:06:49.941085377Z" level=info msg="Start recovering state" Jan 13 21:06:49.941164 containerd[1611]: time="2025-01-13T21:06:49.941143061Z" level=info msg="Start event monitor" Jan 13 21:06:49.942123 containerd[1611]: time="2025-01-13T21:06:49.941164827Z" level=info msg="Start snapshots syncer" Jan 13 21:06:49.942123 containerd[1611]: time="2025-01-13T21:06:49.941175598Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:06:49.942123 containerd[1611]: time="2025-01-13T21:06:49.941187343Z" level=info msg="Start streaming server" Jan 13 21:06:49.942123 containerd[1611]: time="2025-01-13T21:06:49.941243177Z" level=info msg="containerd successfully booted in 0.073864s" Jan 13 21:06:49.941836 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:06:49.952969 tar[1610]: linux-amd64/LICENSE Jan 13 21:06:49.952969 tar[1610]: linux-amd64/README.md Jan 13 21:06:49.965975 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:06:51.242153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:06:51.251101 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:06:52.810895 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:06:52.828793 systemd[1]: Started sshd@0-172.24.4.134:22-172.24.4.1:46552.service - OpenSSH per-connection server daemon (172.24.4.1:46552). Jan 13 21:06:52.911244 kubelet[1698]: E0113 21:06:52.910996 1698 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:06:52.916759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:06:52.917883 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:06:54.118667 sshd[1707]: Accepted publickey for core from 172.24.4.1 port 46552 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:06:54.123689 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:06:54.149891 systemd-logind[1590]: New session 1 of user core. Jan 13 21:06:54.154175 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:06:54.172770 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:06:54.203760 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:06:54.219856 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:06:54.240240 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:06:54.398802 systemd[1715]: Queued start job for default target default.target. 
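[Editor's note] The long CRI plugin dump above is containerd echoing its effective configuration; several of the reported values (overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:false, sandbox image pause:3.8, CNI paths) map onto /etc/containerd/config.toml keys roughly like this abridged sketch (illustrative, not the host's actual file):

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error above is expected at this point: /etc/cni/net.d is still empty until a network plugin is installed.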
Jan 13 21:06:54.399576 systemd[1715]: Created slice app.slice - User Application Slice. Jan 13 21:06:54.399597 systemd[1715]: Reached target paths.target - Paths. Jan 13 21:06:54.399612 systemd[1715]: Reached target timers.target - Timers. Jan 13 21:06:54.405069 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:06:54.432541 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:06:54.432608 systemd[1715]: Reached target sockets.target - Sockets. Jan 13 21:06:54.432623 systemd[1715]: Reached target basic.target - Basic System. Jan 13 21:06:54.432664 systemd[1715]: Reached target default.target - Main User Target. Jan 13 21:06:54.432691 systemd[1715]: Startup finished in 179ms. Jan 13 21:06:54.432822 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:06:54.453373 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:06:54.860370 login[1677]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 21:06:54.862694 systemd[1]: Started sshd@1-172.24.4.134:22-172.24.4.1:56760.service - OpenSSH per-connection server daemon (172.24.4.1:56760). Jan 13 21:06:54.875812 systemd-logind[1590]: New session 2 of user core. Jan 13 21:06:54.876943 login[1678]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 21:06:54.884764 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:06:54.898445 systemd-logind[1590]: New session 3 of user core. Jan 13 21:06:54.908382 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:06:55.894707 coreos-metadata[1566]: Jan 13 21:06:55.894 WARN failed to locate config-drive, using the metadata service API instead Jan 13 21:06:55.943254 coreos-metadata[1566]: Jan 13 21:06:55.943 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 13 21:06:56.206583 coreos-metadata[1566]: Jan 13 21:06:56.206 INFO Fetch successful Jan 13 21:06:56.206583 coreos-metadata[1566]: Jan 13 21:06:56.206 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 21:06:56.221922 coreos-metadata[1566]: Jan 13 21:06:56.221 INFO Fetch successful Jan 13 21:06:56.221922 coreos-metadata[1566]: Jan 13 21:06:56.221 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 13 21:06:56.242071 coreos-metadata[1566]: Jan 13 21:06:56.241 INFO Fetch successful Jan 13 21:06:56.242071 coreos-metadata[1566]: Jan 13 21:06:56.242 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 13 21:06:56.257133 coreos-metadata[1566]: Jan 13 21:06:56.257 INFO Fetch successful Jan 13 21:06:56.257133 coreos-metadata[1566]: Jan 13 21:06:56.257 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 13 21:06:56.273132 coreos-metadata[1566]: Jan 13 21:06:56.272 INFO Fetch successful Jan 13 21:06:56.273132 coreos-metadata[1566]: Jan 13 21:06:56.272 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 13 21:06:56.289180 coreos-metadata[1566]: Jan 13 21:06:56.289 INFO Fetch successful Jan 13 21:06:56.332817 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:06:56.335467 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
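[Editor's note] coreos-metadata fails to find a config-drive and falls back to the link-local metadata API; each "Fetching ... Attempt #1" / "Fetch successful" pair above is a plain HTTP GET. The same endpoints can be queried by hand from the instance:

    # OpenStack/EC2-compatible metadata service, as used by the agent above
    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/instance-id
    curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json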
Jan 13 21:06:56.433609 coreos-metadata[1649]: Jan 13 21:06:56.433 WARN failed to locate config-drive, using the metadata service API instead Jan 13 21:06:56.476441 coreos-metadata[1649]: Jan 13 21:06:56.475 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 13 21:06:56.495314 coreos-metadata[1649]: Jan 13 21:06:56.495 INFO Fetch successful Jan 13 21:06:56.495314 coreos-metadata[1649]: Jan 13 21:06:56.495 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 21:06:56.510433 coreos-metadata[1649]: Jan 13 21:06:56.510 INFO Fetch successful Jan 13 21:06:56.515500 unknown[1649]: wrote ssh authorized keys file for user: core Jan 13 21:06:56.555541 update-ssh-keys[1769]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:06:56.556584 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:06:56.565300 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:06:56.564695 systemd[1]: Finished sshkeys.service. Jan 13 21:06:56.568954 sshd[1731]: Accepted publickey for core from 172.24.4.1 port 56760 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:06:56.577927 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:06:56.578411 systemd[1]: Startup finished in 17.625s (kernel) + 12.460s (userspace) = 30.086s. Jan 13 21:06:56.589368 systemd-logind[1590]: New session 4 of user core. Jan 13 21:06:56.598543 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:06:57.492241 sshd[1776]: Connection closed by 172.24.4.1 port 56760 Jan 13 21:06:57.494374 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Jan 13 21:06:57.512686 systemd[1]: Started sshd@2-172.24.4.134:22-172.24.4.1:56776.service - OpenSSH per-connection server daemon (172.24.4.1:56776). Jan 13 21:06:57.516537 systemd[1]: sshd@1-172.24.4.134:22-172.24.4.1:56760.service: Deactivated successfully. Jan 13 21:06:57.522645 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:06:57.527397 systemd-logind[1590]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:06:57.530109 systemd-logind[1590]: Removed session 4. Jan 13 21:06:58.810802 sshd[1778]: Accepted publickey for core from 172.24.4.1 port 56776 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:06:58.813676 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:06:58.823602 systemd-logind[1590]: New session 5 of user core. Jan 13 21:06:58.832513 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:06:59.456045 sshd[1784]: Connection closed by 172.24.4.1 port 56776 Jan 13 21:06:59.456854 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Jan 13 21:06:59.465603 systemd[1]: Started sshd@3-172.24.4.134:22-172.24.4.1:56786.service - OpenSSH per-connection server daemon (172.24.4.1:56786). Jan 13 21:06:59.467724 systemd[1]: sshd@2-172.24.4.134:22-172.24.4.1:56776.service: Deactivated successfully. Jan 13 21:06:59.475161 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:06:59.478392 systemd-logind[1590]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:06:59.482511 systemd-logind[1590]: Removed session 5. 
Jan 13 21:07:01.044558 sshd[1786]: Accepted publickey for core from 172.24.4.1 port 56786 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:07:01.047250 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:07:01.056717 systemd-logind[1590]: New session 6 of user core. Jan 13 21:07:01.065629 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:07:01.631775 sshd[1792]: Connection closed by 172.24.4.1 port 56786 Jan 13 21:07:01.633372 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Jan 13 21:07:01.647422 systemd[1]: Started sshd@4-172.24.4.134:22-172.24.4.1:56798.service - OpenSSH per-connection server daemon (172.24.4.1:56798). Jan 13 21:07:01.649749 systemd[1]: sshd@3-172.24.4.134:22-172.24.4.1:56786.service: Deactivated successfully. Jan 13 21:07:01.653881 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:07:01.658754 systemd-logind[1590]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:07:01.664261 systemd-logind[1590]: Removed session 6. Jan 13 21:07:03.041525 sshd[1795]: Accepted publickey for core from 172.24.4.1 port 56798 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:07:03.044166 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:07:03.046561 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:07:03.056384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:07:03.063346 systemd-logind[1590]: New session 7 of user core. Jan 13 21:07:03.067240 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:07:03.389366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:07:03.409974 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:07:03.498665 kubelet[1813]: E0113 21:07:03.498535 1813 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:07:03.502313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:07:03.502676 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:07:03.519536 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:07:03.520491 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:07:03.536135 sudo[1820]: pam_unix(sudo:session): session closed for user root Jan 13 21:07:03.690163 sshd[1804]: Connection closed by 172.24.4.1 port 56798 Jan 13 21:07:03.692529 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Jan 13 21:07:03.704922 systemd[1]: Started sshd@5-172.24.4.134:22-172.24.4.1:55988.service - OpenSSH per-connection server daemon (172.24.4.1:55988). Jan 13 21:07:03.706302 systemd[1]: sshd@4-172.24.4.134:22-172.24.4.1:56798.service: Deactivated successfully. Jan 13 21:07:03.711881 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:07:03.714302 systemd-logind[1590]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:07:03.719346 systemd-logind[1590]: Removed session 7. 
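[Editor's note] The sudo records above ("core : PWD=/home/core ; USER=root ; COMMAND=...") are sudo's standard audit lines. For them to succeed, the core user needs a sudoers grant along these lines; this is only a sketch of the shape, Flatcar ships its own policy for core:

    # /etc/sudoers.d/core (illustrative)
    core ALL=(ALL) NOPASSWD: ALL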
Jan 13 21:07:05.006143 sshd[1826]: Accepted publickey for core from 172.24.4.1 port 55988 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:07:05.008948 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:07:05.019959 systemd-logind[1590]: New session 8 of user core. Jan 13 21:07:05.028640 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:07:05.494946 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:07:05.495778 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:07:05.504423 sudo[1833]: pam_unix(sudo:session): session closed for user root Jan 13 21:07:05.516329 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 21:07:05.516985 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:07:05.553719 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 21:07:05.613827 augenrules[1855]: No rules Jan 13 21:07:05.615140 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:07:05.615661 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 21:07:05.619175 sudo[1832]: pam_unix(sudo:session): session closed for user root Jan 13 21:07:05.808154 sshd[1831]: Connection closed by 172.24.4.1 port 55988 Jan 13 21:07:05.808364 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Jan 13 21:07:05.822168 systemd[1]: Started sshd@6-172.24.4.134:22-172.24.4.1:55998.service - OpenSSH per-connection server daemon (172.24.4.1:55998). Jan 13 21:07:05.826328 systemd[1]: sshd@5-172.24.4.134:22-172.24.4.1:55988.service: Deactivated successfully. Jan 13 21:07:05.836294 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:07:05.840196 systemd-logind[1590]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:07:05.842833 systemd-logind[1590]: Removed session 8. Jan 13 21:07:06.981056 sshd[1861]: Accepted publickey for core from 172.24.4.1 port 55998 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:07:06.983963 sshd-session[1861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:07:06.993066 systemd-logind[1590]: New session 9 of user core. Jan 13 21:07:07.004546 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:07:07.414872 sudo[1868]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:07:07.415719 sudo[1868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:07:07.995699 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:07:07.996173 (dockerd)[1887]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:07:08.540774 dockerd[1887]: time="2025-01-13T21:07:08.540281838Z" level=info msg="Starting up" Jan 13 21:07:08.971376 dockerd[1887]: time="2025-01-13T21:07:08.970044289Z" level=info msg="Loading containers: start." Jan 13 21:07:09.204095 kernel: Initializing XFRM netlink socket Jan 13 21:07:09.299258 systemd-networkd[1209]: docker0: Link UP Jan 13 21:07:09.324268 dockerd[1887]: time="2025-01-13T21:07:09.324199129Z" level=info msg="Loading containers: done." 
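[Editor's note] After the two rules files are removed, audit-rules.service regenerates and reloads the kernel audit ruleset, and augenrules reports "No rules". The manual equivalent of that reload:

    # Recompile /etc/audit/rules.d/*.rules into the active ruleset and load it
    augenrules --load
    # Inspect what is loaded; prints "No rules" when the directory is empty
    auditctl -l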
Jan 13 21:07:09.359641 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1825593114-merged.mount: Deactivated successfully. Jan 13 21:07:09.363680 dockerd[1887]: time="2025-01-13T21:07:09.363607813Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:07:09.363846 dockerd[1887]: time="2025-01-13T21:07:09.363804571Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 21:07:09.364096 dockerd[1887]: time="2025-01-13T21:07:09.364058760Z" level=info msg="Daemon has completed initialization" Jan 13 21:07:09.431310 dockerd[1887]: time="2025-01-13T21:07:09.430250243Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:07:09.431289 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:07:11.302834 containerd[1611]: time="2025-01-13T21:07:11.302479628Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 21:07:12.108343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3540571597.mount: Deactivated successfully. Jan 13 21:07:13.587737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:07:13.596628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:07:13.720150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:07:13.724280 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:07:13.847423 kubelet[2145]: E0113 21:07:13.847073 2145 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:07:13.849452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:07:13.849642 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
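[Editor's note] The recurring kubelet exit (status=1) above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps scheduling restarts until kubeadm writes it. For illustration only, a minimal hand-written KubeletConfiguration has this shape (kubeadm generates the real one; cgroupfs matches the driver reported later in this log):

    # /var/lib/kubelet/config.yaml (illustrative)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests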
Jan 13 21:07:14.481216 containerd[1611]: time="2025-01-13T21:07:14.480769933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:14.482777 containerd[1611]: time="2025-01-13T21:07:14.482478666Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262" Jan 13 21:07:14.483934 containerd[1611]: time="2025-01-13T21:07:14.483867878Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:14.488137 containerd[1611]: time="2025-01-13T21:07:14.487792055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:14.490024 containerd[1611]: time="2025-01-13T21:07:14.489872072Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.187343423s" Jan 13 21:07:14.490024 containerd[1611]: time="2025-01-13T21:07:14.489928660Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 21:07:14.519101 containerd[1611]: time="2025-01-13T21:07:14.518974087Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 21:07:16.858276 containerd[1611]: time="2025-01-13T21:07:16.857505921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:16.860480 containerd[1611]: time="2025-01-13T21:07:16.859557070Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740" Jan 13 21:07:16.864885 containerd[1611]: time="2025-01-13T21:07:16.864838302Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:16.871528 containerd[1611]: time="2025-01-13T21:07:16.870391083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:16.871528 containerd[1611]: time="2025-01-13T21:07:16.871406547Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.352314524s" Jan 13 21:07:16.871528 containerd[1611]: time="2025-01-13T21:07:16.871434278Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 
21:07:16.909227 containerd[1611]: time="2025-01-13T21:07:16.908966185Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 21:07:18.569870 containerd[1611]: time="2025-01-13T21:07:18.569783291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:18.572714 containerd[1611]: time="2025-01-13T21:07:18.572638002Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830" Jan 13 21:07:18.574644 containerd[1611]: time="2025-01-13T21:07:18.574604542Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:18.577928 containerd[1611]: time="2025-01-13T21:07:18.577867045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:18.579420 containerd[1611]: time="2025-01-13T21:07:18.579020726Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.669998398s" Jan 13 21:07:18.579420 containerd[1611]: time="2025-01-13T21:07:18.579058106Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 21:07:18.602606 containerd[1611]: time="2025-01-13T21:07:18.602574213Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 21:07:19.968704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2726695781.mount: Deactivated successfully. 
Jan 13 21:07:20.495259 containerd[1611]: time="2025-01-13T21:07:20.495176873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:20.497500 containerd[1611]: time="2025-01-13T21:07:20.497429009Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Jan 13 21:07:20.500099 containerd[1611]: time="2025-01-13T21:07:20.499753780Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:20.504098 containerd[1611]: time="2025-01-13T21:07:20.503949147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:20.506633 containerd[1611]: time="2025-01-13T21:07:20.505864423Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.903091013s" Jan 13 21:07:20.506633 containerd[1611]: time="2025-01-13T21:07:20.505934563Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 21:07:20.560380 containerd[1611]: time="2025-01-13T21:07:20.559945733Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:07:21.214857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866151135.mount: Deactivated successfully. 
Jan 13 21:07:22.381096 containerd[1611]: time="2025-01-13T21:07:22.380938820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:22.382692 containerd[1611]: time="2025-01-13T21:07:22.382614424Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 13 21:07:22.384130 containerd[1611]: time="2025-01-13T21:07:22.384063353Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:22.388922 containerd[1611]: time="2025-01-13T21:07:22.388871207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:22.391362 containerd[1611]: time="2025-01-13T21:07:22.391234679Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.831245317s" Jan 13 21:07:22.391362 containerd[1611]: time="2025-01-13T21:07:22.391267913Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:07:22.430471 containerd[1611]: time="2025-01-13T21:07:22.430401512Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:07:22.994266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2716231074.mount: Deactivated successfully. 
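[Editor's note] Each PullImage/Pulled pair in these records is containerd resolving a tag to a digest and reporting compressed bytes read plus wall time. The same pulls can be reproduced against the CRI socket, e.g.:

    # Pull through the CRI plugin (the same code path the kubelet uses)
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/pause:3.9
    # Or directly in containerd's Kubernetes namespace
    ctr -n k8s.io images pull registry.k8s.io/coredns/coredns:v1.11.1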
Jan 13 21:07:23.006288 containerd[1611]: time="2025-01-13T21:07:23.006070696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:23.008445 containerd[1611]: time="2025-01-13T21:07:23.008337527Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 13 21:07:23.009913 containerd[1611]: time="2025-01-13T21:07:23.009784242Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:23.015811 containerd[1611]: time="2025-01-13T21:07:23.015678940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:23.018114 containerd[1611]: time="2025-01-13T21:07:23.017815883Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 586.897537ms" Jan 13 21:07:23.018114 containerd[1611]: time="2025-01-13T21:07:23.017889155Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:07:23.073319 containerd[1611]: time="2025-01-13T21:07:23.073236642Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 21:07:23.723240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3215771285.mount: Deactivated successfully. Jan 13 21:07:24.090664 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 21:07:24.103210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:07:24.426190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:07:24.430745 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:07:24.652486 kubelet[2269]: E0113 21:07:24.652402 2269 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:07:24.655455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:07:24.656610 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:07:27.146810 containerd[1611]: time="2025-01-13T21:07:27.146760867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:27.148597 containerd[1611]: time="2025-01-13T21:07:27.148558947Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jan 13 21:07:27.149053 containerd[1611]: time="2025-01-13T21:07:27.149012261Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:27.152959 containerd[1611]: time="2025-01-13T21:07:27.152919201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:07:27.154293 containerd[1611]: time="2025-01-13T21:07:27.154214653Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.080919923s" Jan 13 21:07:27.154293 containerd[1611]: time="2025-01-13T21:07:27.154242543Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 21:07:32.208529 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:07:32.216224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:07:32.246851 systemd[1]: Reloading requested from client PID 2376 ('systemctl') (unit session-9.scope)... Jan 13 21:07:32.246873 systemd[1]: Reloading... Jan 13 21:07:32.355025 zram_generator::config[2415]: No configuration found. Jan 13 21:07:32.524627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:07:32.601936 systemd[1]: Reloading finished in 354 ms. Jan 13 21:07:32.643515 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:07:32.643587 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:07:32.644052 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:07:32.648341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:07:32.770332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:07:32.795379 (kubelet)[2492]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:07:33.007024 kubelet[2492]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:07:33.007024 kubelet[2492]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
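[Editor's note] The "Referenced but unset environment variable" notices (KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS) and the deprecated-flag warnings above both stem from how the kubelet unit is assembled: flags arrive through environment variables defined in a drop-in, and several of those flags now belong in the config file instead. A kubeadm-style drop-in looks roughly like this sketch (file names vary by distribution):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (illustrative)
    [Service]
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS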
Jan 13 21:07:33.007024 kubelet[2492]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:07:33.007024 kubelet[2492]: I0113 21:07:33.006070 2492 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:07:34.054949 kubelet[2492]: I0113 21:07:34.054913 2492 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:07:34.055433 kubelet[2492]: I0113 21:07:34.055418 2492 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:07:34.055757 kubelet[2492]: I0113 21:07:34.055744 2492 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:07:34.076777 kubelet[2492]: E0113 21:07:34.076737 2492 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:34.085922 kubelet[2492]: I0113 21:07:34.085796 2492 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:07:34.104501 kubelet[2492]: I0113 21:07:34.104423 2492 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:07:34.105309 kubelet[2492]: I0113 21:07:34.105262 2492 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:07:34.105772 kubelet[2492]: I0113 21:07:34.105702 2492 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:07:34.107078 kubelet[2492]: I0113 21:07:34.106975 2492 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:07:34.107078 kubelet[2492]: I0113 21:07:34.107063 2492 container_manager_linux.go:301] "Creating device plugin manager" Jan 
13 21:07:34.107332 kubelet[2492]: I0113 21:07:34.107287 2492 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:07:34.107510 kubelet[2492]: I0113 21:07:34.107486 2492 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:07:34.107573 kubelet[2492]: I0113 21:07:34.107540 2492 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:07:34.107635 kubelet[2492]: I0113 21:07:34.107594 2492 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:07:34.107755 kubelet[2492]: I0113 21:07:34.107656 2492 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:07:34.109827 kubelet[2492]: W0113 21:07:34.109778 2492 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-e-56a5643f90.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:34.111003 kubelet[2492]: E0113 21:07:34.109912 2492 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-e-56a5643f90.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:34.111344 kubelet[2492]: W0113 21:07:34.111258 2492 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:34.111399 kubelet[2492]: E0113 21:07:34.111368 2492 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:34.111700 kubelet[2492]: I0113 21:07:34.111665 2492 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 21:07:34.119094 kubelet[2492]: I0113 21:07:34.119044 2492 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:07:34.119196 kubelet[2492]: W0113 21:07:34.119152 2492 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 21:07:34.120464 kubelet[2492]: I0113 21:07:34.120315 2492 server.go:1256] "Started kubelet" Jan 13 21:07:34.123203 kubelet[2492]: I0113 21:07:34.123158 2492 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:07:34.131251 kubelet[2492]: I0113 21:07:34.130669 2492 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:07:34.131906 kubelet[2492]: E0113 21:07:34.131884 2492 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.134:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.134:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-0-e-56a5643f90.novalocal.181a5ca14add543b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-e-56a5643f90.novalocal,UID:ci-4152-2-0-e-56a5643f90.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-e-56a5643f90.novalocal,},FirstTimestamp:2025-01-13 21:07:34.120256571 +0000 UTC m=+1.316401821,LastTimestamp:2025-01-13 21:07:34.120256571 +0000 UTC m=+1.316401821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-e-56a5643f90.novalocal,}" Jan 13 21:07:34.132207 kubelet[2492]: I0113 21:07:34.132194 2492 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:07:34.134024 kubelet[2492]: I0113 21:07:34.132688 2492 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:07:34.135322 kubelet[2492]: I0113 21:07:34.135284 2492 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:07:34.136715 kubelet[2492]: I0113 21:07:34.136698 2492 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:07:34.136944 kubelet[2492]: I0113 21:07:34.136933 2492 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:07:34.137271 kubelet[2492]: I0113 21:07:34.137258 2492 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:07:34.140759 kubelet[2492]: W0113 21:07:34.140707 2492 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:34.140894 kubelet[2492]: E0113 21:07:34.140883 2492 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:34.141861 kubelet[2492]: I0113 21:07:34.141844 2492 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:07:34.142049 kubelet[2492]: I0113 21:07:34.142032 2492 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:07:34.144270 kubelet[2492]: E0113 21:07:34.144252 2492 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-e-56a5643f90.novalocal?timeout=10s\": dial 
tcp 172.24.4.134:6443: connect: connection refused" interval="200ms" Jan 13 21:07:34.147376 kubelet[2492]: I0113 21:07:34.147339 2492 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:07:34.157605 kubelet[2492]: I0113 21:07:34.157579 2492 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:07:34.158680 kubelet[2492]: I0113 21:07:34.158667 2492 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:07:34.158772 kubelet[2492]: I0113 21:07:34.158761 2492 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:07:34.158850 kubelet[2492]: I0113 21:07:34.158841 2492 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:07:34.158976 kubelet[2492]: E0113 21:07:34.158964 2492 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:07:34.166263 kubelet[2492]: E0113 21:07:34.166245 2492 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:07:34.166774 kubelet[2492]: W0113 21:07:34.166727 2492 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:34.166884 kubelet[2492]: E0113 21:07:34.166874 2492 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:34.203038 kubelet[2492]: I0113 21:07:34.202931 2492 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:07:34.203038 kubelet[2492]: I0113 21:07:34.202954 2492 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:07:34.203038 kubelet[2492]: I0113 21:07:34.202969 2492 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:07:34.212208 kubelet[2492]: I0113 21:07:34.212151 2492 policy_none.go:49] "None policy: Start" Jan 13 21:07:34.213674 kubelet[2492]: I0113 21:07:34.213603 2492 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:07:34.213674 kubelet[2492]: I0113 21:07:34.213630 2492 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:07:34.219696 kubelet[2492]: I0113 21:07:34.219633 2492 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:07:34.219905 kubelet[2492]: I0113 21:07:34.219864 2492 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:07:34.224903 kubelet[2492]: E0113 21:07:34.224854 2492 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-0-e-56a5643f90.novalocal\" not found" Jan 13 21:07:34.246796 kubelet[2492]: I0113 21:07:34.246728 2492 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.247660 kubelet[2492]: E0113 21:07:34.247608 2492 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.134:6443/api/v1/nodes\": dial tcp 172.24.4.134:6443: connect: connection refused" 
node="ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.260190 kubelet[2492]: I0113 21:07:34.260097 2492 topology_manager.go:215] "Topology Admit Handler" podUID="845ee12f25603faea811a85cc13a72fc" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.264395 kubelet[2492]: I0113 21:07:34.263929 2492 topology_manager.go:215] "Topology Admit Handler" podUID="e957d1b7a23d3dc34b983a5730b5c04b" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.268344 kubelet[2492]: I0113 21:07:34.267740 2492 topology_manager.go:215] "Topology Admit Handler" podUID="4a0f0b010055af21b62560eb1df7211c" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.338180 kubelet[2492]: I0113 21:07:34.337932 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e957d1b7a23d3dc34b983a5730b5c04b-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"e957d1b7a23d3dc34b983a5730b5c04b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.338180 kubelet[2492]: I0113 21:07:34.338104 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e957d1b7a23d3dc34b983a5730b5c04b-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"e957d1b7a23d3dc34b983a5730b5c04b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.338180 kubelet[2492]: I0113 21:07:34.338173 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a0f0b010055af21b62560eb1df7211c-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"4a0f0b010055af21b62560eb1df7211c\") " pod="kube-system/kube-scheduler-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.338458 kubelet[2492]: I0113 21:07:34.338240 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/845ee12f25603faea811a85cc13a72fc-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"845ee12f25603faea811a85cc13a72fc\") " pod="kube-system/kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.338458 kubelet[2492]: I0113 21:07:34.338305 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/845ee12f25603faea811a85cc13a72fc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"845ee12f25603faea811a85cc13a72fc\") " pod="kube-system/kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.338458 kubelet[2492]: I0113 21:07:34.338364 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e957d1b7a23d3dc34b983a5730b5c04b-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"e957d1b7a23d3dc34b983a5730b5c04b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.338458 kubelet[2492]: I0113 21:07:34.338420 2492 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/845ee12f25603faea811a85cc13a72fc-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"845ee12f25603faea811a85cc13a72fc\") " pod="kube-system/kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.338638 kubelet[2492]: I0113 21:07:34.338488 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e957d1b7a23d3dc34b983a5730b5c04b-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"e957d1b7a23d3dc34b983a5730b5c04b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.338638 kubelet[2492]: I0113 21:07:34.338554 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e957d1b7a23d3dc34b983a5730b5c04b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"e957d1b7a23d3dc34b983a5730b5c04b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.345671 kubelet[2492]: E0113 21:07:34.345596 2492 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-e-56a5643f90.novalocal?timeout=10s\": dial tcp 172.24.4.134:6443: connect: connection refused" interval="400ms" Jan 13 21:07:34.451957 kubelet[2492]: I0113 21:07:34.451887 2492 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.452866 kubelet[2492]: E0113 21:07:34.452818 2492 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.134:6443/api/v1/nodes\": dial tcp 172.24.4.134:6443: connect: connection refused" node="ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.496748 update_engine[1601]: I20250113 21:07:34.496430 1601 update_attempter.cc:509] Updating boot flags... 
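Note the lease controller's retry interval in the entries above: 200ms at 21:07:34.144, then 400ms, and (further down) 800ms and 1.6s, doubling while the API server at 172.24.4.134:6443 refuses connections. A doubling backoff with a cap reproduces that schedule; the sketch below assumes a 7s cap and a stubbed lease call, neither of which is taken from the log:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// tryEnsureLease stands in for the kubelet's call to create/renew its
// node lease; here it always fails, the way the log shows while the
// control plane is still coming up.
func tryEnsureLease() error {
	return errors.New("dial tcp 172.24.4.134:6443: connect: connection refused")
}

func main() {
	// The logged intervals double each attempt: 200ms, 400ms, 800ms, 1.6s.
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap, not visible in the log
	for attempt := 0; attempt < 4; attempt++ {
		if err := tryEnsureLease(); err != nil {
			fmt.Printf("Failed to ensure lease exists, will retry in %v: %v\n", interval, err)
			time.Sleep(interval)
			if interval *= 2; interval > maxInterval {
				interval = maxInterval
			}
		}
	}
}
```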
Jan 13 21:07:34.543096 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2529) Jan 13 21:07:34.589981 containerd[1611]: time="2025-01-13T21:07:34.588723501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal,Uid:e957d1b7a23d3dc34b983a5730b5c04b,Namespace:kube-system,Attempt:0,}" Jan 13 21:07:34.590837 containerd[1611]: time="2025-01-13T21:07:34.590455776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal,Uid:845ee12f25603faea811a85cc13a72fc,Namespace:kube-system,Attempt:0,}" Jan 13 21:07:34.592088 containerd[1611]: time="2025-01-13T21:07:34.590897979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-e-56a5643f90.novalocal,Uid:4a0f0b010055af21b62560eb1df7211c,Namespace:kube-system,Attempt:0,}" Jan 13 21:07:34.746963 kubelet[2492]: E0113 21:07:34.746889 2492 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-e-56a5643f90.novalocal?timeout=10s\": dial tcp 172.24.4.134:6443: connect: connection refused" interval="800ms" Jan 13 21:07:34.858132 kubelet[2492]: I0113 21:07:34.857052 2492 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:34.858132 kubelet[2492]: E0113 21:07:34.857831 2492 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.134:6443/api/v1/nodes\": dial tcp 172.24.4.134:6443: connect: connection refused" node="ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:35.141068 kubelet[2492]: W0113 21:07:35.140725 2492 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:35.141068 kubelet[2492]: E0113 21:07:35.140801 2492 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:35.178254 kubelet[2492]: W0113 21:07:35.176761 2492 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:35.178254 kubelet[2492]: E0113 21:07:35.176887 2492 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:35.207613 kubelet[2492]: W0113 21:07:35.207325 2492 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-e-56a5643f90.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:35.207613 kubelet[2492]: E0113 21:07:35.207483 2492 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://172.24.4.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-e-56a5643f90.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:35.209624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1883081774.mount: Deactivated successfully. Jan 13 21:07:35.221505 containerd[1611]: time="2025-01-13T21:07:35.220741042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:07:35.224149 containerd[1611]: time="2025-01-13T21:07:35.224089121Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 13 21:07:35.230044 containerd[1611]: time="2025-01-13T21:07:35.230004647Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:07:35.232781 containerd[1611]: time="2025-01-13T21:07:35.232687560Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:07:35.234388 containerd[1611]: time="2025-01-13T21:07:35.234286595Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:07:35.235648 containerd[1611]: time="2025-01-13T21:07:35.235546725Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:07:35.236339 containerd[1611]: time="2025-01-13T21:07:35.236201491Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:07:35.240658 containerd[1611]: time="2025-01-13T21:07:35.240579076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:07:35.243759 containerd[1611]: time="2025-01-13T21:07:35.243174348Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 652.186305ms" Jan 13 21:07:35.245937 containerd[1611]: time="2025-01-13T21:07:35.245651158Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 656.811417ms" Jan 13 21:07:35.246574 containerd[1611]: time="2025-01-13T21:07:35.246496585Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 655.880037ms" Jan 13 
21:07:35.256641 kubelet[2492]: W0113 21:07:35.256580 2492 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:35.256641 kubelet[2492]: E0113 21:07:35.256621 2492 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.134:6443: connect: connection refused Jan 13 21:07:35.441881 containerd[1611]: time="2025-01-13T21:07:35.441513883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:07:35.443609 containerd[1611]: time="2025-01-13T21:07:35.443284411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:07:35.443609 containerd[1611]: time="2025-01-13T21:07:35.443309612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:35.443609 containerd[1611]: time="2025-01-13T21:07:35.443457186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:35.458844 containerd[1611]: time="2025-01-13T21:07:35.458658262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:07:35.459209 containerd[1611]: time="2025-01-13T21:07:35.459081201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:07:35.459209 containerd[1611]: time="2025-01-13T21:07:35.459166025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:35.459785 containerd[1611]: time="2025-01-13T21:07:35.459568141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:35.461751 containerd[1611]: time="2025-01-13T21:07:35.461553491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:07:35.461751 containerd[1611]: time="2025-01-13T21:07:35.461645711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:07:35.461751 containerd[1611]: time="2025-01-13T21:07:35.461690092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:35.477186 containerd[1611]: time="2025-01-13T21:07:35.476735928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:35.532969 containerd[1611]: time="2025-01-13T21:07:35.532847568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-e-56a5643f90.novalocal,Uid:4a0f0b010055af21b62560eb1df7211c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9401b95d0efaa6ad9e1839922fff15b4862b20b36e82a2adfdd4e6604beb3a93\"" Jan 13 21:07:35.544163 containerd[1611]: time="2025-01-13T21:07:35.544048424Z" level=info msg="CreateContainer within sandbox \"9401b95d0efaa6ad9e1839922fff15b4862b20b36e82a2adfdd4e6604beb3a93\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:07:35.548063 kubelet[2492]: E0113 21:07:35.548032 2492 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-e-56a5643f90.novalocal?timeout=10s\": dial tcp 172.24.4.134:6443: connect: connection refused" interval="1.6s" Jan 13 21:07:35.560443 containerd[1611]: time="2025-01-13T21:07:35.560408832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal,Uid:e957d1b7a23d3dc34b983a5730b5c04b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c02a3729c01e7396bce90f424ee6f48823b6b5536ebe48f7efcd8f7ff969f04a\"" Jan 13 21:07:35.569318 containerd[1611]: time="2025-01-13T21:07:35.569270841Z" level=info msg="CreateContainer within sandbox \"c02a3729c01e7396bce90f424ee6f48823b6b5536ebe48f7efcd8f7ff969f04a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:07:35.582537 containerd[1611]: time="2025-01-13T21:07:35.582427967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal,Uid:845ee12f25603faea811a85cc13a72fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b5fed8f38240dccb72f4fbaccd7e8a4ca5c4fb0027b8028e8903cdc074d934d\"" Jan 13 21:07:35.585590 containerd[1611]: time="2025-01-13T21:07:35.585503317Z" level=info msg="CreateContainer within sandbox \"9401b95d0efaa6ad9e1839922fff15b4862b20b36e82a2adfdd4e6604beb3a93\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5e3f7779e0747477a4899dd35fae25d8cc1dcc3d231737f40105a83d7c2109ae\"" Jan 13 21:07:35.586421 containerd[1611]: time="2025-01-13T21:07:35.586118761Z" level=info msg="StartContainer for \"5e3f7779e0747477a4899dd35fae25d8cc1dcc3d231737f40105a83d7c2109ae\"" Jan 13 21:07:35.587524 containerd[1611]: time="2025-01-13T21:07:35.587481501Z" level=info msg="CreateContainer within sandbox \"4b5fed8f38240dccb72f4fbaccd7e8a4ca5c4fb0027b8028e8903cdc074d934d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:07:35.608799 containerd[1611]: time="2025-01-13T21:07:35.608749685Z" level=info msg="CreateContainer within sandbox \"c02a3729c01e7396bce90f424ee6f48823b6b5536ebe48f7efcd8f7ff969f04a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"baedd5032ee0572669169a42b6a45d2085bcd950805cf4e33206a8db63ff30f0\"" Jan 13 21:07:35.611459 containerd[1611]: time="2025-01-13T21:07:35.610164732Z" level=info msg="StartContainer for \"baedd5032ee0572669169a42b6a45d2085bcd950805cf4e33206a8db63ff30f0\"" Jan 13 21:07:35.621586 containerd[1611]: time="2025-01-13T21:07:35.621534966Z" level=info msg="CreateContainer within sandbox \"4b5fed8f38240dccb72f4fbaccd7e8a4ca5c4fb0027b8028e8903cdc074d934d\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"29e1245a4958cbf374b63501daad9f8b01784a737fdc29189a25ed53e6b12c1b\"" Jan 13 21:07:35.622814 containerd[1611]: time="2025-01-13T21:07:35.622685459Z" level=info msg="StartContainer for \"29e1245a4958cbf374b63501daad9f8b01784a737fdc29189a25ed53e6b12c1b\"" Jan 13 21:07:35.666175 kubelet[2492]: I0113 21:07:35.665819 2492 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:35.666362 kubelet[2492]: E0113 21:07:35.666319 2492 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.134:6443/api/v1/nodes\": dial tcp 172.24.4.134:6443: connect: connection refused" node="ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:35.707735 containerd[1611]: time="2025-01-13T21:07:35.707612961Z" level=info msg="StartContainer for \"5e3f7779e0747477a4899dd35fae25d8cc1dcc3d231737f40105a83d7c2109ae\" returns successfully" Jan 13 21:07:35.741608 containerd[1611]: time="2025-01-13T21:07:35.741539746Z" level=info msg="StartContainer for \"baedd5032ee0572669169a42b6a45d2085bcd950805cf4e33206a8db63ff30f0\" returns successfully" Jan 13 21:07:35.780779 containerd[1611]: time="2025-01-13T21:07:35.780471793Z" level=info msg="StartContainer for \"29e1245a4958cbf374b63501daad9f8b01784a737fdc29189a25ed53e6b12c1b\" returns successfully" Jan 13 21:07:37.271045 kubelet[2492]: I0113 21:07:37.269674 2492 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:37.673095 kubelet[2492]: I0113 21:07:37.673052 2492 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:37.752132 kubelet[2492]: E0113 21:07:37.752094 2492 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 13 21:07:38.113165 kubelet[2492]: I0113 21:07:38.113104 2492 apiserver.go:52] "Watching apiserver" Jan 13 21:07:38.137615 kubelet[2492]: I0113 21:07:38.137560 2492 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:07:40.581814 kubelet[2492]: W0113 21:07:40.581711 2492 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:07:40.783060 systemd[1]: Reloading requested from client PID 2776 ('systemctl') (unit session-9.scope)... Jan 13 21:07:40.783098 systemd[1]: Reloading... Jan 13 21:07:40.905087 zram_generator::config[2815]: No configuration found. Jan 13 21:07:41.067733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:07:41.152867 systemd[1]: Reloading finished in 369 ms. Jan 13 21:07:41.201112 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:07:41.204018 kubelet[2492]: I0113 21:07:41.201225 2492 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:07:41.219859 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:07:41.220288 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:07:41.228676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
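The RunPodSandbox → CreateContainer → StartContainer sequence above is the CRI contract between the kubelet and containerd: a pause-image sandbox is created first, then each container is created inside it and started. Here is a minimal sketch of the same three calls, assuming k8s.io/cri-api v1 over the default containerd socket; the socket path, image tag, and metadata are illustrative, and the long hex IDs seen in the log are generated by containerd, not supplied by the caller:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox metadata mirrors the scheduler pod from the log; the UID is
	// the one visible above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-example",
			Uid:       "4a0f0b010055af21b62560eb1df7211c",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// Image reference is an assumption based on the logged kubelet version.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.29.2"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s in sandbox %s", c.ContainerId, sb.PodSandboxId)
}
```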
Jan 13 21:07:41.493184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:07:41.510287 (kubelet)[2889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:07:41.578624 kubelet[2889]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:07:41.579663 kubelet[2889]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:07:41.579663 kubelet[2889]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:07:41.579663 kubelet[2889]: I0113 21:07:41.579148 2889 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:07:41.585756 kubelet[2889]: I0113 21:07:41.585717 2889 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:07:41.585947 kubelet[2889]: I0113 21:07:41.585936 2889 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:07:41.586357 kubelet[2889]: I0113 21:07:41.586342 2889 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:07:41.588338 kubelet[2889]: I0113 21:07:41.588322 2889 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:07:41.590648 kubelet[2889]: I0113 21:07:41.590629 2889 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:07:41.598944 kubelet[2889]: I0113 21:07:41.598628 2889 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:07:41.603190 kubelet[2889]: I0113 21:07:41.600259 2889 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:07:41.603190 kubelet[2889]: I0113 21:07:41.600475 2889 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:07:41.603190 kubelet[2889]: I0113 21:07:41.600503 2889 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:07:41.603190 kubelet[2889]: I0113 21:07:41.600514 2889 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:07:41.603190 kubelet[2889]: I0113 21:07:41.600553 2889 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:07:41.603190 kubelet[2889]: I0113 21:07:41.600668 2889 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:07:41.603721 kubelet[2889]: I0113 21:07:41.600685 2889 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:07:41.603721 kubelet[2889]: I0113 21:07:41.601188 2889 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:07:41.603721 kubelet[2889]: I0113 21:07:41.601209 2889 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:07:41.605980 kubelet[2889]: I0113 21:07:41.605948 2889 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 21:07:41.607289 kubelet[2889]: I0113 21:07:41.607270 2889 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:07:41.609643 kubelet[2889]: I0113 21:07:41.609620 2889 server.go:1256] "Started kubelet" Jan 13 21:07:41.622844 kubelet[2889]: I0113 21:07:41.621819 2889 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:07:41.630451 kubelet[2889]: I0113 21:07:41.630406 2889 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:07:41.635560 kubelet[2889]: I0113 21:07:41.635147 2889 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:07:41.645681 kubelet[2889]: I0113 21:07:41.645340 2889 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jan 13 21:07:41.645902 kubelet[2889]: I0113 21:07:41.645875 2889 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:07:41.647787 kubelet[2889]: I0113 21:07:41.647761 2889 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:07:41.654807 kubelet[2889]: I0113 21:07:41.654776 2889 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:07:41.655140 kubelet[2889]: I0113 21:07:41.655128 2889 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:07:41.661204 kubelet[2889]: I0113 21:07:41.661175 2889 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:07:41.662402 kubelet[2889]: I0113 21:07:41.662383 2889 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:07:41.662805 kubelet[2889]: I0113 21:07:41.662506 2889 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:07:41.662805 kubelet[2889]: I0113 21:07:41.662532 2889 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:07:41.662805 kubelet[2889]: E0113 21:07:41.662584 2889 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:07:41.682855 kubelet[2889]: E0113 21:07:41.682829 2889 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:07:41.685137 kubelet[2889]: I0113 21:07:41.685021 2889 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:07:41.686265 kubelet[2889]: I0113 21:07:41.686235 2889 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:07:41.688075 kubelet[2889]: I0113 21:07:41.686664 2889 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:07:41.726068 sudo[2918]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 21:07:41.726955 sudo[2918]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 21:07:41.758916 kubelet[2889]: I0113 21:07:41.758803 2889 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:41.768422 kubelet[2889]: E0113 21:07:41.768272 2889 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:07:41.789680 kubelet[2889]: I0113 21:07:41.785371 2889 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:41.789680 kubelet[2889]: I0113 21:07:41.785450 2889 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:41.815186 kubelet[2889]: I0113 21:07:41.815160 2889 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:07:41.815366 kubelet[2889]: I0113 21:07:41.815356 2889 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:07:41.815439 kubelet[2889]: I0113 21:07:41.815430 2889 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:07:41.815686 kubelet[2889]: I0113 21:07:41.815627 2889 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:07:41.815841 
kubelet[2889]: I0113 21:07:41.815830 2889 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:07:41.815904 kubelet[2889]: I0113 21:07:41.815895 2889 policy_none.go:49] "None policy: Start" Jan 13 21:07:41.817732 kubelet[2889]: I0113 21:07:41.817719 2889 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:07:41.817934 kubelet[2889]: I0113 21:07:41.817924 2889 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:07:41.818221 kubelet[2889]: I0113 21:07:41.818167 2889 state_mem.go:75] "Updated machine memory state" Jan 13 21:07:41.819685 kubelet[2889]: I0113 21:07:41.819671 2889 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:07:41.820547 kubelet[2889]: I0113 21:07:41.820491 2889 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:07:41.970648 kubelet[2889]: I0113 21:07:41.970590 2889 topology_manager.go:215] "Topology Admit Handler" podUID="845ee12f25603faea811a85cc13a72fc" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:41.971177 kubelet[2889]: I0113 21:07:41.971048 2889 topology_manager.go:215] "Topology Admit Handler" podUID="e957d1b7a23d3dc34b983a5730b5c04b" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:41.973157 kubelet[2889]: I0113 21:07:41.972073 2889 topology_manager.go:215] "Topology Admit Handler" podUID="4a0f0b010055af21b62560eb1df7211c" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:41.990200 kubelet[2889]: W0113 21:07:41.989312 2889 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:07:41.990200 kubelet[2889]: E0113 21:07:41.989465 2889 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:41.991744 kubelet[2889]: W0113 21:07:41.991329 2889 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:07:41.992342 kubelet[2889]: W0113 21:07:41.992188 2889 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:07:42.056542 kubelet[2889]: I0113 21:07:42.056356 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/845ee12f25603faea811a85cc13a72fc-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"845ee12f25603faea811a85cc13a72fc\") " pod="kube-system/kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:42.057541 kubelet[2889]: I0113 21:07:42.056978 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/845ee12f25603faea811a85cc13a72fc-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"845ee12f25603faea811a85cc13a72fc\") " pod="kube-system/kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:42.057541 kubelet[2889]: I0113 21:07:42.057149 2889 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/845ee12f25603faea811a85cc13a72fc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"845ee12f25603faea811a85cc13a72fc\") " pod="kube-system/kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:42.057541 kubelet[2889]: I0113 21:07:42.057219 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e957d1b7a23d3dc34b983a5730b5c04b-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"e957d1b7a23d3dc34b983a5730b5c04b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:42.057541 kubelet[2889]: I0113 21:07:42.057281 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e957d1b7a23d3dc34b983a5730b5c04b-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"e957d1b7a23d3dc34b983a5730b5c04b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:42.057837 kubelet[2889]: I0113 21:07:42.057327 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e957d1b7a23d3dc34b983a5730b5c04b-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"e957d1b7a23d3dc34b983a5730b5c04b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:42.057837 kubelet[2889]: I0113 21:07:42.057371 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e957d1b7a23d3dc34b983a5730b5c04b-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"e957d1b7a23d3dc34b983a5730b5c04b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:42.057837 kubelet[2889]: I0113 21:07:42.057421 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e957d1b7a23d3dc34b983a5730b5c04b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"e957d1b7a23d3dc34b983a5730b5c04b\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:42.057837 kubelet[2889]: I0113 21:07:42.057465 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a0f0b010055af21b62560eb1df7211c-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-e-56a5643f90.novalocal\" (UID: \"4a0f0b010055af21b62560eb1df7211c\") " pod="kube-system/kube-scheduler-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:42.383385 sudo[2918]: pam_unix(sudo:session): session closed for user root Jan 13 21:07:42.603063 kubelet[2889]: I0113 21:07:42.602948 2889 apiserver.go:52] "Watching apiserver" Jan 13 21:07:42.655716 kubelet[2889]: I0113 21:07:42.655386 2889 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:07:42.737420 kubelet[2889]: W0113 21:07:42.736422 2889 warnings.go:70] metadata.name: this is used in 
the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:07:42.737420 kubelet[2889]: E0113 21:07:42.736516 2889 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal" Jan 13 21:07:42.779341 kubelet[2889]: I0113 21:07:42.779222 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-0-e-56a5643f90.novalocal" podStartSLOduration=1.778943473 podStartE2EDuration="1.778943473s" podCreationTimestamp="2025-01-13 21:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:07:42.778247116 +0000 UTC m=+1.257273651" watchObservedRunningTime="2025-01-13 21:07:42.778943473 +0000 UTC m=+1.257970018" Jan 13 21:07:42.780532 kubelet[2889]: I0113 21:07:42.780513 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-0-e-56a5643f90.novalocal" podStartSLOduration=1.7804772180000001 podStartE2EDuration="1.780477218s" podCreationTimestamp="2025-01-13 21:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:07:42.7632969 +0000 UTC m=+1.242323445" watchObservedRunningTime="2025-01-13 21:07:42.780477218 +0000 UTC m=+1.259503763" Jan 13 21:07:42.794658 kubelet[2889]: I0113 21:07:42.794622 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-0-e-56a5643f90.novalocal" podStartSLOduration=2.794555044 podStartE2EDuration="2.794555044s" podCreationTimestamp="2025-01-13 21:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:07:42.794418711 +0000 UTC m=+1.273445256" watchObservedRunningTime="2025-01-13 21:07:42.794555044 +0000 UTC m=+1.273581579" Jan 13 21:07:44.649571 sudo[1868]: pam_unix(sudo:session): session closed for user root Jan 13 21:07:44.859208 sshd[1867]: Connection closed by 172.24.4.1 port 55998 Jan 13 21:07:44.860531 sshd-session[1861]: pam_unix(sshd:session): session closed for user core Jan 13 21:07:44.866302 systemd[1]: sshd@6-172.24.4.134:22-172.24.4.1:55998.service: Deactivated successfully. Jan 13 21:07:44.874382 systemd-logind[1590]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:07:44.875651 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:07:44.879512 systemd-logind[1590]: Removed session 9. Jan 13 21:07:54.610669 kubelet[2889]: I0113 21:07:54.610631 2889 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:07:54.613298 containerd[1611]: time="2025-01-13T21:07:54.613264415Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
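The pod_startup_latency_tracker entries above are internally consistent: these are static pods whose image-pull timestamps are the zero time (nothing was pulled), and the logged podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp. Checking the kube-controller-manager numbers with timestamps copied verbatim from the entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches Go's default time.Time string form used in the log.
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-01-13 21:07:41 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-01-13 21:07:42.778943473 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1.778943473s, matching the logged podStartSLOduration.
	fmt.Println(running.Sub(created))
}
```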
Jan 13 21:07:54.614010 kubelet[2889]: I0113 21:07:54.613966 2889 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:07:54.824855 kubelet[2889]: I0113 21:07:54.822236 2889 topology_manager.go:215] "Topology Admit Handler" podUID="0bdb39c5-f754-4d1b-b42e-65e561fdc465" podNamespace="kube-system" podName="cilium-operator-5cc964979-qtqtk" Jan 13 21:07:54.934862 kubelet[2889]: I0113 21:07:54.934601 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bdb39c5-f754-4d1b-b42e-65e561fdc465-cilium-config-path\") pod \"cilium-operator-5cc964979-qtqtk\" (UID: \"0bdb39c5-f754-4d1b-b42e-65e561fdc465\") " pod="kube-system/cilium-operator-5cc964979-qtqtk" Jan 13 21:07:54.934862 kubelet[2889]: I0113 21:07:54.934689 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rs2v\" (UniqueName: \"kubernetes.io/projected/0bdb39c5-f754-4d1b-b42e-65e561fdc465-kube-api-access-2rs2v\") pod \"cilium-operator-5cc964979-qtqtk\" (UID: \"0bdb39c5-f754-4d1b-b42e-65e561fdc465\") " pod="kube-system/cilium-operator-5cc964979-qtqtk" Jan 13 21:07:54.982767 kubelet[2889]: I0113 21:07:54.982712 2889 topology_manager.go:215] "Topology Admit Handler" podUID="ab81228d-e72b-4eb4-9f0c-7d5f729a7b55" podNamespace="kube-system" podName="kube-proxy-97lf7" Jan 13 21:07:55.012239 kubelet[2889]: I0113 21:07:55.012201 2889 topology_manager.go:215] "Topology Admit Handler" podUID="d12c9873-1380-4664-9675-5537e6d7cf4c" podNamespace="kube-system" podName="cilium-sc7bb" Jan 13 21:07:55.132379 containerd[1611]: time="2025-01-13T21:07:55.132312411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qtqtk,Uid:0bdb39c5-f754-4d1b-b42e-65e561fdc465,Namespace:kube-system,Attempt:0,}" Jan 13 21:07:55.137748 kubelet[2889]: I0113 21:07:55.137416 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-host-proc-sys-net\") pod \"cilium-sc7bb\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") " pod="kube-system/cilium-sc7bb" Jan 13 21:07:55.137748 kubelet[2889]: I0113 21:07:55.137486 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ab81228d-e72b-4eb4-9f0c-7d5f729a7b55-kube-proxy\") pod \"kube-proxy-97lf7\" (UID: \"ab81228d-e72b-4eb4-9f0c-7d5f729a7b55\") " pod="kube-system/kube-proxy-97lf7" Jan 13 21:07:55.137748 kubelet[2889]: I0113 21:07:55.137528 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-cilium-run\") pod \"cilium-sc7bb\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") " pod="kube-system/cilium-sc7bb" Jan 13 21:07:55.137748 kubelet[2889]: I0113 21:07:55.137573 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78wsr\" (UniqueName: \"kubernetes.io/projected/ab81228d-e72b-4eb4-9f0c-7d5f729a7b55-kube-api-access-78wsr\") pod \"kube-proxy-97lf7\" (UID: \"ab81228d-e72b-4eb4-9f0c-7d5f729a7b55\") " pod="kube-system/kube-proxy-97lf7" Jan 13 21:07:55.137748 kubelet[2889]: I0113 21:07:55.137620 2889 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-bpf-maps\") pod \"cilium-sc7bb\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") " pod="kube-system/cilium-sc7bb" Jan 13 21:07:55.138202 kubelet[2889]: I0113 21:07:55.137661 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-host-proc-sys-kernel\") pod \"cilium-sc7bb\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") " pod="kube-system/cilium-sc7bb" Jan 13 21:07:55.139275 kubelet[2889]: I0113 21:07:55.138321 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d12c9873-1380-4664-9675-5537e6d7cf4c-hubble-tls\") pod \"cilium-sc7bb\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") " pod="kube-system/cilium-sc7bb" Jan 13 21:07:55.139275 kubelet[2889]: I0113 21:07:55.138413 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab81228d-e72b-4eb4-9f0c-7d5f729a7b55-xtables-lock\") pod \"kube-proxy-97lf7\" (UID: \"ab81228d-e72b-4eb4-9f0c-7d5f729a7b55\") " pod="kube-system/kube-proxy-97lf7" Jan 13 21:07:55.139275 kubelet[2889]: I0113 21:07:55.138456 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-cilium-cgroup\") pod \"cilium-sc7bb\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") " pod="kube-system/cilium-sc7bb" Jan 13 21:07:55.139275 kubelet[2889]: I0113 21:07:55.138495 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-etc-cni-netd\") pod \"cilium-sc7bb\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") " pod="kube-system/cilium-sc7bb" Jan 13 21:07:55.139275 kubelet[2889]: I0113 21:07:55.138533 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-hostproc\") pod \"cilium-sc7bb\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") " pod="kube-system/cilium-sc7bb" Jan 13 21:07:55.139275 kubelet[2889]: I0113 21:07:55.138574 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d12c9873-1380-4664-9675-5537e6d7cf4c-clustermesh-secrets\") pod \"cilium-sc7bb\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") " pod="kube-system/cilium-sc7bb" Jan 13 21:07:55.140616 kubelet[2889]: I0113 21:07:55.138615 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-xtables-lock\") pod \"cilium-sc7bb\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") " pod="kube-system/cilium-sc7bb" Jan 13 21:07:55.140616 kubelet[2889]: I0113 21:07:55.138670 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-cni-path\") pod \"cilium-sc7bb\" (UID: 
\"d12c9873-1380-4664-9675-5537e6d7cf4c\") " pod="kube-system/cilium-sc7bb" Jan 13 21:07:55.140616 kubelet[2889]: I0113 21:07:55.138711 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d12c9873-1380-4664-9675-5537e6d7cf4c-cilium-config-path\") pod \"cilium-sc7bb\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") " pod="kube-system/cilium-sc7bb" Jan 13 21:07:55.140616 kubelet[2889]: I0113 21:07:55.138857 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssq9m\" (UniqueName: \"kubernetes.io/projected/d12c9873-1380-4664-9675-5537e6d7cf4c-kube-api-access-ssq9m\") pod \"cilium-sc7bb\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") " pod="kube-system/cilium-sc7bb" Jan 13 21:07:55.140616 kubelet[2889]: I0113 21:07:55.138922 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-lib-modules\") pod \"cilium-sc7bb\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") " pod="kube-system/cilium-sc7bb" Jan 13 21:07:55.140616 kubelet[2889]: I0113 21:07:55.139036 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab81228d-e72b-4eb4-9f0c-7d5f729a7b55-lib-modules\") pod \"kube-proxy-97lf7\" (UID: \"ab81228d-e72b-4eb4-9f0c-7d5f729a7b55\") " pod="kube-system/kube-proxy-97lf7" Jan 13 21:07:55.189830 containerd[1611]: time="2025-01-13T21:07:55.189564337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:07:55.193525 containerd[1611]: time="2025-01-13T21:07:55.191495694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:07:55.193525 containerd[1611]: time="2025-01-13T21:07:55.193143115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:55.193525 containerd[1611]: time="2025-01-13T21:07:55.193324790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:55.289979 containerd[1611]: time="2025-01-13T21:07:55.289922446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qtqtk,Uid:0bdb39c5-f754-4d1b-b42e-65e561fdc465,Namespace:kube-system,Attempt:0,} returns sandbox id \"2320a23dc39acdee4e1f9085b4dd65725a42367fa2deebb6572f22df31a7a17d\"" Jan 13 21:07:55.292238 containerd[1611]: time="2025-01-13T21:07:55.292120404Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:07:55.301156 containerd[1611]: time="2025-01-13T21:07:55.300926954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-97lf7,Uid:ab81228d-e72b-4eb4-9f0c-7d5f729a7b55,Namespace:kube-system,Attempt:0,}" Jan 13 21:07:55.330240 containerd[1611]: time="2025-01-13T21:07:55.330206060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sc7bb,Uid:d12c9873-1380-4664-9675-5537e6d7cf4c,Namespace:kube-system,Attempt:0,}" Jan 13 21:07:55.339276 containerd[1611]: time="2025-01-13T21:07:55.339017060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:07:55.339276 containerd[1611]: time="2025-01-13T21:07:55.339083079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:07:55.339276 containerd[1611]: time="2025-01-13T21:07:55.339115673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:55.339276 containerd[1611]: time="2025-01-13T21:07:55.339223864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:55.366353 containerd[1611]: time="2025-01-13T21:07:55.365600513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:07:55.366478 containerd[1611]: time="2025-01-13T21:07:55.366392331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:07:55.366478 containerd[1611]: time="2025-01-13T21:07:55.366430917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:55.366669 containerd[1611]: time="2025-01-13T21:07:55.366623754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:55.398286 containerd[1611]: time="2025-01-13T21:07:55.398236513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-97lf7,Uid:ab81228d-e72b-4eb4-9f0c-7d5f729a7b55,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bd91e101ae7b9582af9363fc4b41f108de313c23fd8f4eb943d373deb849218\"" Jan 13 21:07:55.404648 containerd[1611]: time="2025-01-13T21:07:55.404567913Z" level=info msg="CreateContainer within sandbox \"5bd91e101ae7b9582af9363fc4b41f108de313c23fd8f4eb943d373deb849218\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:07:55.412703 containerd[1611]: time="2025-01-13T21:07:55.412670317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sc7bb,Uid:d12c9873-1380-4664-9675-5537e6d7cf4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5\"" Jan 13 21:07:55.434339 containerd[1611]: time="2025-01-13T21:07:55.434238594Z" level=info msg="CreateContainer within sandbox \"5bd91e101ae7b9582af9363fc4b41f108de313c23fd8f4eb943d373deb849218\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6fa81a5ff4ceab87f62c33f0be5b4db832385c98ed0a86eec3c32629d33f7f6d\"" Jan 13 21:07:55.436538 containerd[1611]: time="2025-01-13T21:07:55.434931188Z" level=info msg="StartContainer for \"6fa81a5ff4ceab87f62c33f0be5b4db832385c98ed0a86eec3c32629d33f7f6d\"" Jan 13 21:07:55.508492 containerd[1611]: time="2025-01-13T21:07:55.508350499Z" level=info msg="StartContainer for \"6fa81a5ff4ceab87f62c33f0be5b4db832385c98ed0a86eec3c32629d33f7f6d\" returns successfully" Jan 13 21:07:55.794136 kubelet[2889]: I0113 21:07:55.794015 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-97lf7" podStartSLOduration=1.793939975 podStartE2EDuration="1.793939975s" podCreationTimestamp="2025-01-13 21:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:07:55.792921103 +0000 UTC m=+14.271947638" watchObservedRunningTime="2025-01-13 21:07:55.793939975 +0000 UTC m=+14.272966510" Jan 13 21:07:56.788932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount964165516.mount: Deactivated successfully. 
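The VerifyControllerAttachedVolume entries above enumerate the volumes declared by the cilium-sc7bb and kube-proxy-97lf7 pod specs before their sandboxes start. As a minimal sketch, the same hostPath volume list can be read back from the API server with client-go; the kubeconfig path below is an assumption, and this illustrates the pod spec layout rather than the kubelet's own reconciler code.

    package main

    import (
        "context"
        "fmt"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        // Assumed kubeconfig location; in-cluster config would also work.
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Pod and namespace names copied from the log entries above.
        pod, err := clientset.CoreV1().Pods("kube-system").Get(
            context.TODO(), "cilium-sc7bb", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Print the hostPath volumes (bpf-maps, hostproc, cni-path, ...).
        for _, v := range pod.Spec.Volumes {
            if v.HostPath != nil {
                fmt.Printf("%s -> %s\n", v.Name, v.HostPath.Path)
            }
        }
    }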
Jan 13 21:07:57.484186 containerd[1611]: time="2025-01-13T21:07:57.482825036Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:57.487072 containerd[1611]: time="2025-01-13T21:07:57.486056210Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907189"
Jan 13 21:07:57.487515 containerd[1611]: time="2025-01-13T21:07:57.487469427Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:57.490241 containerd[1611]: time="2025-01-13T21:07:57.490199383Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.198021757s"
Jan 13 21:07:57.490241 containerd[1611]: time="2025-01-13T21:07:57.490236335Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 13 21:07:57.492578 containerd[1611]: time="2025-01-13T21:07:57.491267737Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 13 21:07:57.495317 containerd[1611]: time="2025-01-13T21:07:57.495241139Z" level=info msg="CreateContainer within sandbox \"2320a23dc39acdee4e1f9085b4dd65725a42367fa2deebb6572f22df31a7a17d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 21:07:57.519100 containerd[1611]: time="2025-01-13T21:07:57.519027495Z" level=info msg="CreateContainer within sandbox \"2320a23dc39acdee4e1f9085b4dd65725a42367fa2deebb6572f22df31a7a17d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427\""
Jan 13 21:07:57.520420 containerd[1611]: time="2025-01-13T21:07:57.520383059Z" level=info msg="StartContainer for \"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427\""
Jan 13 21:07:57.582214 containerd[1611]: time="2025-01-13T21:07:57.582168047Z" level=info msg="StartContainer for \"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427\" returns successfully"
Jan 13 21:08:15.896912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount310953303.mount: Deactivated successfully.
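The ImageCreate and Pulled-image events above come from containerd's CRI image service. A short sketch that queries the same service for its image list follows; the socket path is containerd's usual default but is an assumption for any given host, and this is an illustration of the CRI v1 API, not containerd code.

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed socket path; requires access to the containerd socket.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Ask the CRI image service what it has, mirroring the events above.
        images := runtimeapi.NewImageServiceClient(conn)
        resp, err := images.ListImages(context.TODO(), &runtimeapi.ListImagesRequest{})
        if err != nil {
            panic(err)
        }
        for _, img := range resp.Images {
            fmt.Println(img.Id, img.RepoDigests)
        }
    }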
Jan 13 21:08:21.929664 containerd[1611]: time="2025-01-13T21:08:21.929600265Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:08:21.931202 containerd[1611]: time="2025-01-13T21:08:21.931166058Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734655"
Jan 13 21:08:21.932640 containerd[1611]: time="2025-01-13T21:08:21.932592493Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:08:21.935028 containerd[1611]: time="2025-01-13T21:08:21.934999146Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 24.443682093s"
Jan 13 21:08:21.935092 containerd[1611]: time="2025-01-13T21:08:21.935032911Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 13 21:08:21.937307 containerd[1611]: time="2025-01-13T21:08:21.937268936Z" level=info msg="CreateContainer within sandbox \"83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:08:21.959191 containerd[1611]: time="2025-01-13T21:08:21.959142238Z" level=info msg="CreateContainer within sandbox \"83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe\""
Jan 13 21:08:21.960180 containerd[1611]: time="2025-01-13T21:08:21.960154596Z" level=info msg="StartContainer for \"a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe\""
Jan 13 21:08:22.039466 containerd[1611]: time="2025-01-13T21:08:22.039424580Z" level=info msg="StartContainer for \"a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe\" returns successfully"
Jan 13 21:08:22.922053 kubelet[2889]: I0113 21:08:22.921935 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-qtqtk" podStartSLOduration=26.722887989 podStartE2EDuration="28.921779475s" podCreationTimestamp="2025-01-13 21:07:54 +0000 UTC" firstStartedPulling="2025-01-13 21:07:55.291641818 +0000 UTC m=+13.770668363" lastFinishedPulling="2025-01-13 21:07:57.490533314 +0000 UTC m=+15.969559849" observedRunningTime="2025-01-13 21:07:57.966284027 +0000 UTC m=+16.445310612" watchObservedRunningTime="2025-01-13 21:08:22.921779475 +0000 UTC m=+41.400806070"
Jan 13 21:08:22.938665 containerd[1611]: time="2025-01-13T21:08:22.938482047Z" level=info msg="shim disconnected" id=a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe namespace=k8s.io
Jan 13 21:08:22.940132 containerd[1611]: time="2025-01-13T21:08:22.938663617Z" level=warning msg="cleaning up after shim disconnected" id=a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe namespace=k8s.io
Jan 13 21:08:22.940132 containerd[1611]: time="2025-01-13T21:08:22.938725636Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:08:22.970163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe-rootfs.mount: Deactivated successfully.
Jan 13 21:08:23.881030 containerd[1611]: time="2025-01-13T21:08:23.880881600Z" level=info msg="CreateContainer within sandbox \"83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:08:23.940217 containerd[1611]: time="2025-01-13T21:08:23.938692747Z" level=info msg="CreateContainer within sandbox \"83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2\""
Jan 13 21:08:23.949207 containerd[1611]: time="2025-01-13T21:08:23.948677027Z" level=info msg="StartContainer for \"2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2\""
Jan 13 21:08:24.024111 containerd[1611]: time="2025-01-13T21:08:24.023842715Z" level=info msg="StartContainer for \"2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2\" returns successfully"
Jan 13 21:08:24.032898 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:08:24.033330 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:08:24.033397 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:08:24.044428 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:08:24.063145 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:08:24.070564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2-rootfs.mount: Deactivated successfully.
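The pod_startup_latency_tracker entry above carries firstStartedPulling and lastFinishedPulling timestamps; subtracting them reproduces the operator's image-pull window (about 2.199s, close to the 2.198021757s that containerd itself reported). A small sketch with both timestamps copied verbatim from the log; the layout string matches the "+0000 UTC" form used there.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Go reference-time layout for "2025-01-13 21:07:55.291641818 +0000 UTC".
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        first, err := time.Parse(layout, "2025-01-13 21:07:55.291641818 +0000 UTC")
        if err != nil {
            panic(err)
        }
        last, err := time.Parse(layout, "2025-01-13 21:07:57.490533314 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 2.198891496s, the pull window the tracker observed.
        fmt.Println(last.Sub(first))
    }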
Jan 13 21:08:24.099542 containerd[1611]: time="2025-01-13T21:08:24.099478627Z" level=info msg="shim disconnected" id=2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2 namespace=k8s.io
Jan 13 21:08:24.099801 containerd[1611]: time="2025-01-13T21:08:24.099676188Z" level=warning msg="cleaning up after shim disconnected" id=2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2 namespace=k8s.io
Jan 13 21:08:24.099801 containerd[1611]: time="2025-01-13T21:08:24.099695275Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:08:24.112036 containerd[1611]: time="2025-01-13T21:08:24.111922675Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:08:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:08:24.888633 containerd[1611]: time="2025-01-13T21:08:24.887732456Z" level=info msg="CreateContainer within sandbox \"83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:08:25.006741 containerd[1611]: time="2025-01-13T21:08:25.006632890Z" level=info msg="CreateContainer within sandbox \"83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804\""
Jan 13 21:08:25.009466 containerd[1611]: time="2025-01-13T21:08:25.008606375Z" level=info msg="StartContainer for \"ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804\""
Jan 13 21:08:25.119861 containerd[1611]: time="2025-01-13T21:08:25.119827438Z" level=info msg="StartContainer for \"ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804\" returns successfully"
Jan 13 21:08:25.140925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804-rootfs.mount: Deactivated successfully.
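The shim-disconnect entries above follow a fixed shape: msg="shim disconnected" id=<64-hex container id>. A small sketch that extracts those ids from journal text on stdin, with the regexp written against exactly the containerd format shown here.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Targets lines like:
    //   ... level=info msg="shim disconnected" id=2e6c83ce... namespace=k8s.io
    var shimRe = regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]{64})`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := shimRe.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Println(m[1]) // the disconnected container's id
            }
        }
    }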
Jan 13 21:08:25.150491 containerd[1611]: time="2025-01-13T21:08:25.150287371Z" level=info msg="shim disconnected" id=ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804 namespace=k8s.io
Jan 13 21:08:25.150491 containerd[1611]: time="2025-01-13T21:08:25.150455873Z" level=warning msg="cleaning up after shim disconnected" id=ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804 namespace=k8s.io
Jan 13 21:08:25.150491 containerd[1611]: time="2025-01-13T21:08:25.150471332Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:08:25.892903 containerd[1611]: time="2025-01-13T21:08:25.892834695Z" level=info msg="CreateContainer within sandbox \"83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:08:25.958502 containerd[1611]: time="2025-01-13T21:08:25.957496168Z" level=info msg="CreateContainer within sandbox \"83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a\""
Jan 13 21:08:25.961084 containerd[1611]: time="2025-01-13T21:08:25.960075362Z" level=info msg="StartContainer for \"52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a\""
Jan 13 21:08:26.029035 containerd[1611]: time="2025-01-13T21:08:26.028628128Z" level=info msg="StartContainer for \"52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a\" returns successfully"
Jan 13 21:08:26.050426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a-rootfs.mount: Deactivated successfully.
Jan 13 21:08:26.059840 containerd[1611]: time="2025-01-13T21:08:26.059770826Z" level=info msg="shim disconnected" id=52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a namespace=k8s.io
Jan 13 21:08:26.059840 containerd[1611]: time="2025-01-13T21:08:26.059832018Z" level=warning msg="cleaning up after shim disconnected" id=52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a namespace=k8s.io
Jan 13 21:08:26.059840 containerd[1611]: time="2025-01-13T21:08:26.059843058Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:08:26.906830 containerd[1611]: time="2025-01-13T21:08:26.906757576Z" level=info msg="CreateContainer within sandbox \"83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:08:26.980494 containerd[1611]: time="2025-01-13T21:08:26.980441410Z" level=info msg="CreateContainer within sandbox \"83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85\""
Jan 13 21:08:26.981104 containerd[1611]: time="2025-01-13T21:08:26.981079153Z" level=info msg="StartContainer for \"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85\""
Jan 13 21:08:27.051510 containerd[1611]: time="2025-01-13T21:08:27.051335929Z" level=info msg="StartContainer for \"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85\" returns successfully"
Jan 13 21:08:27.226426 kubelet[2889]: I0113 21:08:27.226204 2889 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 21:08:27.273892 kubelet[2889]: I0113 21:08:27.270472 2889 topology_manager.go:215] "Topology Admit Handler" podUID="36bf7a4d-e67a-4897-8714-6f2f1d39f10a" podNamespace="kube-system" podName="coredns-76f75df574-mv98r"
Jan 13 21:08:27.293092 kubelet[2889]: I0113 21:08:27.291917 2889 topology_manager.go:215] "Topology Admit Handler" podUID="4da37201-039c-43e4-9730-02a3059aa83f" podNamespace="kube-system" podName="coredns-76f75df574-2m9bf"
Jan 13 21:08:27.371499 kubelet[2889]: I0113 21:08:27.371303 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x79kg\" (UniqueName: \"kubernetes.io/projected/36bf7a4d-e67a-4897-8714-6f2f1d39f10a-kube-api-access-x79kg\") pod \"coredns-76f75df574-mv98r\" (UID: \"36bf7a4d-e67a-4897-8714-6f2f1d39f10a\") " pod="kube-system/coredns-76f75df574-mv98r"
Jan 13 21:08:27.371499 kubelet[2889]: I0113 21:08:27.371360 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36bf7a4d-e67a-4897-8714-6f2f1d39f10a-config-volume\") pod \"coredns-76f75df574-mv98r\" (UID: \"36bf7a4d-e67a-4897-8714-6f2f1d39f10a\") " pod="kube-system/coredns-76f75df574-mv98r"
Jan 13 21:08:27.473732 kubelet[2889]: I0113 21:08:27.471946 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6q7r\" (UniqueName: \"kubernetes.io/projected/4da37201-039c-43e4-9730-02a3059aa83f-kube-api-access-g6q7r\") pod \"coredns-76f75df574-2m9bf\" (UID: \"4da37201-039c-43e4-9730-02a3059aa83f\") " pod="kube-system/coredns-76f75df574-2m9bf"
Jan 13 21:08:27.473732 kubelet[2889]: I0113 21:08:27.472014 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4da37201-039c-43e4-9730-02a3059aa83f-config-volume\") pod \"coredns-76f75df574-2m9bf\" (UID: \"4da37201-039c-43e4-9730-02a3059aa83f\") " pod="kube-system/coredns-76f75df574-2m9bf"
Jan 13 21:08:27.604207 containerd[1611]: time="2025-01-13T21:08:27.603867166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mv98r,Uid:36bf7a4d-e67a-4897-8714-6f2f1d39f10a,Namespace:kube-system,Attempt:0,}"
Jan 13 21:08:27.610331 containerd[1611]: time="2025-01-13T21:08:27.610268547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2m9bf,Uid:4da37201-039c-43e4-9730-02a3059aa83f,Namespace:kube-system,Attempt:0,}"
Jan 13 21:08:27.961100 kubelet[2889]: I0113 21:08:27.947483 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-sc7bb" podStartSLOduration=7.426398321 podStartE2EDuration="33.947393424s" podCreationTimestamp="2025-01-13 21:07:54 +0000 UTC" firstStartedPulling="2025-01-13 21:07:55.414227573 +0000 UTC m=+13.893254108" lastFinishedPulling="2025-01-13 21:08:21.935222666 +0000 UTC m=+40.414249211" observedRunningTime="2025-01-13 21:08:27.945851197 +0000 UTC m=+46.424877823" watchObservedRunningTime="2025-01-13 21:08:27.947393424 +0000 UTC m=+46.426420009"
Jan 13 21:08:29.160915 systemd-networkd[1209]: cilium_host: Link UP
Jan 13 21:08:29.162767 systemd-networkd[1209]: cilium_net: Link UP
Jan 13 21:08:29.166532 systemd-networkd[1209]: cilium_net: Gained carrier
Jan 13 21:08:29.166777 systemd-networkd[1209]: cilium_host: Gained carrier
Jan 13 21:08:29.166898 systemd-networkd[1209]: cilium_net: Gained IPv6LL
Jan 13 21:08:29.167079 systemd-networkd[1209]: cilium_host: Gained IPv6LL
Jan 13 21:08:29.258932 systemd-networkd[1209]: cilium_vxlan: Link UP
Jan 13 21:08:29.258940 systemd-networkd[1209]: cilium_vxlan: Gained carrier
Jan 13 21:08:29.594086 kernel: NET: Registered PF_ALG protocol family
Jan 13 21:08:30.300024 systemd-networkd[1209]: lxc_health: Link UP
Jan 13 21:08:30.306130 systemd-networkd[1209]: lxc_health: Gained carrier
Jan 13 21:08:30.424179 systemd-networkd[1209]: cilium_vxlan: Gained IPv6LL
Jan 13 21:08:30.671781 systemd-networkd[1209]: lxc9058ce118beb: Link UP
Jan 13 21:08:30.676023 kernel: eth0: renamed from tmp38518
Jan 13 21:08:30.690540 systemd-networkd[1209]: lxc9058ce118beb: Gained carrier
Jan 13 21:08:30.729025 kernel: eth0: renamed from tmp5a1a3
Jan 13 21:08:30.744105 systemd-networkd[1209]: lxc6b26a4a65f07: Link UP
Jan 13 21:08:30.752288 systemd-networkd[1209]: lxc6b26a4a65f07: Gained carrier
Jan 13 21:08:31.384171 systemd-networkd[1209]: lxc_health: Gained IPv6LL
Jan 13 21:08:32.152649 systemd-networkd[1209]: lxc6b26a4a65f07: Gained IPv6LL
Jan 13 21:08:32.664214 systemd-networkd[1209]: lxc9058ce118beb: Gained IPv6LL
Jan 13 21:08:35.109397 containerd[1611]: time="2025-01-13T21:08:35.109315389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:08:35.116227 containerd[1611]: time="2025-01-13T21:08:35.109390817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:08:35.116227 containerd[1611]: time="2025-01-13T21:08:35.109417457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:08:35.116227 containerd[1611]: time="2025-01-13T21:08:35.109532359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:08:35.215378 containerd[1611]: time="2025-01-13T21:08:35.215326743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mv98r,Uid:36bf7a4d-e67a-4897-8714-6f2f1d39f10a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3851884d59f6a7363859e8f1b8da8f61ec5adabf29fba0a5a91162bbce625bf0\""
Jan 13 21:08:35.221036 containerd[1611]: time="2025-01-13T21:08:35.220564685Z" level=info msg="CreateContainer within sandbox \"3851884d59f6a7363859e8f1b8da8f61ec5adabf29fba0a5a91162bbce625bf0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 21:08:35.248827 containerd[1611]: time="2025-01-13T21:08:35.247903432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:08:35.248827 containerd[1611]: time="2025-01-13T21:08:35.248018225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:08:35.248827 containerd[1611]: time="2025-01-13T21:08:35.248041768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:08:35.248827 containerd[1611]: time="2025-01-13T21:08:35.248145810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:08:35.310590 containerd[1611]: time="2025-01-13T21:08:35.309919726Z" level=info msg="CreateContainer within sandbox \"3851884d59f6a7363859e8f1b8da8f61ec5adabf29fba0a5a91162bbce625bf0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8823f08b178b419d1d3d8b6fbc59c9ec3320179d9556e09886721513ba157462\""
Jan 13 21:08:35.313296 containerd[1611]: time="2025-01-13T21:08:35.313263593Z" level=info msg="StartContainer for \"8823f08b178b419d1d3d8b6fbc59c9ec3320179d9556e09886721513ba157462\""
Jan 13 21:08:35.368809 containerd[1611]: time="2025-01-13T21:08:35.368652182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2m9bf,Uid:4da37201-039c-43e4-9730-02a3059aa83f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a1a37f69c63f17ae3bc05f817e9d58fd65bdff812916c63e0705f128260f63c\""
Jan 13 21:08:35.376344 containerd[1611]: time="2025-01-13T21:08:35.376066227Z" level=info msg="CreateContainer within sandbox \"5a1a37f69c63f17ae3bc05f817e9d58fd65bdff812916c63e0705f128260f63c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 21:08:35.403024 containerd[1611]: time="2025-01-13T21:08:35.402595181Z" level=info msg="StartContainer for \"8823f08b178b419d1d3d8b6fbc59c9ec3320179d9556e09886721513ba157462\" returns successfully"
Jan 13 21:08:35.420660 containerd[1611]: time="2025-01-13T21:08:35.420598128Z" level=info msg="CreateContainer within sandbox \"5a1a37f69c63f17ae3bc05f817e9d58fd65bdff812916c63e0705f128260f63c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00d03818f65558859df4f0fa694aff10c0ccacac9ee887b4dfc1e69b98e58187\""
Jan 13 21:08:35.421709 containerd[1611]: time="2025-01-13T21:08:35.421684132Z" level=info msg="StartContainer for \"00d03818f65558859df4f0fa694aff10c0ccacac9ee887b4dfc1e69b98e58187\""
Jan 13 21:08:35.498430 containerd[1611]: time="2025-01-13T21:08:35.498375064Z" level=info msg="StartContainer for \"00d03818f65558859df4f0fa694aff10c0ccacac9ee887b4dfc1e69b98e58187\" returns successfully"
Jan 13 21:08:36.019860 kubelet[2889]: I0113 21:08:36.018314 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2m9bf" podStartSLOduration=42.018209374 podStartE2EDuration="42.018209374s" podCreationTimestamp="2025-01-13 21:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:08:35.973958787 +0000 UTC m=+54.452985382" watchObservedRunningTime="2025-01-13 21:08:36.018209374 +0000 UTC m=+54.497235959"
Jan 13 21:08:36.121329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2924699712.mount: Deactivated successfully.
Jan 13 21:09:10.823627 systemd[1]: Started sshd@7-172.24.4.134:22-172.24.4.1:42588.service - OpenSSH per-connection server daemon (172.24.4.1:42588).
Jan 13 21:09:11.954528 sshd[4246]: Accepted publickey for core from 172.24.4.1 port 42588 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:09:11.957444 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:09:11.968375 systemd-logind[1590]: New session 10 of user core.
Jan 13 21:09:11.985207 systemd[1]: Started session-10.scope - Session 10 of User core.
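The systemd-networkd entries above record cilium_host, cilium_net, cilium_vxlan, lxc_health, and the per-pod lxc* links coming up. A sketch that reads the current oper state of those links on the node itself; it assumes the third-party github.com/vishvananda/netlink library, Linux, and sufficient privileges.

    package main

    import (
        "fmt"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // Interface names copied from the link-up events in the log.
        for _, name := range []string{"cilium_host", "cilium_net", "cilium_vxlan", "lxc_health"} {
            link, err := netlink.LinkByName(name)
            if err != nil {
                fmt.Println(name, "not found:", err)
                continue
            }
            // OperState mirrors what "Gained carrier" / "Lost carrier" reflect.
            fmt.Println(name, link.Attrs().OperState)
        }
    }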
Jan 13 21:09:12.711036 sshd[4249]: Connection closed by 172.24.4.1 port 42588
Jan 13 21:09:12.712138 sshd-session[4246]: pam_unix(sshd:session): session closed for user core
Jan 13 21:09:12.718559 systemd[1]: sshd@7-172.24.4.134:22-172.24.4.1:42588.service: Deactivated successfully.
Jan 13 21:09:12.725545 systemd-logind[1590]: Session 10 logged out. Waiting for processes to exit.
Jan 13 21:09:12.726124 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 21:09:12.730212 systemd-logind[1590]: Removed session 10.
Jan 13 21:09:17.726926 systemd[1]: Started sshd@8-172.24.4.134:22-172.24.4.1:49864.service - OpenSSH per-connection server daemon (172.24.4.1:49864).
Jan 13 21:09:19.120391 sshd[4262]: Accepted publickey for core from 172.24.4.1 port 49864 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:09:19.123673 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:09:19.133292 systemd-logind[1590]: New session 11 of user core.
Jan 13 21:09:19.141709 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 21:09:19.812071 sshd[4265]: Connection closed by 172.24.4.1 port 49864
Jan 13 21:09:19.811886 sshd-session[4262]: pam_unix(sshd:session): session closed for user core
Jan 13 21:09:19.817676 systemd[1]: sshd@8-172.24.4.134:22-172.24.4.1:49864.service: Deactivated successfully.
Jan 13 21:09:19.825560 systemd-logind[1590]: Session 11 logged out. Waiting for processes to exit.
Jan 13 21:09:19.826614 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 21:09:19.831604 systemd-logind[1590]: Removed session 11.
Jan 13 21:09:24.823570 systemd[1]: Started sshd@9-172.24.4.134:22-172.24.4.1:35632.service - OpenSSH per-connection server daemon (172.24.4.1:35632).
Jan 13 21:09:26.255699 sshd[4276]: Accepted publickey for core from 172.24.4.1 port 35632 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:09:26.258142 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:09:26.267578 systemd-logind[1590]: New session 12 of user core.
Jan 13 21:09:26.271448 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 21:09:27.195588 sshd[4281]: Connection closed by 172.24.4.1 port 35632
Jan 13 21:09:27.195262 sshd-session[4276]: pam_unix(sshd:session): session closed for user core
Jan 13 21:09:27.219058 systemd[1]: Started sshd@10-172.24.4.134:22-172.24.4.1:35638.service - OpenSSH per-connection server daemon (172.24.4.1:35638).
Jan 13 21:09:27.222782 systemd[1]: sshd@9-172.24.4.134:22-172.24.4.1:35632.service: Deactivated successfully.
Jan 13 21:09:27.242587 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 21:09:27.250914 systemd-logind[1590]: Session 12 logged out. Waiting for processes to exit.
Jan 13 21:09:27.259618 systemd-logind[1590]: Removed session 12.
Jan 13 21:09:28.549271 sshd[4290]: Accepted publickey for core from 172.24.4.1 port 35638 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:09:28.551831 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:09:28.561575 systemd-logind[1590]: New session 13 of user core.
Jan 13 21:09:28.573677 systemd[1]: Started session-13.scope - Session 13 of User core.
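Each SSH session above follows the same open/close pattern in the sshd and pam_unix lines. A sketch that pairs those lines from journal text on stdin and prints each session's duration; the regexps and the timestamp layout are written against the exact format shown, and the journal's short dates carry no year, so this only makes sense within one log.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    var (
        // e.g. "Jan 13 21:09:11.957444 sshd-session[4246]: pam_unix(sshd:session): session opened ..."
        openRe  = regexp.MustCompile(`^(\w+ +\d+ +[\d:.]+) sshd-session\[(\d+)\]: pam_unix\(sshd:session\): session opened`)
        closeRe = regexp.MustCompile(`^(\w+ +\d+ +[\d:.]+) sshd-session\[(\d+)\]: pam_unix\(sshd:session\): session closed`)
    )

    func main() {
        const layout = "Jan 2 15:04:05.000000" // journal short-month timestamp
        opened := map[string]time.Time{}       // sshd-session pid -> open time
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := sc.Text()
            if m := openRe.FindStringSubmatch(line); m != nil {
                if t, err := time.Parse(layout, m[1]); err == nil {
                    opened[m[2]] = t
                }
            } else if m := closeRe.FindStringSubmatch(line); m != nil {
                if t, err := time.Parse(layout, m[1]); err == nil {
                    if start, ok := opened[m[2]]; ok {
                        fmt.Printf("sshd-session[%s]: %s\n", m[2], t.Sub(start))
                        delete(opened, m[2])
                    }
                }
            }
        }
    }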
Jan 13 21:09:29.437049 sshd[4295]: Connection closed by 172.24.4.1 port 35638
Jan 13 21:09:29.438476 sshd-session[4290]: pam_unix(sshd:session): session closed for user core
Jan 13 21:09:29.454250 systemd[1]: Started sshd@11-172.24.4.134:22-172.24.4.1:35652.service - OpenSSH per-connection server daemon (172.24.4.1:35652).
Jan 13 21:09:29.458983 systemd[1]: sshd@10-172.24.4.134:22-172.24.4.1:35638.service: Deactivated successfully.
Jan 13 21:09:29.472469 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 21:09:29.477223 systemd-logind[1590]: Session 13 logged out. Waiting for processes to exit.
Jan 13 21:09:29.483914 systemd-logind[1590]: Removed session 13.
Jan 13 21:09:30.685887 sshd[4301]: Accepted publickey for core from 172.24.4.1 port 35652 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:09:30.689184 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:09:30.701175 systemd-logind[1590]: New session 14 of user core.
Jan 13 21:09:30.709547 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 21:09:31.508159 sshd[4307]: Connection closed by 172.24.4.1 port 35652
Jan 13 21:09:31.509805 sshd-session[4301]: pam_unix(sshd:session): session closed for user core
Jan 13 21:09:31.519291 systemd-logind[1590]: Session 14 logged out. Waiting for processes to exit.
Jan 13 21:09:31.521794 systemd[1]: sshd@11-172.24.4.134:22-172.24.4.1:35652.service: Deactivated successfully.
Jan 13 21:09:31.529518 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 21:09:31.532966 systemd-logind[1590]: Removed session 14.
Jan 13 21:09:36.520924 systemd[1]: Started sshd@12-172.24.4.134:22-172.24.4.1:59544.service - OpenSSH per-connection server daemon (172.24.4.1:59544).
Jan 13 21:09:37.906672 sshd[4318]: Accepted publickey for core from 172.24.4.1 port 59544 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:09:37.909186 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:09:37.919699 systemd-logind[1590]: New session 15 of user core.
Jan 13 21:09:37.925282 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 21:09:38.534192 sshd[4321]: Connection closed by 172.24.4.1 port 59544
Jan 13 21:09:38.535515 sshd-session[4318]: pam_unix(sshd:session): session closed for user core
Jan 13 21:09:38.542145 systemd[1]: sshd@12-172.24.4.134:22-172.24.4.1:59544.service: Deactivated successfully.
Jan 13 21:09:38.551158 systemd-logind[1590]: Session 15 logged out. Waiting for processes to exit.
Jan 13 21:09:38.551187 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 21:09:38.555724 systemd-logind[1590]: Removed session 15.
Jan 13 21:09:43.547761 systemd[1]: Started sshd@13-172.24.4.134:22-172.24.4.1:47518.service - OpenSSH per-connection server daemon (172.24.4.1:47518).
Jan 13 21:09:44.741865 sshd[4334]: Accepted publickey for core from 172.24.4.1 port 47518 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:09:44.744508 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:09:44.754160 systemd-logind[1590]: New session 16 of user core.
Jan 13 21:09:44.762599 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 21:09:45.480090 sshd[4337]: Connection closed by 172.24.4.1 port 47518
Jan 13 21:09:45.482294 sshd-session[4334]: pam_unix(sshd:session): session closed for user core
Jan 13 21:09:45.489783 systemd[1]: Started sshd@14-172.24.4.134:22-172.24.4.1:47522.service - OpenSSH per-connection server daemon (172.24.4.1:47522).
Jan 13 21:09:45.492284 systemd[1]: sshd@13-172.24.4.134:22-172.24.4.1:47518.service: Deactivated successfully.
Jan 13 21:09:45.499643 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 21:09:45.504720 systemd-logind[1590]: Session 16 logged out. Waiting for processes to exit.
Jan 13 21:09:45.509635 systemd-logind[1590]: Removed session 16.
Jan 13 21:09:46.862479 sshd[4345]: Accepted publickey for core from 172.24.4.1 port 47522 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:09:46.865662 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:09:46.879794 systemd-logind[1590]: New session 17 of user core.
Jan 13 21:09:46.890666 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 21:09:47.830781 sshd[4351]: Connection closed by 172.24.4.1 port 47522
Jan 13 21:09:47.831888 sshd-session[4345]: pam_unix(sshd:session): session closed for user core
Jan 13 21:09:47.846527 systemd[1]: Started sshd@15-172.24.4.134:22-172.24.4.1:47524.service - OpenSSH per-connection server daemon (172.24.4.1:47524).
Jan 13 21:09:47.851738 systemd[1]: sshd@14-172.24.4.134:22-172.24.4.1:47522.service: Deactivated successfully.
Jan 13 21:09:47.862583 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 21:09:47.866748 systemd-logind[1590]: Session 17 logged out. Waiting for processes to exit.
Jan 13 21:09:47.871581 systemd-logind[1590]: Removed session 17.
Jan 13 21:09:49.702224 sshd[4357]: Accepted publickey for core from 172.24.4.1 port 47524 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:09:49.705151 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:09:49.716491 systemd-logind[1590]: New session 18 of user core.
Jan 13 21:09:49.724494 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 21:09:52.397302 sshd[4363]: Connection closed by 172.24.4.1 port 47524
Jan 13 21:09:52.394648 sshd-session[4357]: pam_unix(sshd:session): session closed for user core
Jan 13 21:09:52.411728 systemd[1]: Started sshd@16-172.24.4.134:22-172.24.4.1:47534.service - OpenSSH per-connection server daemon (172.24.4.1:47534).
Jan 13 21:09:52.416456 systemd[1]: sshd@15-172.24.4.134:22-172.24.4.1:47524.service: Deactivated successfully.
Jan 13 21:09:52.426611 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:09:52.434609 systemd-logind[1590]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:09:52.442153 systemd-logind[1590]: Removed session 18.
Jan 13 21:09:53.958867 sshd[4376]: Accepted publickey for core from 172.24.4.1 port 47534 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:09:53.961475 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:09:53.972115 systemd-logind[1590]: New session 19 of user core.
Jan 13 21:09:53.980543 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:09:55.228699 sshd[4382]: Connection closed by 172.24.4.1 port 47534
Jan 13 21:09:55.228490 sshd-session[4376]: pam_unix(sshd:session): session closed for user core
Jan 13 21:09:55.240235 systemd[1]: Started sshd@17-172.24.4.134:22-172.24.4.1:44466.service - OpenSSH per-connection server daemon (172.24.4.1:44466).
Jan 13 21:09:55.243217 systemd[1]: sshd@16-172.24.4.134:22-172.24.4.1:47534.service: Deactivated successfully.
Jan 13 21:09:55.249190 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:09:55.253308 systemd-logind[1590]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:09:55.261759 systemd-logind[1590]: Removed session 19.
Jan 13 21:09:56.647793 sshd[4387]: Accepted publickey for core from 172.24.4.1 port 44466 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:09:56.650620 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:09:56.660919 systemd-logind[1590]: New session 20 of user core.
Jan 13 21:09:56.670523 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 21:09:57.429810 sshd[4395]: Connection closed by 172.24.4.1 port 44466
Jan 13 21:09:57.430592 sshd-session[4387]: pam_unix(sshd:session): session closed for user core
Jan 13 21:09:57.438745 systemd[1]: sshd@17-172.24.4.134:22-172.24.4.1:44466.service: Deactivated successfully.
Jan 13 21:09:57.445377 systemd-logind[1590]: Session 20 logged out. Waiting for processes to exit.
Jan 13 21:09:57.445663 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 21:09:57.449710 systemd-logind[1590]: Removed session 20.
Jan 13 21:10:02.444722 systemd[1]: Started sshd@18-172.24.4.134:22-172.24.4.1:44470.service - OpenSSH per-connection server daemon (172.24.4.1:44470).
Jan 13 21:10:03.946635 sshd[4409]: Accepted publickey for core from 172.24.4.1 port 44470 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:10:03.948073 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:10:03.959789 systemd-logind[1590]: New session 21 of user core.
Jan 13 21:10:03.967244 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 21:10:04.737796 sshd[4412]: Connection closed by 172.24.4.1 port 44470
Jan 13 21:10:04.738887 sshd-session[4409]: pam_unix(sshd:session): session closed for user core
Jan 13 21:10:04.744385 systemd[1]: sshd@18-172.24.4.134:22-172.24.4.1:44470.service: Deactivated successfully.
Jan 13 21:10:04.752095 systemd-logind[1590]: Session 21 logged out. Waiting for processes to exit.
Jan 13 21:10:04.752446 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 21:10:04.756390 systemd-logind[1590]: Removed session 21.
Jan 13 21:10:09.754662 systemd[1]: Started sshd@19-172.24.4.134:22-172.24.4.1:33910.service - OpenSSH per-connection server daemon (172.24.4.1:33910).
Jan 13 21:10:11.174176 sshd[4423]: Accepted publickey for core from 172.24.4.1 port 33910 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:10:11.176971 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:10:11.189217 systemd-logind[1590]: New session 22 of user core.
Jan 13 21:10:11.195536 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 21:10:11.941677 sshd[4426]: Connection closed by 172.24.4.1 port 33910
Jan 13 21:10:11.943748 sshd-session[4423]: pam_unix(sshd:session): session closed for user core
Jan 13 21:10:11.959524 systemd[1]: Started sshd@20-172.24.4.134:22-172.24.4.1:33912.service - OpenSSH per-connection server daemon (172.24.4.1:33912).
Jan 13 21:10:11.963170 systemd[1]: sshd@19-172.24.4.134:22-172.24.4.1:33910.service: Deactivated successfully.
Jan 13 21:10:11.971552 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 21:10:11.975974 systemd-logind[1590]: Session 22 logged out. Waiting for processes to exit.
Jan 13 21:10:11.980256 systemd-logind[1590]: Removed session 22.
Jan 13 21:10:13.470370 sshd[4434]: Accepted publickey for core from 172.24.4.1 port 33912 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:10:13.473309 sshd-session[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:10:13.483561 systemd-logind[1590]: New session 23 of user core.
Jan 13 21:10:13.494525 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 21:10:15.561664 kubelet[2889]: I0113 21:10:15.561599 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mv98r" podStartSLOduration=141.561554928 podStartE2EDuration="2m21.561554928s" podCreationTimestamp="2025-01-13 21:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:08:36.023070931 +0000 UTC m=+54.502097516" watchObservedRunningTime="2025-01-13 21:10:15.561554928 +0000 UTC m=+154.040581463"
Jan 13 21:10:15.589940 containerd[1611]: time="2025-01-13T21:10:15.588422494Z" level=info msg="StopContainer for \"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427\" with timeout 30 (s)"
Jan 13 21:10:15.589940 containerd[1611]: time="2025-01-13T21:10:15.588873964Z" level=info msg="Stop container \"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427\" with signal terminated"
Jan 13 21:10:15.606222 containerd[1611]: time="2025-01-13T21:10:15.606180617Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:10:15.616367 containerd[1611]: time="2025-01-13T21:10:15.616313758Z" level=info msg="StopContainer for \"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85\" with timeout 2 (s)"
Jan 13 21:10:15.617835 containerd[1611]: time="2025-01-13T21:10:15.617813816Z" level=info msg="Stop container \"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85\" with signal terminated"
Jan 13 21:10:15.627093 systemd-networkd[1209]: lxc_health: Link DOWN
Jan 13 21:10:15.627439 systemd-networkd[1209]: lxc_health: Lost carrier
Jan 13 21:10:15.648840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427-rootfs.mount: Deactivated successfully.
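The "StopContainer ... with timeout" and "with signal terminated" entries above reflect the usual graceful-stop protocol: signal the process, wait out the timeout, then force-kill. A plain os/exec sketch of that pattern against a sleep child; it is a generic illustration of the technique, not containerd's implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        // Stand-in workload; a real runtime signals the container's init process.
        cmd := exec.Command("sleep", "300")
        if err := cmd.Start(); err != nil {
            panic(err)
        }

        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        // Polite stop first, like "Stop container ... with signal terminated".
        _ = cmd.Process.Signal(syscall.SIGTERM)
        select {
        case err := <-done:
            fmt.Println("exited after SIGTERM:", err)
        case <-time.After(30 * time.Second): // the "timeout 30 (s)" in the log
            _ = cmd.Process.Kill() // timeout elapsed: force kill
            fmt.Println("killed after timeout:", <-done)
        }
    }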
Jan 13 21:10:15.663625 containerd[1611]: time="2025-01-13T21:10:15.663411036Z" level=info msg="shim disconnected" id=2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427 namespace=k8s.io
Jan 13 21:10:15.663625 containerd[1611]: time="2025-01-13T21:10:15.663502530Z" level=warning msg="cleaning up after shim disconnected" id=2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427 namespace=k8s.io
Jan 13 21:10:15.663625 containerd[1611]: time="2025-01-13T21:10:15.663512770Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:10:15.673340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85-rootfs.mount: Deactivated successfully.
Jan 13 21:10:15.683526 containerd[1611]: time="2025-01-13T21:10:15.683338401Z" level=info msg="shim disconnected" id=eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85 namespace=k8s.io
Jan 13 21:10:15.683526 containerd[1611]: time="2025-01-13T21:10:15.683397354Z" level=warning msg="cleaning up after shim disconnected" id=eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85 namespace=k8s.io
Jan 13 21:10:15.683526 containerd[1611]: time="2025-01-13T21:10:15.683406481Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:10:15.697030 containerd[1611]: time="2025-01-13T21:10:15.695498122Z" level=info msg="StopContainer for \"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427\" returns successfully"
Jan 13 21:10:15.698600 containerd[1611]: time="2025-01-13T21:10:15.698565545Z" level=info msg="StopPodSandbox for \"2320a23dc39acdee4e1f9085b4dd65725a42367fa2deebb6572f22df31a7a17d\""
Jan 13 21:10:15.698850 containerd[1611]: time="2025-01-13T21:10:15.698766308Z" level=info msg="Container to stop \"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:10:15.704387 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2320a23dc39acdee4e1f9085b4dd65725a42367fa2deebb6572f22df31a7a17d-shm.mount: Deactivated successfully.
Jan 13 21:10:15.721497 containerd[1611]: time="2025-01-13T21:10:15.721446116Z" level=info msg="StopContainer for \"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85\" returns successfully"
Jan 13 21:10:15.722007 containerd[1611]: time="2025-01-13T21:10:15.721958713Z" level=info msg="StopPodSandbox for \"83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5\""
Jan 13 21:10:15.722106 containerd[1611]: time="2025-01-13T21:10:15.722035809Z" level=info msg="Container to stop \"52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:10:15.722106 containerd[1611]: time="2025-01-13T21:10:15.722097477Z" level=info msg="Container to stop \"a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:10:15.722171 containerd[1611]: time="2025-01-13T21:10:15.722109961Z" level=info msg="Container to stop \"2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:10:15.722171 containerd[1611]: time="2025-01-13T21:10:15.722121072Z" level=info msg="Container to stop \"ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:10:15.722171 containerd[1611]: time="2025-01-13T21:10:15.722132544Z" level=info msg="Container to stop \"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:10:15.726582 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5-shm.mount: Deactivated successfully.
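Once its containers have exited, the sandbox itself is stopped. A sketch that issues the same CRI StopPodSandbox call against containerd's socket, dialing as in the image-service sketch earlier; the sandbox id is copied from the log and the socket path is assumed.

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Stop the cilium-sc7bb sandbox seen in the StopPodSandbox entry above.
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        _, err = rt.StopPodSandbox(context.TODO(), &runtimeapi.StopPodSandboxRequest{
            PodSandboxId: "83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5",
        })
        fmt.Println("StopPodSandbox:", err)
    }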
Jan 13 21:10:15.770727 containerd[1611]: time="2025-01-13T21:10:15.770512064Z" level=info msg="shim disconnected" id=83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5 namespace=k8s.io
Jan 13 21:10:15.770727 containerd[1611]: time="2025-01-13T21:10:15.770658623Z" level=warning msg="cleaning up after shim disconnected" id=83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5 namespace=k8s.io
Jan 13 21:10:15.770727 containerd[1611]: time="2025-01-13T21:10:15.770669193Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:10:15.770940 containerd[1611]: time="2025-01-13T21:10:15.770808718Z" level=info msg="shim disconnected" id=2320a23dc39acdee4e1f9085b4dd65725a42367fa2deebb6572f22df31a7a17d namespace=k8s.io
Jan 13 21:10:15.770940 containerd[1611]: time="2025-01-13T21:10:15.770887960Z" level=warning msg="cleaning up after shim disconnected" id=2320a23dc39acdee4e1f9085b4dd65725a42367fa2deebb6572f22df31a7a17d namespace=k8s.io
Jan 13 21:10:15.771027 containerd[1611]: time="2025-01-13T21:10:15.770941441Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:10:15.792748 containerd[1611]: time="2025-01-13T21:10:15.792562263Z" level=info msg="TearDown network for sandbox \"2320a23dc39acdee4e1f9085b4dd65725a42367fa2deebb6572f22df31a7a17d\" successfully"
Jan 13 21:10:15.792748 containerd[1611]: time="2025-01-13T21:10:15.792598171Z" level=info msg="StopPodSandbox for \"2320a23dc39acdee4e1f9085b4dd65725a42367fa2deebb6572f22df31a7a17d\" returns successfully"
Jan 13 21:10:15.795172 containerd[1611]: time="2025-01-13T21:10:15.795045273Z" level=info msg="TearDown network for sandbox \"83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5\" successfully"
Jan 13 21:10:15.795172 containerd[1611]: time="2025-01-13T21:10:15.795168237Z" level=info msg="StopPodSandbox for \"83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5\" returns successfully"
Jan 13 21:10:15.886679 kubelet[2889]: I0113 21:10:15.886651 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-lib-modules\") pod \"d12c9873-1380-4664-9675-5537e6d7cf4c\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") "
Jan 13 21:10:15.887203 kubelet[2889]: I0113 21:10:15.886694 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-cilium-run\") pod \"d12c9873-1380-4664-9675-5537e6d7cf4c\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") "
Jan 13 21:10:15.887203 kubelet[2889]: I0113 21:10:15.886718 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-cni-path\") pod \"d12c9873-1380-4664-9675-5537e6d7cf4c\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") "
Jan 13 21:10:15.887203 kubelet[2889]: I0113 21:10:15.886749 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d12c9873-1380-4664-9675-5537e6d7cf4c-cilium-config-path\") pod \"d12c9873-1380-4664-9675-5537e6d7cf4c\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") "
Jan 13 21:10:15.887203 kubelet[2889]: I0113 21:10:15.886779 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bdb39c5-f754-4d1b-b42e-65e561fdc465-cilium-config-path\") pod \"0bdb39c5-f754-4d1b-b42e-65e561fdc465\" (UID: \"0bdb39c5-f754-4d1b-b42e-65e561fdc465\") "
Jan 13 21:10:15.887203 kubelet[2889]: I0113 21:10:15.886803 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-host-proc-sys-net\") pod \"d12c9873-1380-4664-9675-5537e6d7cf4c\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") "
Jan 13 21:10:15.890714 kubelet[2889]: I0113 21:10:15.890693 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d12c9873-1380-4664-9675-5537e6d7cf4c-clustermesh-secrets\") pod \"d12c9873-1380-4664-9675-5537e6d7cf4c\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") "
Jan 13 21:10:15.890782 kubelet[2889]: I0113 21:10:15.890729 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-cilium-cgroup\") pod \"d12c9873-1380-4664-9675-5537e6d7cf4c\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") "
Jan 13 21:10:15.890782 kubelet[2889]: I0113 21:10:15.890757 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssq9m\" (UniqueName: \"kubernetes.io/projected/d12c9873-1380-4664-9675-5537e6d7cf4c-kube-api-access-ssq9m\") pod \"d12c9873-1380-4664-9675-5537e6d7cf4c\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") "
Jan 13 21:10:15.890782 kubelet[2889]: I0113 21:10:15.890779 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-bpf-maps\") pod \"d12c9873-1380-4664-9675-5537e6d7cf4c\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") "
Jan 13 21:10:15.890923 kubelet[2889]: I0113 21:10:15.890800 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-hostproc\") pod \"d12c9873-1380-4664-9675-5537e6d7cf4c\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") "
Jan 13 21:10:15.890923 kubelet[2889]: I0113 21:10:15.890824 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d12c9873-1380-4664-9675-5537e6d7cf4c-hubble-tls\") pod \"d12c9873-1380-4664-9675-5537e6d7cf4c\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") "
Jan 13 21:10:15.890923 kubelet[2889]: I0113 21:10:15.890845 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-etc-cni-netd\") pod \"d12c9873-1380-4664-9675-5537e6d7cf4c\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") "
Jan 13 21:10:15.890923 kubelet[2889]: I0113 21:10:15.890869 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rs2v\" (UniqueName: \"kubernetes.io/projected/0bdb39c5-f754-4d1b-b42e-65e561fdc465-kube-api-access-2rs2v\") pod \"0bdb39c5-f754-4d1b-b42e-65e561fdc465\" (UID: \"0bdb39c5-f754-4d1b-b42e-65e561fdc465\") "
Jan 13 21:10:15.890923 kubelet[2889]: I0113 21:10:15.890893 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-host-proc-sys-kernel\") pod \"d12c9873-1380-4664-9675-5537e6d7cf4c\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") "
Jan 13 21:10:15.890923 kubelet[2889]: I0113 21:10:15.890914 2889 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-xtables-lock\") pod \"d12c9873-1380-4664-9675-5537e6d7cf4c\" (UID: \"d12c9873-1380-4664-9675-5537e6d7cf4c\") "
Jan 13 21:10:15.891110 kubelet[2889]: I0113 21:10:15.890957 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d12c9873-1380-4664-9675-5537e6d7cf4c" (UID: "d12c9873-1380-4664-9675-5537e6d7cf4c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:10:15.891110 kubelet[2889]: I0113 21:10:15.886947 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-cni-path" (OuterVolumeSpecName: "cni-path") pod "d12c9873-1380-4664-9675-5537e6d7cf4c" (UID: "d12c9873-1380-4664-9675-5537e6d7cf4c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:10:15.891110 kubelet[2889]: I0113 21:10:15.886967 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d12c9873-1380-4664-9675-5537e6d7cf4c" (UID: "d12c9873-1380-4664-9675-5537e6d7cf4c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:10:15.891110 kubelet[2889]: I0113 21:10:15.886980 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d12c9873-1380-4664-9675-5537e6d7cf4c" (UID: "d12c9873-1380-4664-9675-5537e6d7cf4c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:10:15.891110 kubelet[2889]: I0113 21:10:15.888618 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d12c9873-1380-4664-9675-5537e6d7cf4c" (UID: "d12c9873-1380-4664-9675-5537e6d7cf4c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:10:15.892359 kubelet[2889]: I0113 21:10:15.891876 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d12c9873-1380-4664-9675-5537e6d7cf4c" (UID: "d12c9873-1380-4664-9675-5537e6d7cf4c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:10:15.892607 kubelet[2889]: I0113 21:10:15.892566 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d12c9873-1380-4664-9675-5537e6d7cf4c" (UID: "d12c9873-1380-4664-9675-5537e6d7cf4c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:10:15.893264 kubelet[2889]: I0113 21:10:15.893217 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d12c9873-1380-4664-9675-5537e6d7cf4c" (UID: "d12c9873-1380-4664-9675-5537e6d7cf4c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:10:15.895335 kubelet[2889]: I0113 21:10:15.894645 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d12c9873-1380-4664-9675-5537e6d7cf4c-kube-api-access-ssq9m" (OuterVolumeSpecName: "kube-api-access-ssq9m") pod "d12c9873-1380-4664-9675-5537e6d7cf4c" (UID: "d12c9873-1380-4664-9675-5537e6d7cf4c"). InnerVolumeSpecName "kube-api-access-ssq9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 21:10:15.895335 kubelet[2889]: I0113 21:10:15.894834 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdb39c5-f754-4d1b-b42e-65e561fdc465-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0bdb39c5-f754-4d1b-b42e-65e561fdc465" (UID: "0bdb39c5-f754-4d1b-b42e-65e561fdc465"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 21:10:15.895335 kubelet[2889]: I0113 21:10:15.895197 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d12c9873-1380-4664-9675-5537e6d7cf4c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d12c9873-1380-4664-9675-5537e6d7cf4c" (UID: "d12c9873-1380-4664-9675-5537e6d7cf4c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 21:10:15.895335 kubelet[2889]: I0113 21:10:15.895230 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d12c9873-1380-4664-9675-5537e6d7cf4c" (UID: "d12c9873-1380-4664-9675-5537e6d7cf4c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:10:15.895335 kubelet[2889]: I0113 21:10:15.895252 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-hostproc" (OuterVolumeSpecName: "hostproc") pod "d12c9873-1380-4664-9675-5537e6d7cf4c" (UID: "d12c9873-1380-4664-9675-5537e6d7cf4c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:10:15.896593 kubelet[2889]: I0113 21:10:15.896563 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bdb39c5-f754-4d1b-b42e-65e561fdc465-kube-api-access-2rs2v" (OuterVolumeSpecName: "kube-api-access-2rs2v") pod "0bdb39c5-f754-4d1b-b42e-65e561fdc465" (UID: "0bdb39c5-f754-4d1b-b42e-65e561fdc465"). InnerVolumeSpecName "kube-api-access-2rs2v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 21:10:15.896654 kubelet[2889]: I0113 21:10:15.896628 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12c9873-1380-4664-9675-5537e6d7cf4c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d12c9873-1380-4664-9675-5537e6d7cf4c" (UID: "d12c9873-1380-4664-9675-5537e6d7cf4c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 21:10:15.897312 kubelet[2889]: I0113 21:10:15.897266 2889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d12c9873-1380-4664-9675-5537e6d7cf4c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d12c9873-1380-4664-9675-5537e6d7cf4c" (UID: "d12c9873-1380-4664-9675-5537e6d7cf4c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 21:10:15.991615 kubelet[2889]: I0113 21:10:15.991475 2889 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ssq9m\" (UniqueName: \"kubernetes.io/projected/d12c9873-1380-4664-9675-5537e6d7cf4c-kube-api-access-ssq9m\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\""
Jan 13 21:10:15.991615 kubelet[2889]: I0113 21:10:15.991534 2889 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-bpf-maps\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\""
Jan 13 21:10:15.991615 kubelet[2889]: I0113 21:10:15.991566 2889 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-hostproc\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\""
Jan 13 21:10:15.991615 kubelet[2889]: I0113 21:10:15.991598 2889 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d12c9873-1380-4664-9675-5537e6d7cf4c-hubble-tls\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\""
Jan 13 21:10:15.991615 kubelet[2889]: I0113 21:10:15.991626 2889 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-etc-cni-netd\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\""
Jan 13 21:10:15.991615 kubelet[2889]: I0113 21:10:15.991657 2889 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-xtables-lock\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\""
Jan 13 21:10:15.992320 kubelet[2889]: I0113 21:10:15.991690 2889 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2rs2v\" (UniqueName: \"kubernetes.io/projected/0bdb39c5-f754-4d1b-b42e-65e561fdc465-kube-api-access-2rs2v\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\""
Jan 13 21:10:15.992320 kubelet[2889]: I0113 21:10:15.991722 2889 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-host-proc-sys-kernel\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\""
Jan 13 21:10:15.992320 kubelet[2889]: I0113 21:10:15.991754 2889 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-cilium-run\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\""
Jan 13 21:10:15.992320 kubelet[2889]: I0113 21:10:15.991784 2889 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-cni-path\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\""
Jan 13 21:10:15.992320 kubelet[2889]: I0113 21:10:15.991815 2889 reconciler_common.go:300]
"Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d12c9873-1380-4664-9675-5537e6d7cf4c-cilium-config-path\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\"" Jan 13 21:10:15.992320 kubelet[2889]: I0113 21:10:15.991842 2889 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-lib-modules\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\"" Jan 13 21:10:15.992320 kubelet[2889]: I0113 21:10:15.991872 2889 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bdb39c5-f754-4d1b-b42e-65e561fdc465-cilium-config-path\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\"" Jan 13 21:10:15.992807 kubelet[2889]: I0113 21:10:15.991905 2889 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-host-proc-sys-net\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\"" Jan 13 21:10:15.992807 kubelet[2889]: I0113 21:10:15.991935 2889 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d12c9873-1380-4664-9675-5537e6d7cf4c-clustermesh-secrets\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\"" Jan 13 21:10:15.992807 kubelet[2889]: I0113 21:10:15.991963 2889 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d12c9873-1380-4664-9675-5537e6d7cf4c-cilium-cgroup\") on node \"ci-4152-2-0-e-56a5643f90.novalocal\" DevicePath \"\"" Jan 13 21:10:16.285225 kubelet[2889]: I0113 21:10:16.283348 2889 scope.go:117] "RemoveContainer" containerID="2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427" Jan 13 21:10:16.294571 containerd[1611]: time="2025-01-13T21:10:16.293317307Z" level=info msg="RemoveContainer for \"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427\"" Jan 13 21:10:16.492133 containerd[1611]: time="2025-01-13T21:10:16.492064872Z" level=info msg="RemoveContainer for \"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427\" returns successfully" Jan 13 21:10:16.493141 kubelet[2889]: I0113 21:10:16.493089 2889 scope.go:117] "RemoveContainer" containerID="2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427" Jan 13 21:10:16.493854 containerd[1611]: time="2025-01-13T21:10:16.493753208Z" level=error msg="ContainerStatus for \"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427\": not found" Jan 13 21:10:16.494488 kubelet[2889]: E0113 21:10:16.494446 2889 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427\": not found" containerID="2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427" Jan 13 21:10:16.494794 kubelet[2889]: I0113 21:10:16.494629 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427"} err="failed to get container status \"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427\": rpc error: code = NotFound 
desc = an error occurred when try to find container \"2d7185a2914c45adeb4f44de8adb3f540784298ebf28e58057bfd6e960744427\": not found" Jan 13 21:10:16.494794 kubelet[2889]: I0113 21:10:16.494658 2889 scope.go:117] "RemoveContainer" containerID="eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85" Jan 13 21:10:16.497830 containerd[1611]: time="2025-01-13T21:10:16.497703502Z" level=info msg="RemoveContainer for \"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85\"" Jan 13 21:10:16.554616 containerd[1611]: time="2025-01-13T21:10:16.553565378Z" level=info msg="RemoveContainer for \"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85\" returns successfully" Jan 13 21:10:16.555089 kubelet[2889]: I0113 21:10:16.554299 2889 scope.go:117] "RemoveContainer" containerID="52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a" Jan 13 21:10:16.558533 containerd[1611]: time="2025-01-13T21:10:16.558434363Z" level=info msg="RemoveContainer for \"52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a\"" Jan 13 21:10:16.595595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83934eedc2c694295f7c80127a567d256b925b890cee60f26558283881971ea5-rootfs.mount: Deactivated successfully. Jan 13 21:10:16.595934 systemd[1]: var-lib-kubelet-pods-d12c9873\x2d1380\x2d4664\x2d9675\x2d5537e6d7cf4c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 21:10:16.596266 systemd[1]: var-lib-kubelet-pods-d12c9873\x2d1380\x2d4664\x2d9675\x2d5537e6d7cf4c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dssq9m.mount: Deactivated successfully. Jan 13 21:10:16.596525 systemd[1]: var-lib-kubelet-pods-d12c9873\x2d1380\x2d4664\x2d9675\x2d5537e6d7cf4c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:10:16.596766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2320a23dc39acdee4e1f9085b4dd65725a42367fa2deebb6572f22df31a7a17d-rootfs.mount: Deactivated successfully. Jan 13 21:10:16.597255 systemd[1]: var-lib-kubelet-pods-0bdb39c5\x2df754\x2d4d1b\x2db42e\x2d65e561fdc465-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2rs2v.mount: Deactivated successfully. 
Jan 13 21:10:16.605746 containerd[1611]: time="2025-01-13T21:10:16.605428407Z" level=info msg="RemoveContainer for \"52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a\" returns successfully"
Jan 13 21:10:16.609504 containerd[1611]: time="2025-01-13T21:10:16.608923865Z" level=info msg="RemoveContainer for \"ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804\""
Jan 13 21:10:16.609924 kubelet[2889]: I0113 21:10:16.606474 2889 scope.go:117] "RemoveContainer" containerID="ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804"
Jan 13 21:10:16.615330 containerd[1611]: time="2025-01-13T21:10:16.615252370Z" level=info msg="RemoveContainer for \"ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804\" returns successfully"
Jan 13 21:10:16.615741 kubelet[2889]: I0113 21:10:16.615700 2889 scope.go:117] "RemoveContainer" containerID="2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2"
Jan 13 21:10:16.618275 containerd[1611]: time="2025-01-13T21:10:16.618228389Z" level=info msg="RemoveContainer for \"2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2\""
Jan 13 21:10:16.623527 containerd[1611]: time="2025-01-13T21:10:16.623430778Z" level=info msg="RemoveContainer for \"2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2\" returns successfully"
Jan 13 21:10:16.624069 kubelet[2889]: I0113 21:10:16.623836 2889 scope.go:117] "RemoveContainer" containerID="a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe"
Jan 13 21:10:16.627618 containerd[1611]: time="2025-01-13T21:10:16.627061754Z" level=info msg="RemoveContainer for \"a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe\""
Jan 13 21:10:16.632491 containerd[1611]: time="2025-01-13T21:10:16.632442554Z" level=info msg="RemoveContainer for \"a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe\" returns successfully"
Jan 13 21:10:16.633156 kubelet[2889]: I0113 21:10:16.633094 2889 scope.go:117] "RemoveContainer" containerID="eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85"
Jan 13 21:10:16.633731 containerd[1611]: time="2025-01-13T21:10:16.633662007Z" level=error msg="ContainerStatus for \"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85\": not found"
Jan 13 21:10:16.634039 kubelet[2889]: E0113 21:10:16.633973 2889 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85\": not found" containerID="eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85"
Jan 13 21:10:16.634198 kubelet[2889]: I0113 21:10:16.634122 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85"} err="failed to get container status \"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb5c85c1d435d87a2cbc4f90965e3c49874b10ba8798f926e88b94b9dc535c85\": not found"
Jan 13 21:10:16.634198 kubelet[2889]: I0113 21:10:16.634158 2889 scope.go:117] "RemoveContainer" containerID="52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a"
Jan 13 21:10:16.635183 containerd[1611]: time="2025-01-13T21:10:16.635045542Z" level=error msg="ContainerStatus for \"52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a\": not found"
Jan 13 21:10:16.635557 kubelet[2889]: E0113 21:10:16.635519 2889 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a\": not found" containerID="52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a"
Jan 13 21:10:16.635665 kubelet[2889]: I0113 21:10:16.635591 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a"} err="failed to get container status \"52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a\": rpc error: code = NotFound desc = an error occurred when try to find container \"52731cc742f57845b7ac98869f028d0174468273267bd55896593ff50647a31a\": not found"
Jan 13 21:10:16.635665 kubelet[2889]: I0113 21:10:16.635620 2889 scope.go:117] "RemoveContainer" containerID="ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804"
Jan 13 21:10:16.636139 containerd[1611]: time="2025-01-13T21:10:16.636065685Z" level=error msg="ContainerStatus for \"ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804\": not found"
Jan 13 21:10:16.636356 kubelet[2889]: E0113 21:10:16.636320 2889 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804\": not found" containerID="ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804"
Jan 13 21:10:16.636475 kubelet[2889]: I0113 21:10:16.636383 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804"} err="failed to get container status \"ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee523cdf2cfe2091761ee857b384d66a1299a4ed70e663867d5d88cbec253804\": not found"
Jan 13 21:10:16.636475 kubelet[2889]: I0113 21:10:16.636406 2889 scope.go:117] "RemoveContainer" containerID="2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2"
Jan 13 21:10:16.637262 containerd[1611]: time="2025-01-13T21:10:16.636972703Z" level=error msg="ContainerStatus for \"2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2\": not found"
Jan 13 21:10:16.637521 kubelet[2889]: E0113 21:10:16.637381 2889 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2\": not found" containerID="2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2"
Jan 13 21:10:16.637521 kubelet[2889]: I0113 21:10:16.637439 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2"} err="failed to get container status \"2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e6c83cef1cf8202c074fe05de07da84a9a11dec2707ae49522f2fe859357ac2\": not found"
Jan 13 21:10:16.637521 kubelet[2889]: I0113 21:10:16.637462 2889 scope.go:117] "RemoveContainer" containerID="a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe"
Jan 13 21:10:16.637917 containerd[1611]: time="2025-01-13T21:10:16.637841567Z" level=error msg="ContainerStatus for \"a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe\": not found"
Jan 13 21:10:16.638477 kubelet[2889]: E0113 21:10:16.638417 2889 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe\": not found" containerID="a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe"
Jan 13 21:10:16.638652 kubelet[2889]: I0113 21:10:16.638541 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe"} err="failed to get container status \"a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3cc9e63aad1e45951bef4a8629f052a53f90023316d4873dab4f877acba97fe\": not found"
Jan 13 21:10:16.874650 kubelet[2889]: E0113 21:10:16.874605 2889 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:10:17.669040 kubelet[2889]: I0113 21:10:17.667751 2889 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0bdb39c5-f754-4d1b-b42e-65e561fdc465" path="/var/lib/kubelet/pods/0bdb39c5-f754-4d1b-b42e-65e561fdc465/volumes"
Jan 13 21:10:17.669040 kubelet[2889]: I0113 21:10:17.668792 2889 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d12c9873-1380-4664-9675-5537e6d7cf4c" path="/var/lib/kubelet/pods/d12c9873-1380-4664-9675-5537e6d7cf4c/volumes"
Jan 13 21:10:17.713774 sshd[4440]: Connection closed by 172.24.4.1 port 33912
Jan 13 21:10:17.716858 sshd-session[4434]: pam_unix(sshd:session): session closed for user core
Jan 13 21:10:17.736549 systemd[1]: Started sshd@21-172.24.4.134:22-172.24.4.1:39640.service - OpenSSH per-connection server daemon (172.24.4.1:39640).
Jan 13 21:10:17.737760 systemd[1]: sshd@20-172.24.4.134:22-172.24.4.1:33912.service: Deactivated successfully.
Jan 13 21:10:17.752509 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 21:10:17.755413 systemd-logind[1590]: Session 23 logged out. Waiting for processes to exit.
Jan 13 21:10:17.758365 systemd-logind[1590]: Removed session 23.
Jan 13 21:10:19.199833 sshd[4598]: Accepted publickey for core from 172.24.4.1 port 39640 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:10:19.204351 sshd-session[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:10:19.216219 systemd-logind[1590]: New session 24 of user core.
Jan 13 21:10:19.226948 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 21:10:20.401965 kubelet[2889]: I0113 21:10:20.401876 2889 topology_manager.go:215] "Topology Admit Handler" podUID="17fe8719-a463-49ef-93cf-faebbf28453e" podNamespace="kube-system" podName="cilium-8rfjm"
Jan 13 21:10:20.405671 kubelet[2889]: E0113 21:10:20.403140 2889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d12c9873-1380-4664-9675-5537e6d7cf4c" containerName="clean-cilium-state"
Jan 13 21:10:20.405671 kubelet[2889]: E0113 21:10:20.403169 2889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0bdb39c5-f754-4d1b-b42e-65e561fdc465" containerName="cilium-operator"
Jan 13 21:10:20.405671 kubelet[2889]: E0113 21:10:20.403192 2889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d12c9873-1380-4664-9675-5537e6d7cf4c" containerName="mount-cgroup"
Jan 13 21:10:20.405671 kubelet[2889]: E0113 21:10:20.403203 2889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d12c9873-1380-4664-9675-5537e6d7cf4c" containerName="apply-sysctl-overwrites"
Jan 13 21:10:20.405671 kubelet[2889]: E0113 21:10:20.403214 2889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d12c9873-1380-4664-9675-5537e6d7cf4c" containerName="mount-bpf-fs"
Jan 13 21:10:20.405671 kubelet[2889]: E0113 21:10:20.403223 2889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d12c9873-1380-4664-9675-5537e6d7cf4c" containerName="cilium-agent"
Jan 13 21:10:20.405671 kubelet[2889]: I0113 21:10:20.403261 2889 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bdb39c5-f754-4d1b-b42e-65e561fdc465" containerName="cilium-operator"
Jan 13 21:10:20.405671 kubelet[2889]: I0113 21:10:20.403270 2889 memory_manager.go:354] "RemoveStaleState removing state" podUID="d12c9873-1380-4664-9675-5537e6d7cf4c" containerName="cilium-agent"
Jan 13 21:10:20.525376 kubelet[2889]: I0113 21:10:20.525312 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17fe8719-a463-49ef-93cf-faebbf28453e-xtables-lock\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.525500 kubelet[2889]: I0113 21:10:20.525397 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17fe8719-a463-49ef-93cf-faebbf28453e-cilium-config-path\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.525500 kubelet[2889]: I0113 21:10:20.525458 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z52g8\" (UniqueName: \"kubernetes.io/projected/17fe8719-a463-49ef-93cf-faebbf28453e-kube-api-access-z52g8\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.525554 kubelet[2889]: I0113 21:10:20.525506 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/17fe8719-a463-49ef-93cf-faebbf28453e-cilium-run\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.525580 kubelet[2889]: I0113 21:10:20.525559 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/17fe8719-a463-49ef-93cf-faebbf28453e-cilium-ipsec-secrets\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.525976 kubelet[2889]: I0113 21:10:20.525610 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/17fe8719-a463-49ef-93cf-faebbf28453e-hubble-tls\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.525976 kubelet[2889]: I0113 21:10:20.525669 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/17fe8719-a463-49ef-93cf-faebbf28453e-cilium-cgroup\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.525976 kubelet[2889]: I0113 21:10:20.525722 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/17fe8719-a463-49ef-93cf-faebbf28453e-host-proc-sys-net\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.525976 kubelet[2889]: I0113 21:10:20.525777 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/17fe8719-a463-49ef-93cf-faebbf28453e-clustermesh-secrets\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.525976 kubelet[2889]: I0113 21:10:20.525851 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/17fe8719-a463-49ef-93cf-faebbf28453e-hostproc\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.525976 kubelet[2889]: I0113 21:10:20.525880 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17fe8719-a463-49ef-93cf-faebbf28453e-lib-modules\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.526245 kubelet[2889]: I0113 21:10:20.525904 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/17fe8719-a463-49ef-93cf-faebbf28453e-cni-path\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.526245 kubelet[2889]: I0113 21:10:20.525963 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/17fe8719-a463-49ef-93cf-faebbf28453e-bpf-maps\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.526245 kubelet[2889]: I0113 21:10:20.526077 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17fe8719-a463-49ef-93cf-faebbf28453e-etc-cni-netd\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.526245 kubelet[2889]: I0113 21:10:20.526134 2889 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/17fe8719-a463-49ef-93cf-faebbf28453e-host-proc-sys-kernel\") pod \"cilium-8rfjm\" (UID: \"17fe8719-a463-49ef-93cf-faebbf28453e\") " pod="kube-system/cilium-8rfjm"
Jan 13 21:10:20.572176 sshd[4604]: Connection closed by 172.24.4.1 port 39640
Jan 13 21:10:20.572631 sshd-session[4598]: pam_unix(sshd:session): session closed for user core
Jan 13 21:10:20.577383 systemd[1]: sshd@21-172.24.4.134:22-172.24.4.1:39640.service: Deactivated successfully.
Jan 13 21:10:20.577482 systemd-logind[1590]: Session 24 logged out. Waiting for processes to exit.
Jan 13 21:10:20.588676 systemd[1]: Started sshd@22-172.24.4.134:22-172.24.4.1:39642.service - OpenSSH per-connection server daemon (172.24.4.1:39642).
Jan 13 21:10:20.589452 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 21:10:20.592228 systemd-logind[1590]: Removed session 24.
Jan 13 21:10:20.720953 containerd[1611]: time="2025-01-13T21:10:20.720789262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8rfjm,Uid:17fe8719-a463-49ef-93cf-faebbf28453e,Namespace:kube-system,Attempt:0,}"
Jan 13 21:10:20.757055 containerd[1611]: time="2025-01-13T21:10:20.756624902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:10:20.757304 containerd[1611]: time="2025-01-13T21:10:20.757014013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:10:20.757304 containerd[1611]: time="2025-01-13T21:10:20.757060172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:10:20.757304 containerd[1611]: time="2025-01-13T21:10:20.757146355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:10:20.818509 containerd[1611]: time="2025-01-13T21:10:20.818442721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8rfjm,Uid:17fe8719-a463-49ef-93cf-faebbf28453e,Namespace:kube-system,Attempt:0,} returns sandbox id \"51ed47a1ef6c68a24c9f0e09819b4b3605663b17cd8306a0c97140630544f696\""
Jan 13 21:10:20.822655 containerd[1611]: time="2025-01-13T21:10:20.822596876Z" level=info msg="CreateContainer within sandbox \"51ed47a1ef6c68a24c9f0e09819b4b3605663b17cd8306a0c97140630544f696\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:10:20.840255 containerd[1611]: time="2025-01-13T21:10:20.840195804Z" level=info msg="CreateContainer within sandbox \"51ed47a1ef6c68a24c9f0e09819b4b3605663b17cd8306a0c97140630544f696\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"15e4824557ca20c56dcc9440b5e263b242cb84fecc5d23a03bf0a3c68e5f5899\""
Jan 13 21:10:20.840972 containerd[1611]: time="2025-01-13T21:10:20.840921316Z" level=info msg="StartContainer for \"15e4824557ca20c56dcc9440b5e263b242cb84fecc5d23a03bf0a3c68e5f5899\""
Jan 13 21:10:20.904712 containerd[1611]: time="2025-01-13T21:10:20.904662520Z" level=info msg="StartContainer for \"15e4824557ca20c56dcc9440b5e263b242cb84fecc5d23a03bf0a3c68e5f5899\" returns successfully"
Jan 13 21:10:20.960347 containerd[1611]: time="2025-01-13T21:10:20.960285197Z" level=info msg="shim disconnected" id=15e4824557ca20c56dcc9440b5e263b242cb84fecc5d23a03bf0a3c68e5f5899 namespace=k8s.io
Jan 13 21:10:20.960347 containerd[1611]: time="2025-01-13T21:10:20.960339421Z" level=warning msg="cleaning up after shim disconnected" id=15e4824557ca20c56dcc9440b5e263b242cb84fecc5d23a03bf0a3c68e5f5899 namespace=k8s.io
Jan 13 21:10:20.960347 containerd[1611]: time="2025-01-13T21:10:20.960349229Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:10:21.344876 containerd[1611]: time="2025-01-13T21:10:21.343272433Z" level=info msg="CreateContainer within sandbox \"51ed47a1ef6c68a24c9f0e09819b4b3605663b17cd8306a0c97140630544f696\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:10:21.376714 containerd[1611]: time="2025-01-13T21:10:21.376232778Z" level=info msg="CreateContainer within sandbox \"51ed47a1ef6c68a24c9f0e09819b4b3605663b17cd8306a0c97140630544f696\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"65b180c9e9a1f52a19280f3b40d7b5f234c3f825aef6209a59501741270153f7\""
Jan 13 21:10:21.380710 containerd[1611]: time="2025-01-13T21:10:21.378715569Z" level=info msg="StartContainer for \"65b180c9e9a1f52a19280f3b40d7b5f234c3f825aef6209a59501741270153f7\""
Jan 13 21:10:21.456424 containerd[1611]: time="2025-01-13T21:10:21.456374713Z" level=info msg="StartContainer for \"65b180c9e9a1f52a19280f3b40d7b5f234c3f825aef6209a59501741270153f7\" returns successfully"
Jan 13 21:10:21.485738 containerd[1611]: time="2025-01-13T21:10:21.485620270Z" level=info msg="shim disconnected" id=65b180c9e9a1f52a19280f3b40d7b5f234c3f825aef6209a59501741270153f7 namespace=k8s.io
Jan 13 21:10:21.485738 containerd[1611]: time="2025-01-13T21:10:21.485700334Z" level=warning msg="cleaning up after shim disconnected" id=65b180c9e9a1f52a19280f3b40d7b5f234c3f825aef6209a59501741270153f7 namespace=k8s.io
Jan 13 21:10:21.485738 containerd[1611]: time="2025-01-13T21:10:21.485711575Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:10:21.877109 kubelet[2889]: E0113 21:10:21.876906 2889 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:10:21.971423 sshd[4614]: Accepted publickey for core from 172.24.4.1 port 39642 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:10:21.977822 sshd-session[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:10:21.991415 systemd-logind[1590]: New session 25 of user core.
Jan 13 21:10:22.000696 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 21:10:22.365072 containerd[1611]: time="2025-01-13T21:10:22.364909428Z" level=info msg="CreateContainer within sandbox \"51ed47a1ef6c68a24c9f0e09819b4b3605663b17cd8306a0c97140630544f696\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:10:22.482297 containerd[1611]: time="2025-01-13T21:10:22.482176581Z" level=info msg="CreateContainer within sandbox \"51ed47a1ef6c68a24c9f0e09819b4b3605663b17cd8306a0c97140630544f696\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2e6eba27a22bf639cc786d65dbaceb0c5f37c025b8de511113669aacbebd309e\""
Jan 13 21:10:22.484065 containerd[1611]: time="2025-01-13T21:10:22.483945081Z" level=info msg="StartContainer for \"2e6eba27a22bf639cc786d65dbaceb0c5f37c025b8de511113669aacbebd309e\""
Jan 13 21:10:22.623790 containerd[1611]: time="2025-01-13T21:10:22.623029290Z" level=info msg="StartContainer for \"2e6eba27a22bf639cc786d65dbaceb0c5f37c025b8de511113669aacbebd309e\" returns successfully"
Jan 13 21:10:22.651566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e6eba27a22bf639cc786d65dbaceb0c5f37c025b8de511113669aacbebd309e-rootfs.mount: Deactivated successfully.
Jan 13 21:10:22.697407 sshd[4786]: Connection closed by 172.24.4.1 port 39642
Jan 13 21:10:22.698959 sshd-session[4614]: pam_unix(sshd:session): session closed for user core
Jan 13 21:10:22.712788 systemd[1]: Started sshd@23-172.24.4.134:22-172.24.4.1:39658.service - OpenSSH per-connection server daemon (172.24.4.1:39658).
Jan 13 21:10:22.715503 systemd[1]: sshd@22-172.24.4.134:22-172.24.4.1:39642.service: Deactivated successfully.
Jan 13 21:10:22.723280 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 21:10:22.725444 systemd-logind[1590]: Session 25 logged out. Waiting for processes to exit.
Jan 13 21:10:22.731366 systemd-logind[1590]: Removed session 25.
Jan 13 21:10:22.760437 containerd[1611]: time="2025-01-13T21:10:22.760116575Z" level=info msg="shim disconnected" id=2e6eba27a22bf639cc786d65dbaceb0c5f37c025b8de511113669aacbebd309e namespace=k8s.io
Jan 13 21:10:22.760437 containerd[1611]: time="2025-01-13T21:10:22.760306447Z" level=warning msg="cleaning up after shim disconnected" id=2e6eba27a22bf639cc786d65dbaceb0c5f37c025b8de511113669aacbebd309e namespace=k8s.io
Jan 13 21:10:22.760437 containerd[1611]: time="2025-01-13T21:10:22.760368635Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:10:23.381048 containerd[1611]: time="2025-01-13T21:10:23.379381602Z" level=info msg="CreateContainer within sandbox \"51ed47a1ef6c68a24c9f0e09819b4b3605663b17cd8306a0c97140630544f696\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:10:23.634855 containerd[1611]: time="2025-01-13T21:10:23.634653595Z" level=info msg="CreateContainer within sandbox \"51ed47a1ef6c68a24c9f0e09819b4b3605663b17cd8306a0c97140630544f696\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"95d68a77bb27b006424ee772b97ad18427cf8fa31e9c408ea128997dfece9ce9\""
Jan 13 21:10:23.638818 containerd[1611]: time="2025-01-13T21:10:23.638763375Z" level=info msg="StartContainer for \"95d68a77bb27b006424ee772b97ad18427cf8fa31e9c408ea128997dfece9ce9\""
Jan 13 21:10:23.781466 containerd[1611]: time="2025-01-13T21:10:23.781091253Z" level=info msg="StartContainer for \"95d68a77bb27b006424ee772b97ad18427cf8fa31e9c408ea128997dfece9ce9\" returns successfully"
Jan 13 21:10:23.783964 sshd[4834]: Accepted publickey for core from 172.24.4.1 port 39658 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:10:23.787658 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:10:23.803537 systemd-logind[1590]: New session 26 of user core.
Jan 13 21:10:23.806510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95d68a77bb27b006424ee772b97ad18427cf8fa31e9c408ea128997dfece9ce9-rootfs.mount: Deactivated successfully.
Jan 13 21:10:23.812437 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 21:10:23.954439 containerd[1611]: time="2025-01-13T21:10:23.954113435Z" level=info msg="shim disconnected" id=95d68a77bb27b006424ee772b97ad18427cf8fa31e9c408ea128997dfece9ce9 namespace=k8s.io
Jan 13 21:10:23.955859 containerd[1611]: time="2025-01-13T21:10:23.955296579Z" level=warning msg="cleaning up after shim disconnected" id=95d68a77bb27b006424ee772b97ad18427cf8fa31e9c408ea128997dfece9ce9 namespace=k8s.io
Jan 13 21:10:23.955859 containerd[1611]: time="2025-01-13T21:10:23.955457325Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:10:24.403170 containerd[1611]: time="2025-01-13T21:10:24.401437141Z" level=info msg="CreateContainer within sandbox \"51ed47a1ef6c68a24c9f0e09819b4b3605663b17cd8306a0c97140630544f696\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:10:24.431549 containerd[1611]: time="2025-01-13T21:10:24.431497988Z" level=info msg="CreateContainer within sandbox \"51ed47a1ef6c68a24c9f0e09819b4b3605663b17cd8306a0c97140630544f696\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5569aee472846f5dac71269a572933d523ede95d790a3ceb8d2ee42695e5aa5c\""
Jan 13 21:10:24.432593 containerd[1611]: time="2025-01-13T21:10:24.432563278Z" level=info msg="StartContainer for \"5569aee472846f5dac71269a572933d523ede95d790a3ceb8d2ee42695e5aa5c\""
Jan 13 21:10:24.525008 containerd[1611]: time="2025-01-13T21:10:24.523846146Z" level=info msg="StartContainer for \"5569aee472846f5dac71269a572933d523ede95d790a3ceb8d2ee42695e5aa5c\" returns successfully"
Jan 13 21:10:24.920082 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:10:24.972647 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jan 13 21:10:25.297478 kubelet[2889]: I0113 21:10:25.296502 2889 setters.go:568] "Node became not ready" node="ci-4152-2-0-e-56a5643f90.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:10:25Z","lastTransitionTime":"2025-01-13T21:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 21:10:25.439280 kubelet[2889]: I0113 21:10:25.439030 2889 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8rfjm" podStartSLOduration=5.438965739 podStartE2EDuration="5.438965739s" podCreationTimestamp="2025-01-13 21:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:10:25.438513658 +0000 UTC m=+163.917540223" watchObservedRunningTime="2025-01-13 21:10:25.438965739 +0000 UTC m=+163.917992284"
Jan 13 21:10:28.190910 systemd-networkd[1209]: lxc_health: Link UP
Jan 13 21:10:28.206709 systemd-networkd[1209]: lxc_health: Gained carrier
Jan 13 21:10:29.720314 systemd-networkd[1209]: lxc_health: Gained IPv6LL
Jan 13 21:10:33.362063 systemd[1]: run-containerd-runc-k8s.io-5569aee472846f5dac71269a572933d523ede95d790a3ceb8d2ee42695e5aa5c-runc.bwcM4f.mount: Deactivated successfully.
Jan 13 21:10:33.589691 sshd[4895]: Connection closed by 172.24.4.1 port 39658
Jan 13 21:10:33.589504 sshd-session[4834]: pam_unix(sshd:session): session closed for user core
Jan 13 21:10:33.595391 systemd[1]: sshd@23-172.24.4.134:22-172.24.4.1:39658.service: Deactivated successfully.
Jan 13 21:10:33.604344 systemd-logind[1590]: Session 26 logged out. Waiting for processes to exit.
Jan 13 21:10:33.606160 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 21:10:33.608984 systemd-logind[1590]: Removed session 26.