Jan 17 12:11:15.072230 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:11:15.072312 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:11:15.072324 kernel: BIOS-provided physical RAM map:
Jan 17 12:11:15.072332 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 12:11:15.072340 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 12:11:15.072351 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 12:11:15.072360 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 17 12:11:15.072369 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 17 12:11:15.072376 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 12:11:15.072384 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 12:11:15.072392 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 17 12:11:15.072400 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 17 12:11:15.072408 kernel: NX (Execute Disable) protection: active
Jan 17 12:11:15.072416 kernel: APIC: Static calls initialized
Jan 17 12:11:15.072428 kernel: SMBIOS 3.0.0 present.
Jan 17 12:11:15.072436 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 17 12:11:15.072444 kernel: Hypervisor detected: KVM
Jan 17 12:11:15.072453 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:11:15.072461 kernel: kvm-clock: using sched offset of 3924197958 cycles
Jan 17 12:11:15.072471 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:11:15.072480 kernel: tsc: Detected 1996.249 MHz processor
Jan 17 12:11:15.072489 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:11:15.072498 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:11:15.072507 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 17 12:11:15.072516 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 12:11:15.072538 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:11:15.072547 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 17 12:11:15.072555 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:11:15.072566 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 17 12:11:15.072574 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:11:15.072583 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:11:15.072592 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:11:15.072600 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 17 12:11:15.072608 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:11:15.072617 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:11:15.072625 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 17 12:11:15.072634 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 17 12:11:15.072644 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 17 12:11:15.072652 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 17 12:11:15.072661 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 17 12:11:15.072672 kernel: No NUMA configuration found
Jan 17 12:11:15.072681 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 17 12:11:15.072690 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jan 17 12:11:15.072700 kernel: Zone ranges:
Jan 17 12:11:15.072709 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:11:15.072718 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 12:11:15.072727 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 17 12:11:15.072736 kernel: Movable zone start for each node
Jan 17 12:11:15.072744 kernel: Early memory node ranges
Jan 17 12:11:15.072753 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 12:11:15.072762 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 17 12:11:15.072772 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 17 12:11:15.072781 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 17 12:11:15.072790 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:11:15.072798 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 12:11:15.072807 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 17 12:11:15.072816 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 12:11:15.072825 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:11:15.072834 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:11:15.072843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 12:11:15.072853 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:11:15.072862 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:11:15.072871 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:11:15.072880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:11:15.072889 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:11:15.072898 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 12:11:15.072907 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 12:11:15.072915 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 17 12:11:15.072924 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:11:15.072935 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:11:15.072944 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 12:11:15.072953 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 17 12:11:15.072962 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 17 12:11:15.072971 kernel: pcpu-alloc: [0] 0 1
Jan 17 12:11:15.072980 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 12:11:15.072990 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:11:15.073000 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:11:15.073010 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:11:15.073019 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:11:15.073028 kernel: Fallback order for Node 0: 0
Jan 17 12:11:15.073037 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 17 12:11:15.073046 kernel: Policy zone: Normal
Jan 17 12:11:15.073055 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:11:15.073064 kernel: software IO TLB: area num 2.
Jan 17 12:11:15.073073 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 227308K reserved, 0K cma-reserved)
Jan 17 12:11:15.073082 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:11:15.073092 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:11:15.073101 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:11:15.073110 kernel: Dynamic Preempt: voluntary
Jan 17 12:11:15.073119 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:11:15.073131 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:11:15.073141 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:11:15.073150 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:11:15.073159 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:11:15.073167 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:11:15.073177 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:11:15.073187 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:11:15.073196 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 12:11:15.073205 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:11:15.073214 kernel: Console: colour VGA+ 80x25
Jan 17 12:11:15.073223 kernel: printk: console [tty0] enabled
Jan 17 12:11:15.073232 kernel: printk: console [ttyS0] enabled
Jan 17 12:11:15.073242 kernel: ACPI: Core revision 20230628
Jan 17 12:11:15.073251 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:11:15.073260 kernel: x2apic enabled
Jan 17 12:11:15.073270 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:11:15.073279 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 12:11:15.073288 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 12:11:15.073298 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 17 12:11:15.073307 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 17 12:11:15.073316 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 17 12:11:15.073324 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:11:15.073333 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 12:11:15.073342 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:11:15.073353 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:11:15.073362 kernel: Speculative Store Bypass: Vulnerable
Jan 17 12:11:15.073371 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 17 12:11:15.073380 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:11:15.073395 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:11:15.073406 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:11:15.073416 kernel: landlock: Up and running.
Jan 17 12:11:15.073425 kernel: SELinux: Initializing.
Jan 17 12:11:15.073434 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:11:15.073444 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:11:15.073453 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 17 12:11:15.073465 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:11:15.073474 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:11:15.073484 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:11:15.073493 kernel: Performance Events: AMD PMU driver.
Jan 17 12:11:15.073503 kernel: ... version: 0
Jan 17 12:11:15.073514 kernel: ... bit width: 48
Jan 17 12:11:15.073535 kernel: ... generic registers: 4
Jan 17 12:11:15.073544 kernel: ... value mask: 0000ffffffffffff
Jan 17 12:11:15.073554 kernel: ... max period: 00007fffffffffff
Jan 17 12:11:15.073564 kernel: ... fixed-purpose events: 0
Jan 17 12:11:15.073573 kernel: ... event mask: 000000000000000f
Jan 17 12:11:15.073582 kernel: signal: max sigframe size: 1440
Jan 17 12:11:15.073592 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:11:15.073601 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:11:15.073614 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:11:15.073624 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:11:15.073633 kernel: .... node #0, CPUs: #1
Jan 17 12:11:15.073642 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:11:15.073652 kernel: smpboot: Max logical packages: 2
Jan 17 12:11:15.073661 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 17 12:11:15.073671 kernel: devtmpfs: initialized
Jan 17 12:11:15.073680 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:11:15.073690 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:11:15.073699 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:11:15.073710 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:11:15.073720 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:11:15.073729 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:11:15.073739 kernel: audit: type=2000 audit(1737115874.084:1): state=initialized audit_enabled=0 res=1
Jan 17 12:11:15.073748 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:11:15.073758 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:11:15.073767 kernel: cpuidle: using governor menu
Jan 17 12:11:15.073777 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:11:15.073787 kernel: dca service started, version 1.12.1
Jan 17 12:11:15.073797 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:11:15.073807 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:11:15.073817 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:11:15.073826 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:11:15.073835 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:11:15.073845 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:11:15.073854 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:11:15.073863 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:11:15.073873 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:11:15.073885 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:11:15.073894 kernel: ACPI: Interpreter enabled
Jan 17 12:11:15.073904 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 12:11:15.073913 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:11:15.073922 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:11:15.073932 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:11:15.073941 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 12:11:15.073950 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:11:15.074093 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:11:15.074202 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 12:11:15.074311 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 12:11:15.074326 kernel: acpiphp: Slot [3] registered
Jan 17 12:11:15.074336 kernel: acpiphp: Slot [4] registered
Jan 17 12:11:15.074345 kernel: acpiphp: Slot [5] registered
Jan 17 12:11:15.074354 kernel: acpiphp: Slot [6] registered
Jan 17 12:11:15.074364 kernel: acpiphp: Slot [7] registered
Jan 17 12:11:15.074376 kernel: acpiphp: Slot [8] registered
Jan 17 12:11:15.074385 kernel: acpiphp: Slot [9] registered
Jan 17 12:11:15.074395 kernel: acpiphp: Slot [10] registered
Jan 17 12:11:15.074404 kernel: acpiphp: Slot [11] registered
Jan 17 12:11:15.074413 kernel: acpiphp: Slot [12] registered
Jan 17 12:11:15.074422 kernel: acpiphp: Slot [13] registered
Jan 17 12:11:15.074432 kernel: acpiphp: Slot [14] registered
Jan 17 12:11:15.074441 kernel: acpiphp: Slot [15] registered
Jan 17 12:11:15.074450 kernel: acpiphp: Slot [16] registered
Jan 17 12:11:15.074461 kernel: acpiphp: Slot [17] registered
Jan 17 12:11:15.074471 kernel: acpiphp: Slot [18] registered
Jan 17 12:11:15.074480 kernel: acpiphp: Slot [19] registered
Jan 17 12:11:15.074489 kernel: acpiphp: Slot [20] registered
Jan 17 12:11:15.074498 kernel: acpiphp: Slot [21] registered
Jan 17 12:11:15.074508 kernel: acpiphp: Slot [22] registered
Jan 17 12:11:15.074517 kernel: acpiphp: Slot [23] registered
Jan 17 12:11:15.075112 kernel: acpiphp: Slot [24] registered
Jan 17 12:11:15.075122 kernel: acpiphp: Slot [25] registered
Jan 17 12:11:15.075133 kernel: acpiphp: Slot [26] registered
Jan 17 12:11:15.075146 kernel: acpiphp: Slot [27] registered
Jan 17 12:11:15.075155 kernel: acpiphp: Slot [28] registered
Jan 17 12:11:15.075164 kernel: acpiphp: Slot [29] registered
Jan 17 12:11:15.075173 kernel: acpiphp: Slot [30] registered
Jan 17 12:11:15.075182 kernel: acpiphp: Slot [31] registered
Jan 17 12:11:15.075192 kernel: PCI host bridge to bus 0000:00
Jan 17 12:11:15.075319 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:11:15.075410 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:11:15.075503 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:11:15.075620 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 12:11:15.075705 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 17 12:11:15.075790 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:11:15.075909 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 12:11:15.076014 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 12:11:15.076122 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 17 12:11:15.076228 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 17 12:11:15.076323 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 17 12:11:15.076412 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 17 12:11:15.076502 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 17 12:11:15.078604 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 17 12:11:15.078716 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 12:11:15.078814 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 17 12:11:15.078903 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 17 12:11:15.079004 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 17 12:11:15.079095 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 17 12:11:15.079725 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 17 12:11:15.079830 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 17 12:11:15.079928 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 17 12:11:15.080029 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 12:11:15.080135 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:11:15.080233 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 17 12:11:15.080333 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 17 12:11:15.080422 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 17 12:11:15.080510 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 17 12:11:15.081653 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:11:15.081751 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 12:11:15.081841 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 17 12:11:15.081932 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 17 12:11:15.082280 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 17 12:11:15.082376 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 17 12:11:15.082466 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 17 12:11:15.082606 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 12:11:15.082704 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 17 12:11:15.082792 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 17 12:11:15.082890 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 17 12:11:15.082904 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:11:15.082913 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:11:15.082922 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:11:15.082932 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:11:15.082941 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 12:11:15.082953 kernel: iommu: Default domain type: Translated
Jan 17 12:11:15.082962 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:11:15.082971 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:11:15.082980 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:11:15.082989 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 12:11:15.082997 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 17 12:11:15.083084 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 17 12:11:15.083172 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 17 12:11:15.083266 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:11:15.083291 kernel: vgaarb: loaded
Jan 17 12:11:15.083300 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:11:15.083309 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:11:15.083318 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:11:15.083327 kernel: pnp: PnP ACPI init
Jan 17 12:11:15.083419 kernel: pnp 00:03: [dma 2]
Jan 17 12:11:15.083433 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 12:11:15.083442 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:11:15.083454 kernel: NET: Registered PF_INET protocol family
Jan 17 12:11:15.083463 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:11:15.083472 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 12:11:15.083481 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:11:15.083490 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:11:15.083499 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 12:11:15.083508 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 12:11:15.083517 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:11:15.083540 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:11:15.083551 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:11:15.083560 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:11:15.083648 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:11:15.083729 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:11:15.083809 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:11:15.083889 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 17 12:11:15.083976 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 17 12:11:15.084070 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 17 12:11:15.084679 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 12:11:15.084694 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:11:15.084704 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 12:11:15.084714 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 17 12:11:15.084723 kernel: Initialise system trusted keyrings
Jan 17 12:11:15.084733 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 12:11:15.084742 kernel: Key type asymmetric registered
Jan 17 12:11:15.084752 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:11:15.084764 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:11:15.084774 kernel: io scheduler mq-deadline registered
Jan 17 12:11:15.084783 kernel: io scheduler kyber registered
Jan 17 12:11:15.084793 kernel: io scheduler bfq registered
Jan 17 12:11:15.084802 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:11:15.084813 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 17 12:11:15.084822 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 12:11:15.084832 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 12:11:15.084842 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 12:11:15.084853 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:11:15.084863 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:11:15.084873 kernel: random: crng init done
Jan 17 12:11:15.084882 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:11:15.084892 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:11:15.084901 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:11:15.084999 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 17 12:11:15.085015 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:11:15.085101 kernel: rtc_cmos 00:04: registered as rtc0
Jan 17 12:11:15.085196 kernel: rtc_cmos 00:04: setting system clock to 2025-01-17T12:11:14 UTC (1737115874)
Jan 17 12:11:15.085291 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 17 12:11:15.085309 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 12:11:15.085318 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:11:15.085327 kernel: Segment Routing with IPv6
Jan 17 12:11:15.085335 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:11:15.085344 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:11:15.085353 kernel: Key type dns_resolver registered
Jan 17 12:11:15.085365 kernel: IPI shorthand broadcast: enabled
Jan 17 12:11:15.085374 kernel: sched_clock: Marking stable (1053009336, 172556703)->(1260857197, -35291158)
Jan 17 12:11:15.085383 kernel: registered taskstats version 1
Jan 17 12:11:15.085392 kernel: Loading compiled-in X.509 certificates
Jan 17 12:11:15.085401 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:11:15.085410 kernel: Key type .fscrypt registered
Jan 17 12:11:15.085418 kernel: Key type fscrypt-provisioning registered
Jan 17 12:11:15.085427 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:11:15.085436 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:11:15.085447 kernel: ima: No architecture policies found
Jan 17 12:11:15.085455 kernel: clk: Disabling unused clocks
Jan 17 12:11:15.085464 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:11:15.085473 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:11:15.085482 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:11:15.085491 kernel: Run /init as init process
Jan 17 12:11:15.085500 kernel: with arguments:
Jan 17 12:11:15.085509 kernel: /init
Jan 17 12:11:15.085517 kernel: with environment:
Jan 17 12:11:15.086568 kernel: HOME=/
Jan 17 12:11:15.086578 kernel: TERM=linux
Jan 17 12:11:15.086587 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:11:15.086599 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:11:15.086611 systemd[1]: Detected virtualization kvm.
Jan 17 12:11:15.086620 systemd[1]: Detected architecture x86-64.
Jan 17 12:11:15.086630 systemd[1]: Running in initrd.
Jan 17 12:11:15.086641 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:11:15.086650 systemd[1]: Hostname set to .
Jan 17 12:11:15.086660 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:11:15.086670 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:11:15.086680 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:11:15.086689 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:11:15.086700 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:11:15.086717 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:11:15.086729 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:11:15.086739 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:11:15.086751 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:11:15.086761 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:11:15.086771 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:11:15.086783 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:11:15.086793 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:11:15.086802 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:11:15.086812 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:11:15.086822 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:11:15.086832 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:11:15.086841 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:11:15.086851 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:11:15.086863 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:11:15.086873 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:11:15.086883 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:11:15.086892 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:11:15.086902 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:11:15.086912 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:11:15.086922 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:11:15.086932 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:11:15.086942 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:11:15.086953 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:11:15.086963 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:11:15.086992 systemd-journald[184]: Collecting audit messages is disabled.
Jan 17 12:11:15.087016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:11:15.087029 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:11:15.087039 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:11:15.087049 systemd-journald[184]: Journal started
Jan 17 12:11:15.087074 systemd-journald[184]: Runtime Journal (/run/log/journal/53e8f721026c4b5392e9b3b43b01db91) is 8.0M, max 78.3M, 70.3M free.
Jan 17 12:11:15.077024 systemd-modules-load[185]: Inserted module 'overlay'
Jan 17 12:11:15.091547 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:11:15.091574 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:11:15.112557 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:11:15.113971 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 17 12:11:15.138206 kernel: Bridge firewalling registered
Jan 17 12:11:15.138143 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:11:15.138936 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:11:15.149825 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:11:15.151975 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:11:15.154753 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:11:15.166300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:11:15.176273 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:11:15.185881 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:11:15.188960 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:11:15.190259 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:11:15.191709 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:11:15.199724 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:11:15.202689 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:11:15.203536 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:11:15.222159 dracut-cmdline[217]: dracut-dracut-053
Jan 17 12:11:15.227058 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:11:15.239293 systemd-resolved[218]: Positive Trust Anchors:
Jan 17 12:11:15.239308 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:11:15.239349 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:11:15.242598 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jan 17 12:11:15.243962 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:11:15.245656 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:11:15.319629 kernel: SCSI subsystem initialized
Jan 17 12:11:15.330610 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:11:15.344649 kernel: iscsi: registered transport (tcp)
Jan 17 12:11:15.367977 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:11:15.368042 kernel: QLogic iSCSI HBA Driver
Jan 17 12:11:15.430837 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:11:15.442874 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:11:15.494630 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:11:15.494760 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:11:15.497593 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:11:15.546627 kernel: raid6: sse2x4 gen() 12931 MB/s
Jan 17 12:11:15.564634 kernel: raid6: sse2x2 gen() 14349 MB/s
Jan 17 12:11:15.582930 kernel: raid6: sse2x1 gen() 9815 MB/s
Jan 17 12:11:15.582991 kernel: raid6: using algorithm sse2x2 gen() 14349 MB/s
Jan 17 12:11:15.602095 kernel: raid6: .... xor() 9344 MB/s, rmw enabled
Jan 17 12:11:15.602189 kernel: raid6: using ssse3x2 recovery algorithm
Jan 17 12:11:15.625233 kernel: xor: measuring software checksum speed
Jan 17 12:11:15.625313 kernel: prefetch64-sse : 18469 MB/sec
Jan 17 12:11:15.625747 kernel: generic_sse : 16844 MB/sec
Jan 17 12:11:15.626864 kernel: xor: using function: prefetch64-sse (18469 MB/sec)
Jan 17 12:11:15.818663 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:11:15.836872 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:11:15.843854 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:11:15.856759 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jan 17 12:11:15.861150 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:11:15.870879 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:11:15.898122 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Jan 17 12:11:15.944803 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:11:15.953874 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:11:16.004281 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:11:16.012850 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:11:16.053943 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:11:16.055827 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:11:16.057127 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:11:16.059151 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:11:16.066716 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:11:16.082545 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 17 12:11:16.137215 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 17 12:11:16.137337 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:11:16.137351 kernel: GPT:17805311 != 20971519
Jan 17 12:11:16.137363 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:11:16.137375 kernel: GPT:17805311 != 20971519
Jan 17 12:11:16.137385 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:11:16.137400 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:11:16.082721 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:11:16.129810 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:11:16.129954 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:11:16.130713 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:11:16.131290 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:11:16.143854 kernel: libata version 3.00 loaded.
Jan 17 12:11:16.131421 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:11:16.132015 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:11:16.140318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:11:16.163482 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 17 12:11:16.193650 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (469)
Jan 17 12:11:16.193756 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (467)
Jan 17 12:11:16.193775 kernel: scsi host0: ata_piix
Jan 17 12:11:16.193902 kernel: scsi host1: ata_piix
Jan 17 12:11:16.194009 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 17 12:11:16.194022 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 17 12:11:16.209618 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 12:11:16.245069 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:11:16.253293 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 12:11:16.258708 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 12:11:16.259361 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 12:11:16.267389 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:11:16.278937 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:11:16.284117 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:11:16.317487 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:11:16.528923 disk-uuid[503]: Primary Header is updated.
Jan 17 12:11:16.528923 disk-uuid[503]: Secondary Entries is updated.
Jan 17 12:11:16.528923 disk-uuid[503]: Secondary Header is updated.
Jan 17 12:11:16.540600 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:11:16.554597 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:11:16.569661 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:11:17.571602 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:11:17.572617 disk-uuid[513]: The operation has completed successfully.
Jan 17 12:11:17.652604 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:11:17.652704 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:11:17.676653 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:11:17.703013 sh[528]: Success
Jan 17 12:11:17.733645 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 17 12:11:17.851317 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:11:17.854748 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:11:17.863730 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:11:17.907614 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:11:17.907705 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:11:17.912268 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:11:17.917199 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:11:17.920937 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:11:17.940947 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:11:17.943333 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:11:17.952859 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:11:17.965035 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:11:18.001673 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:11:18.001780 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:11:18.001804 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:11:18.009542 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:11:18.022179 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:11:18.026566 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:11:18.039217 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:11:18.049733 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:11:18.103054 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:11:18.109676 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:11:18.139070 systemd-networkd[712]: lo: Link UP
Jan 17 12:11:18.139080 systemd-networkd[712]: lo: Gained carrier
Jan 17 12:11:18.140322 systemd-networkd[712]: Enumeration completed
Jan 17 12:11:18.140497 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:11:18.140851 systemd-networkd[712]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:11:18.140855 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:11:18.141440 systemd[1]: Reached target network.target - Network.
Jan 17 12:11:18.143223 systemd-networkd[712]: eth0: Link UP
Jan 17 12:11:18.143228 systemd-networkd[712]: eth0: Gained carrier
Jan 17 12:11:18.143238 systemd-networkd[712]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:11:18.169053 systemd-networkd[712]: eth0: DHCPv4 address 172.24.4.139/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 17 12:11:18.204842 ignition[650]: Ignition 2.19.0
Jan 17 12:11:18.205691 ignition[650]: Stage: fetch-offline
Jan 17 12:11:18.206226 ignition[650]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:11:18.206775 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 12:11:18.206910 ignition[650]: parsed url from cmdline: ""
Jan 17 12:11:18.206915 ignition[650]: no config URL provided
Jan 17 12:11:18.206922 ignition[650]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:11:18.206932 ignition[650]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:11:18.209637 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:11:18.206938 ignition[650]: failed to fetch config: resource requires networking
Jan 17 12:11:18.207134 ignition[650]: Ignition finished successfully
Jan 17 12:11:18.215716 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:11:18.231771 ignition[723]: Ignition 2.19.0
Jan 17 12:11:18.231792 ignition[723]: Stage: fetch
Jan 17 12:11:18.232086 ignition[723]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:11:18.232107 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 12:11:18.232273 ignition[723]: parsed url from cmdline: ""
Jan 17 12:11:18.232279 ignition[723]: no config URL provided
Jan 17 12:11:18.232289 ignition[723]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:11:18.232305 ignition[723]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:11:18.232501 ignition[723]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 17 12:11:18.232551 ignition[723]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 17 12:11:18.232768 ignition[723]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 17 12:11:18.421016 ignition[723]: GET result: OK
Jan 17 12:11:18.421409 ignition[723]: parsing config with SHA512: 26320eac4a30428358ab77a4dcd115ac8b53da1ec1c0e20c69239db5f75533f37b5443c8067dcdc89ef11bfae54d97951506c3a70fb9f41dc982e83660818827
Jan 17 12:11:18.434729 unknown[723]: fetched base config from "system"
Jan 17 12:11:18.434783 unknown[723]: fetched base config from "system"
Jan 17 12:11:18.435878 ignition[723]: fetch: fetch complete
Jan 17 12:11:18.434798 unknown[723]: fetched user config from "openstack"
Jan 17 12:11:18.435895 ignition[723]: fetch: fetch passed
Jan 17 12:11:18.439380 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:11:18.436002 ignition[723]: Ignition finished successfully
Jan 17 12:11:18.448966 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:11:18.490695 ignition[729]: Ignition 2.19.0
Jan 17 12:11:18.490722 ignition[729]: Stage: kargs
Jan 17 12:11:18.491131 ignition[729]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:11:18.491159 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 12:11:18.493499 ignition[729]: kargs: kargs passed
Jan 17 12:11:18.493655 ignition[729]: Ignition finished successfully
Jan 17 12:11:18.495122 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:11:18.504681 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:11:18.533438 ignition[735]: Ignition 2.19.0
Jan 17 12:11:18.533466 ignition[735]: Stage: disks
Jan 17 12:11:18.533934 ignition[735]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:11:18.533961 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 12:11:18.536298 ignition[735]: disks: disks passed
Jan 17 12:11:18.536417 ignition[735]: Ignition finished successfully
Jan 17 12:11:18.538325 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:11:18.539100 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:11:18.540079 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:11:18.541364 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:11:18.542638 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:11:18.543691 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:11:18.550654 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:11:18.569154 systemd-fsck[743]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 12:11:18.581500 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:11:18.586656 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:11:18.699624 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:11:18.701752 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:11:18.704123 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:11:18.713706 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:11:18.716964 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:11:18.718988 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 12:11:18.721206 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 17 12:11:18.723577 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:11:18.723646 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:11:18.731552 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (751)
Jan 17 12:11:18.736590 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:11:18.736689 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:11:18.736721 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:11:18.743882 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:11:18.762764 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:11:18.751755 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:11:18.767324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:11:18.875018 initrd-setup-root[779]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:11:18.883432 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:11:18.894347 initrd-setup-root[793]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:11:18.903585 initrd-setup-root[800]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:11:19.201157 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:11:19.211725 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:11:19.215893 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:11:19.238816 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:11:19.244840 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:11:19.284557 ignition[867]: INFO : Ignition 2.19.0
Jan 17 12:11:19.284557 ignition[867]: INFO : Stage: mount
Jan 17 12:11:19.287837 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:11:19.287837 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 12:11:19.287837 ignition[867]: INFO : mount: mount passed
Jan 17 12:11:19.287837 ignition[867]: INFO : Ignition finished successfully
Jan 17 12:11:19.287304 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 12:11:19.296459 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:11:19.976808 systemd-networkd[712]: eth0: Gained IPv6LL
Jan 17 12:11:25.969031 coreos-metadata[753]: Jan 17 12:11:25.968 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 12:11:26.009728 coreos-metadata[753]: Jan 17 12:11:26.009 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 17 12:11:26.025005 coreos-metadata[753]: Jan 17 12:11:26.024 INFO Fetch successful
Jan 17 12:11:26.026459 coreos-metadata[753]: Jan 17 12:11:26.025 INFO wrote hostname ci-4081-3-0-e-f0dad07f0f.novalocal to /sysroot/etc/hostname
Jan 17 12:11:26.029050 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 17 12:11:26.029282 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 17 12:11:26.040755 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:11:26.070888 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:11:26.090649 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (884)
Jan 17 12:11:26.092618 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:11:26.097722 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:11:26.101885 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:11:26.113600 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:11:26.118285 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:11:26.163675 ignition[902]: INFO : Ignition 2.19.0
Jan 17 12:11:26.163675 ignition[902]: INFO : Stage: files
Jan 17 12:11:26.163675 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:11:26.163675 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 12:11:26.170210 ignition[902]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 12:11:26.172071 ignition[902]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 12:11:26.172071 ignition[902]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:11:26.177827 ignition[902]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:11:26.178768 ignition[902]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 12:11:26.179603 ignition[902]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:11:26.178831 unknown[902]: wrote ssh authorized keys file for user: core
Jan 17 12:11:26.183029 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:11:26.184104 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 17 12:11:26.248165 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 12:11:26.542916 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:11:26.542916 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 12:11:26.547730 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 17 12:11:27.065169 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 12:11:27.491461 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 12:11:27.491461 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 12:11:27.496420 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:11:27.496420 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:11:27.496420 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:11:27.496420 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:11:27.496420 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:11:27.496420 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:11:27.496420 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:11:27.496420 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:11:27.496420 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:11:27.496420 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:11:27.496420 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:11:27.496420 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:11:27.496420 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 17 12:11:28.084306 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:11:30.527135 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:11:30.527135 ignition[902]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 17 12:11:30.535877 ignition[902]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:11:30.535877 ignition[902]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:11:30.535877 ignition[902]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 17 12:11:30.535877 ignition[902]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:11:30.535877 ignition[902]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:11:30.535877 ignition[902]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:11:30.535877 ignition[902]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:11:30.535877 ignition[902]: INFO : files: files passed Jan 17 12:11:30.535877 ignition[902]: INFO : Ignition finished successfully Jan 17 12:11:30.531237 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:11:30.543817 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:11:30.547543 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:11:30.556277 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:11:30.556503 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
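The Ignition files stage above runs a series of createFiles ops: each GETs a source URL and writes the payload at a path beneath /sysroot (the log notes Ignition was compiled without relabeling support, so no SELinux relabel follows the write). A rough Python sketch of what a single op does; the function name and mode default are hypothetical, and Ignition itself is implemented in Go, not Python:

```python
import os
import urllib.request

SYSROOT = "/sysroot"  # Ignition writes beneath the new root before pivoting

def create_file(url: str, path: str, mode: int = 0o644) -> None:
    """Rough equivalent of one createFiles op: GET the source URL and
    write the payload beneath /sysroot, creating parent directories."""
    dest = os.path.join(SYSROOT, path.lstrip("/"))
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        out.write(resp.read())
    os.chmod(dest, mode)

# Mirroring op(3) from the log:
# create_file("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz",
#             "/opt/helm-v3.13.2-linux-amd64.tar.gz")
```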
Jan 17 12:11:30.569011 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:11:30.569011 initrd-setup-root-after-ignition[931]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:11:30.572601 initrd-setup-root-after-ignition[935]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:11:30.575830 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:11:30.578944 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:11:30.594784 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:11:30.657832 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:11:30.658073 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:11:30.661315 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:11:30.664196 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:11:30.667179 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:11:30.682910 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:11:30.713836 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:11:30.723808 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:11:30.758700 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:11:30.762154 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:11:30.764007 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:11:30.766880 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:11:30.767173 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:11:30.770397 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:11:30.772472 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:11:30.775403 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:11:30.777969 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:11:30.780595 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:11:30.783505 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:11:30.786488 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:11:30.789658 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:11:30.792694 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:11:30.795676 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:11:30.798360 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:11:30.798677 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:11:30.801894 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:11:30.804003 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:11:30.806944 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:11:30.809284 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 17 12:11:30.811712 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:11:30.812118 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:11:30.815496 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:11:30.815948 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:11:30.819658 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:11:30.820031 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:11:30.831079 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:11:30.833704 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:11:30.834022 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:11:30.845910 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:11:30.847242 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:11:30.848832 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:11:30.859000 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:11:30.859562 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:11:30.873700 ignition[955]: INFO : Ignition 2.19.0 Jan 17 12:11:30.873929 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:11:30.874081 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:11:30.878378 ignition[955]: INFO : Stage: umount Jan 17 12:11:30.878378 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:11:30.878378 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:11:30.878378 ignition[955]: INFO : umount: umount passed Jan 17 12:11:30.878378 ignition[955]: INFO : Ignition finished successfully Jan 17 12:11:30.878241 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:11:30.878350 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:11:30.879361 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:11:30.879432 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:11:30.880122 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:11:30.880163 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:11:30.880769 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:11:30.880810 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:11:30.881985 systemd[1]: Stopped target network.target - Network. Jan 17 12:11:30.884560 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:11:30.884613 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:11:30.886902 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:11:30.887403 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:11:30.893822 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:11:30.895070 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:11:30.896724 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:11:30.897533 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:11:30.897575 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 17 12:11:30.898073 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:11:30.898105 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:11:30.900641 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:11:30.900685 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:11:30.901317 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:11:30.901356 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:11:30.902134 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:11:30.904789 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:11:30.906572 systemd-networkd[712]: eth0: DHCPv6 lease lost Jan 17 12:11:30.906767 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:11:30.907731 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:11:30.907846 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:11:30.909667 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:11:30.909720 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:11:30.917708 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:11:30.920135 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:11:30.920190 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:11:30.921335 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:11:30.922638 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:11:30.922723 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:11:30.926016 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:11:30.926084 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:11:30.929091 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:11:30.929138 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:11:30.930131 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:11:30.930171 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:11:30.932730 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:11:30.932868 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:11:30.934800 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:11:30.934839 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:11:30.938469 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:11:30.938503 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:11:30.939954 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:11:30.940001 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:11:30.941627 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:11:30.941669 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:11:30.942778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:11:30.942818 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:11:30.946664 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:11:30.947273 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:11:30.947320 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:11:30.949675 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:11:30.949724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:11:30.950711 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:11:30.951484 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:11:30.958571 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:11:30.958703 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:11:31.065617 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:11:31.065873 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:11:31.069643 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:11:31.071408 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:11:31.071577 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:11:31.086957 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:11:31.102157 systemd[1]: Switching root. Jan 17 12:11:31.151826 systemd-journald[184]: Journal stopped Jan 17 12:11:33.191160 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 17 12:11:33.191221 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:11:33.191237 kernel: SELinux: policy capability open_perms=1 Jan 17 12:11:33.191253 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:11:33.191269 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:11:33.191281 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:11:33.191293 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:11:33.191306 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:11:33.191317 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:11:33.191329 systemd[1]: Successfully loaded SELinux policy in 78.156ms. Jan 17 12:11:33.191359 kernel: audit: type=1403 audit(1737115891.720:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:11:33.191372 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.358ms. Jan 17 12:11:33.191393 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:11:33.191409 systemd[1]: Detected virtualization kvm. Jan 17 12:11:33.191424 systemd[1]: Detected architecture x86-64. Jan 17 12:11:33.191438 systemd[1]: Detected first boot. Jan 17 12:11:33.191452 systemd[1]: Hostname set to ci-4081-3-0-e-f0dad07f0f.novalocal. Jan 17 12:11:33.191465 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:11:33.191480 zram_generator::config[998]: No configuration found. Jan 17 12:11:33.194387 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:11:33.194406 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:11:33.194424 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
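Across the switch-root above, the initrd's journald is stopped with SIGTERM and a new instance starts in the real root, so both halves of the boot land in the same journal. A sketch of pulling these milestones back out afterwards, assuming the python-systemd bindings (python3-systemd) are available on the host and the script runs during the same boot:

```python
from systemd import journal

reader = journal.Reader()
reader.this_boot()  # restrict to the boot captured in this log

for entry in reader:
    msg = entry.get("MESSAGE", "")
    # Surface the switch-root and SELinux milestones seen above.
    if "Switching root" in msg or "SELinux policy" in msg:
        print(entry["__REALTIME_TIMESTAMP"], entry.get("SYSLOG_IDENTIFIER"), msg)
```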
Jan 17 12:11:33.194439 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:11:33.194453 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:11:33.194470 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:11:33.194484 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:11:33.194497 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:11:33.194511 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:11:33.194552 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:11:33.194567 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:11:33.194602 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:11:33.194617 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:11:33.194631 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:11:33.194648 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:11:33.194661 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:11:33.194675 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:11:33.194689 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:11:33.194702 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:11:33.194715 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:11:33.194729 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:11:33.194742 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:11:33.194758 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:11:33.194771 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:11:33.194860 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:11:33.194887 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:11:33.194900 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:11:33.194913 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:11:33.194927 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:11:33.194943 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:11:33.194956 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:11:33.194970 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:11:33.194983 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:11:33.194996 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:11:33.195010 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:11:33.195023 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:11:33.195036 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 17 12:11:33.195050 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:11:33.195065 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:11:33.195079 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:11:33.195093 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:11:33.195108 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:11:33.195122 systemd[1]: Reached target machines.target - Containers. Jan 17 12:11:33.195134 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:11:33.195147 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:11:33.195159 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:11:33.195172 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:11:33.195186 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:11:33.195199 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:11:33.195211 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:11:33.195224 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:11:33.195236 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:11:33.195267 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:11:33.195281 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:11:33.195294 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:11:33.195308 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:11:33.195321 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:11:33.195333 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:11:33.195349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:11:33.195363 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:11:33.195376 kernel: fuse: init (API version 7.39) Jan 17 12:11:33.195388 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:11:33.195402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:11:33.195414 kernel: loop: module loaded Jan 17 12:11:33.195429 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:11:33.195442 systemd[1]: Stopped verity-setup.service. Jan 17 12:11:33.195473 systemd-journald[1094]: Collecting audit messages is disabled. Jan 17 12:11:33.195502 systemd-journald[1094]: Journal started Jan 17 12:11:33.195552 systemd-journald[1094]: Runtime Journal (/run/log/journal/53e8f721026c4b5392e9b3b43b01db91) is 8.0M, max 78.3M, 70.3M free. Jan 17 12:11:32.854687 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:11:32.874670 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:11:32.875129 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jan 17 12:11:33.205890 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:11:33.205953 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:11:33.208263 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:11:33.208910 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:11:33.209588 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:11:33.210178 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:11:33.210802 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:11:33.211398 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:11:33.212183 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:11:33.213613 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:11:33.214853 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:11:33.215706 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:11:33.216518 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:11:33.218261 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:11:33.219104 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:11:33.219237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:11:33.224117 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:11:33.224255 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:11:33.225009 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:11:33.225120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:11:33.225840 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:11:33.226571 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:11:33.227304 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:11:33.260261 kernel: ACPI: bus type drm_connector registered Jan 17 12:11:33.262030 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:11:33.262183 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:11:33.263397 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:11:33.270180 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:11:33.273248 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:11:33.273940 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:11:33.273984 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:11:33.275704 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:11:33.277749 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:11:33.282878 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:11:33.283635 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 17 12:11:33.286679 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:11:33.288686 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:11:33.289297 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:11:33.293804 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:11:33.295700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:11:33.296871 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:11:33.299661 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:11:33.301934 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:11:33.307080 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:11:33.307983 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:11:33.308685 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:11:33.309588 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:11:33.338794 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:11:33.340629 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:11:33.342351 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:11:33.350762 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:11:33.358566 kernel: loop0: detected capacity change from 0 to 210664 Jan 17 12:11:33.358680 systemd-journald[1094]: Time spent on flushing to /var/log/journal/53e8f721026c4b5392e9b3b43b01db91 is 35.543ms for 955 entries. Jan 17 12:11:33.358680 systemd-journald[1094]: System Journal (/var/log/journal/53e8f721026c4b5392e9b3b43b01db91) is 8.0M, max 584.8M, 576.8M free. Jan 17 12:11:33.449609 systemd-journald[1094]: Received client request to flush runtime journal. Jan 17 12:11:33.379627 udevadm[1139]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 12:11:33.382890 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:11:33.454470 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:11:33.463750 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:11:33.466760 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:11:33.480686 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:11:33.486679 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:11:33.493944 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:11:33.523572 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 12:11:33.536763 systemd-tmpfiles[1151]: ACLs are not supported, ignoring. Jan 17 12:11:33.537221 systemd-tmpfiles[1151]: ACLs are not supported, ignoring. Jan 17 12:11:33.548019 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
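The journal-flush entry above quotes 35.543 ms spent persisting 955 runtime entries; the per-entry cost follows directly from those two figures, with no assumptions beyond the numbers quoted:

```python
# Figures quoted from the systemd-journald flush message above.
flush_ms = 35.543
entries = 955

print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~37.2 us/entry
print(f"{entries / (flush_ms / 1000):,.0f} entries/s")  # ~26,869 entries/s
```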
Jan 17 12:11:33.591647 kernel: loop2: detected capacity change from 0 to 8 Jan 17 12:11:33.615559 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 12:11:33.712672 kernel: loop4: detected capacity change from 0 to 210664 Jan 17 12:11:33.751563 kernel: loop5: detected capacity change from 0 to 142488 Jan 17 12:11:33.822300 kernel: loop6: detected capacity change from 0 to 8 Jan 17 12:11:33.824593 kernel: loop7: detected capacity change from 0 to 140768 Jan 17 12:11:33.868062 (sd-merge)[1159]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 17 12:11:33.871634 (sd-merge)[1159]: Merged extensions into '/usr'. Jan 17 12:11:33.880314 systemd[1]: Reloading requested from client PID 1132 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:11:33.880344 systemd[1]: Reloading... Jan 17 12:11:33.986567 zram_generator::config[1188]: No configuration found. Jan 17 12:11:34.173037 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:11:34.232161 systemd[1]: Reloading finished in 351 ms. Jan 17 12:11:34.257207 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:11:34.264681 systemd[1]: Starting ensure-sysext.service... Jan 17 12:11:34.266334 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:11:34.292413 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:11:34.293248 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:11:34.294353 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:11:34.294741 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jan 17 12:11:34.294815 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jan 17 12:11:34.375019 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:11:34.375289 systemd[1]: Reloading... Jan 17 12:11:34.379497 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:11:34.379518 systemd-tmpfiles[1241]: Skipping /boot Jan 17 12:11:34.393962 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:11:34.393983 systemd-tmpfiles[1241]: Skipping /boot Jan 17 12:11:34.478711 zram_generator::config[1266]: No configuration found. Jan 17 12:11:34.636082 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:11:34.696343 systemd[1]: Reloading finished in 320 ms. Jan 17 12:11:34.715107 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:11:34.722139 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:11:34.735908 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:11:34.750705 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:11:34.753998 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
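The (sd-merge) entries above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-openstack' extension images onto /usr. A small sketch that resolves the /etc/extensions links Ignition created earlier; this is a simplification, since systemd-sysext also consults other extension directories besides /etc/extensions:

```python
import os

EXT_DIR = "/etc/extensions"  # where Ignition linked kubernetes.raw above

def list_sysexts(ext_dir: str = EXT_DIR) -> None:
    """Print each extension image name and the target it resolves to."""
    for name in sorted(os.listdir(ext_dir)):
        target = os.path.realpath(os.path.join(ext_dir, name))
        print(f"{name} -> {target}")

if __name__ == "__main__":
    list_sysexts()
```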
Jan 17 12:11:34.764060 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:11:34.767228 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:11:34.770796 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:11:34.779854 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:11:34.780055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:11:34.787833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:11:34.794073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:11:34.812911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:11:34.814652 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:11:34.814794 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:11:34.815896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:11:34.817591 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:11:34.823056 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:11:34.823625 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:11:34.824774 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:11:34.824920 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:11:34.831932 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:11:34.835458 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:11:34.835851 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:11:34.839159 systemd-udevd[1333]: Using default interface naming scheme 'v255'. Jan 17 12:11:34.843831 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:11:34.846929 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:11:34.853187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:11:34.854682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:11:34.863104 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:11:34.864314 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:11:34.866487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:11:34.866787 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:11:34.867734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:11:34.867853 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:11:34.868858 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 17 12:11:34.868987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:11:34.875506 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:11:34.875748 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:11:34.881930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:11:34.885695 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:11:34.893778 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:11:34.900768 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:11:34.901380 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:11:34.901563 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:11:34.902660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:11:34.902836 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:11:34.903823 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:11:34.903996 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:11:34.904897 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:11:34.905093 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:11:34.908362 systemd[1]: Finished ensure-sysext.service. Jan 17 12:11:34.913742 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:11:34.918860 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:11:34.924594 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:11:34.924808 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:11:34.925761 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:11:35.044499 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:11:35.120155 augenrules[1377]: No rules Jan 17 12:11:35.122682 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:11:35.126105 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:11:35.185762 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:11:35.186720 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:11:35.193243 systemd-resolved[1332]: Positive Trust Anchors: Jan 17 12:11:35.193265 systemd-resolved[1332]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:11:35.193308 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:11:35.217751 systemd-resolved[1332]: Using system hostname 'ci-4081-3-0-e-f0dad07f0f.novalocal'. Jan 17 12:11:35.219962 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:11:35.221632 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:11:35.238047 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:11:35.258505 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:11:35.325019 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:11:35.371564 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1396) Jan 17 12:11:35.386075 systemd-networkd[1394]: lo: Link UP Jan 17 12:11:35.386085 systemd-networkd[1394]: lo: Gained carrier Jan 17 12:11:35.389650 systemd-networkd[1394]: Enumeration completed Jan 17 12:11:35.390497 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:11:35.391837 systemd[1]: Reached target network.target - Network. Jan 17 12:11:35.393722 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:11:35.393731 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:11:35.398821 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:11:35.398860 systemd-networkd[1394]: eth0: Link UP Jan 17 12:11:35.398863 systemd-networkd[1394]: eth0: Gained carrier Jan 17 12:11:35.398876 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:11:35.401781 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:11:35.414716 systemd-networkd[1394]: eth0: DHCPv4 address 172.24.4.139/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 17 12:11:35.416177 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Jan 17 12:11:35.483179 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 12:11:35.488562 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 17 12:11:35.517611 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:11:35.489640 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:11:35.500765 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:11:35.520746 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
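The DHCPv4 lease line above (172.24.4.139/24, gateway 172.24.4.1, acquired from 172.24.4.1) can be sanity-checked with Python's standard ipaddress module, confirming the gateway sits inside the acquired prefix and is therefore reachable on-link:

```python
import ipaddress

# Values from the DHCPv4 lease line above.
iface = ipaddress.ip_interface("172.24.4.139/24")
gateway = ipaddress.ip_address("172.24.4.1")

print(iface.network)                    # 172.24.4.0/24
print(iface.network.broadcast_address)  # 172.24.4.255
print(gateway in iface.network)         # True: gateway is on-link
```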
Jan 17 12:11:35.531465 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:11:35.532298 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:11:35.532388 ldconfig[1127]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:11:35.547149 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:11:35.556887 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:11:35.565607 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:11:35.578566 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:11:35.586335 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 17 12:11:35.586399 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 17 12:11:35.591577 kernel: Console: switching to colour dummy device 80x25 Jan 17 12:11:35.592944 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 12:11:35.592988 kernel: [drm] features: -context_init Jan 17 12:11:35.593021 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:11:35.595821 kernel: [drm] number of scanouts: 1 Jan 17 12:11:35.596543 kernel: [drm] number of cap sets: 0 Jan 17 12:11:35.600558 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 17 12:11:35.611988 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 12:11:35.612084 kernel: Console: switching to colour frame buffer device 160x50 Jan 17 12:11:35.606770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:11:35.624500 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 12:11:35.637161 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:11:35.637514 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:11:35.652968 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:11:35.658078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:11:35.658337 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:11:35.667671 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:11:35.668145 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:11:35.671789 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:11:35.693727 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:11:35.733120 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:11:35.733341 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:11:35.739723 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:11:35.745746 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:11:35.758783 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:11:35.760148 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 17 12:11:35.760401 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:11:35.760568 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:11:35.760875 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:11:35.761061 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:11:35.761163 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:11:35.761249 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:11:35.761276 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:11:35.761347 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:11:35.762512 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:11:35.764763 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:11:35.778790 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:11:35.781021 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:11:35.784640 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:11:35.788258 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:11:35.793454 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:11:35.797309 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:11:35.797634 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:11:35.810901 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:11:35.821822 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:11:35.838730 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:11:35.851712 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:11:35.865261 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:11:35.867650 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:11:35.873516 jq[1451]: false Jan 17 12:11:35.879765 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:11:35.886397 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:11:35.901841 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:11:35.912732 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:11:35.927715 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:11:35.928871 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:11:35.929428 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 17 12:11:35.934417 extend-filesystems[1452]: Found loop4 Jan 17 12:11:35.934417 extend-filesystems[1452]: Found loop5 Jan 17 12:11:35.934417 extend-filesystems[1452]: Found loop6 Jan 17 12:11:35.934417 extend-filesystems[1452]: Found loop7 Jan 17 12:11:35.934417 extend-filesystems[1452]: Found vda Jan 17 12:11:35.934417 extend-filesystems[1452]: Found vda1 Jan 17 12:11:35.934417 extend-filesystems[1452]: Found vda2 Jan 17 12:11:35.934417 extend-filesystems[1452]: Found vda3 Jan 17 12:11:35.934417 extend-filesystems[1452]: Found usr Jan 17 12:11:35.934417 extend-filesystems[1452]: Found vda4 Jan 17 12:11:35.934417 extend-filesystems[1452]: Found vda6 Jan 17 12:11:35.934417 extend-filesystems[1452]: Found vda7 Jan 17 12:11:35.934417 extend-filesystems[1452]: Found vda9 Jan 17 12:11:35.934417 extend-filesystems[1452]: Checking size of /dev/vda9 Jan 17 12:11:36.034145 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 17 12:11:36.034180 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1400) Jan 17 12:11:35.987702 dbus-daemon[1448]: [system] SELinux support is enabled Jan 17 12:11:35.941034 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:11:36.034608 extend-filesystems[1452]: Resized partition /dev/vda9 Jan 17 12:11:36.041646 update_engine[1465]: I20250117 12:11:35.999373 1465 main.cc:92] Flatcar Update Engine starting Jan 17 12:11:36.041646 update_engine[1465]: I20250117 12:11:36.002365 1465 update_check_scheduler.cc:74] Next update check in 2m28s Jan 17 12:11:35.953666 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:11:36.045483 extend-filesystems[1473]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:11:36.054365 jq[1468]: true Jan 17 12:11:35.977970 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:11:35.978171 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:11:35.978507 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:11:35.978768 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:11:35.993232 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:11:36.019307 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:11:36.076689 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 17 12:11:36.019499 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:11:36.109772 jq[1476]: true Jan 17 12:11:36.109866 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:11:36.109866 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:11:36.109866 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 17 12:11:36.063072 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:11:36.131840 extend-filesystems[1452]: Resized filesystem in /dev/vda9 Jan 17 12:11:36.132305 tar[1475]: linux-amd64/helm Jan 17 12:11:36.065561 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:11:36.065590 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
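The extend-filesystems/resize2fs entries above grow /dev/vda9 online from 1617920 to 2014203 blocks; since the log marks them as 4k blocks, converting block counts to bytes gives the partition size before and after (plain arithmetic on the quoted figures):

```python
BLOCK = 4096  # "(4k) blocks" per the resize2fs output above

old_blocks, new_blocks = 1_617_920, 2_014_203

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")   # ~6.17 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")   # ~7.68 GiB
print(f"grown by {(new_blocks - old_blocks) * BLOCK / 2**20:.0f} MiB")  # ~1548 MiB
```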
Jan 17 12:11:36.067629 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 12:11:36.067650 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 12:11:36.077719 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 12:11:36.094331 (ntainerd)[1482]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 12:11:36.122280 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 12:11:36.122480 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 12:11:36.203235 systemd-logind[1464]: New seat seat0.
Jan 17 12:11:36.206006 systemd-logind[1464]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 17 12:11:36.206579 systemd-logind[1464]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 17 12:11:36.208630 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 12:11:36.259753 bash[1506]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:11:36.262567 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 12:11:36.278026 systemd[1]: Starting sshkeys.service...
Jan 17 12:11:36.307631 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 17 12:11:36.315927 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 17 12:11:36.343828 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 12:11:36.495871 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 12:11:36.528828 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 17 12:11:36.545850 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 17 12:11:36.578166 systemd[1]: issuegen.service: Deactivated successfully.
Jan 17 12:11:36.578425 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 17 12:11:36.587102 containerd[1482]: time="2025-01-17T12:11:36.586998731Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 12:11:36.590818 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 12:11:36.622008 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 17 12:11:36.623693 containerd[1482]: time="2025-01-17T12:11:36.623431450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:11:36.625347 containerd[1482]: time="2025-01-17T12:11:36.625312248Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:11:36.625424 containerd[1482]: time="2025-01-17T12:11:36.625407767Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 12:11:36.625512 containerd[1482]: time="2025-01-17T12:11:36.625494780Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 12:11:36.625886 containerd[1482]: time="2025-01-17T12:11:36.625865776Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 12:11:36.625957 containerd[1482]: time="2025-01-17T12:11:36.625942129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 12:11:36.626554 containerd[1482]: time="2025-01-17T12:11:36.626072103Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:11:36.626554 containerd[1482]: time="2025-01-17T12:11:36.626093323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:11:36.626554 containerd[1482]: time="2025-01-17T12:11:36.626276025Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:11:36.626554 containerd[1482]: time="2025-01-17T12:11:36.626294991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 12:11:36.626554 containerd[1482]: time="2025-01-17T12:11:36.626315800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:11:36.626554 containerd[1482]: time="2025-01-17T12:11:36.626328383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 12:11:36.626554 containerd[1482]: time="2025-01-17T12:11:36.626412040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:11:36.626894 containerd[1482]: time="2025-01-17T12:11:36.626872153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:11:36.628371 containerd[1482]: time="2025-01-17T12:11:36.628347881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:11:36.628440 containerd[1482]: time="2025-01-17T12:11:36.628425977Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 12:11:36.628862 containerd[1482]: time="2025-01-17T12:11:36.628644227Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 12:11:36.628862 containerd[1482]: time="2025-01-17T12:11:36.628712395Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 12:11:36.634298 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 17 12:11:36.644041 containerd[1482]: time="2025-01-17T12:11:36.641986781Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 12:11:36.644041 containerd[1482]: time="2025-01-17T12:11:36.642051151Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 12:11:36.644041 containerd[1482]: time="2025-01-17T12:11:36.642074255Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 12:11:36.644041 containerd[1482]: time="2025-01-17T12:11:36.642092619Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 12:11:36.644041 containerd[1482]: time="2025-01-17T12:11:36.642109361Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 12:11:36.644041 containerd[1482]: time="2025-01-17T12:11:36.642264411Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 12:11:36.644041 containerd[1482]: time="2025-01-17T12:11:36.642557471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 12:11:36.644041 containerd[1482]: time="2025-01-17T12:11:36.642665494Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 12:11:36.644041 containerd[1482]: time="2025-01-17T12:11:36.642684329Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 12:11:36.644041 containerd[1482]: time="2025-01-17T12:11:36.642698876Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 12:11:36.644041 containerd[1482]: time="2025-01-17T12:11:36.642713944Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 12:11:36.644041 containerd[1482]: time="2025-01-17T12:11:36.642728993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 12:11:36.644041 containerd[1482]: time="2025-01-17T12:11:36.642743640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 12:11:36.644041 containerd[1482]: time="2025-01-17T12:11:36.642759300Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 12:11:36.644435 containerd[1482]: time="2025-01-17T12:11:36.642780579Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 12:11:36.644435 containerd[1482]: time="2025-01-17T12:11:36.642800397Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 12:11:36.644435 containerd[1482]: time="2025-01-17T12:11:36.642820965Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 12:11:36.644435 containerd[1482]: time="2025-01-17T12:11:36.642834200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 12:11:36.644435 containerd[1482]: time="2025-01-17T12:11:36.642857424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.644435 containerd[1482]: time="2025-01-17T12:11:36.642873434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.644435 containerd[1482]: time="2025-01-17T12:11:36.642887039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.644435 containerd[1482]: time="2025-01-17T12:11:36.642904812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.644435 containerd[1482]: time="2025-01-17T12:11:36.642919179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.644435 containerd[1482]: time="2025-01-17T12:11:36.642938906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.644435 containerd[1482]: time="2025-01-17T12:11:36.642952522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.644435 containerd[1482]: time="2025-01-17T12:11:36.642966568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.644435 containerd[1482]: time="2025-01-17T12:11:36.642986305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.644435 containerd[1482]: time="2025-01-17T12:11:36.643002906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.650757 containerd[1482]: time="2025-01-17T12:11:36.643015821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.650757 containerd[1482]: time="2025-01-17T12:11:36.643029516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.650757 containerd[1482]: time="2025-01-17T12:11:36.643045737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.650757 containerd[1482]: time="2025-01-17T12:11:36.643065985Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 12:11:36.650757 containerd[1482]: time="2025-01-17T12:11:36.643092274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.650757 containerd[1482]: time="2025-01-17T12:11:36.643105809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.650757 containerd[1482]: time="2025-01-17T12:11:36.643120447Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 17 12:11:36.650757 containerd[1482]: time="2025-01-17T12:11:36.643167445Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 17 12:11:36.650757 containerd[1482]: time="2025-01-17T12:11:36.643187633Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 17 12:11:36.650757 containerd[1482]: time="2025-01-17T12:11:36.643200637Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 17 12:11:36.650757 containerd[1482]: time="2025-01-17T12:11:36.643214503Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 17 12:11:36.650757 containerd[1482]: time="2025-01-17T12:11:36.643226586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.650757 containerd[1482]: time="2025-01-17T12:11:36.643257203Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 17 12:11:36.650757 containerd[1482]: time="2025-01-17T12:11:36.643270879Z" level=info msg="NRI interface is disabled by configuration."
Jan 17 12:11:36.645148 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 17 12:11:36.651269 containerd[1482]: time="2025-01-17T12:11:36.643282110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 17 12:11:36.647068 systemd[1]: Reached target getty.target - Login Prompts.
Jan 17 12:11:36.651335 containerd[1482]: time="2025-01-17T12:11:36.646196135Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 17 12:11:36.651335 containerd[1482]: time="2025-01-17T12:11:36.646271326Z" level=info msg="Connect containerd service"
Jan 17 12:11:36.651335 containerd[1482]: time="2025-01-17T12:11:36.646309869Z" level=info msg="using legacy CRI server"
Jan 17 12:11:36.651335 containerd[1482]: time="2025-01-17T12:11:36.646318365Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 17 12:11:36.654459 containerd[1482]: time="2025-01-17T12:11:36.653351465Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 17 12:11:36.654459 containerd[1482]: time="2025-01-17T12:11:36.654165772Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 12:11:36.654798 containerd[1482]: time="2025-01-17T12:11:36.654760197Z" level=info msg="Start subscribing containerd event"
Jan 17 12:11:36.654891 containerd[1482]: time="2025-01-17T12:11:36.654876555Z" level=info msg="Start recovering state"
Jan 17 12:11:36.654983 containerd[1482]: time="2025-01-17T12:11:36.654938822Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 17 12:11:36.655057 containerd[1482]: time="2025-01-17T12:11:36.655033911Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 17 12:11:36.655131 containerd[1482]: time="2025-01-17T12:11:36.655115584Z" level=info msg="Start event monitor"
Jan 17 12:11:36.655213 containerd[1482]: time="2025-01-17T12:11:36.655199752Z" level=info msg="Start snapshots syncer"
Jan 17 12:11:36.655292 containerd[1482]: time="2025-01-17T12:11:36.655277818Z" level=info msg="Start cni network conf syncer for default"
Jan 17 12:11:36.655356 containerd[1482]: time="2025-01-17T12:11:36.655342499Z" level=info msg="Start streaming server"
Jan 17 12:11:36.655546 systemd[1]: Started containerd.service - containerd container runtime.
Jan 17 12:11:36.659507 containerd[1482]: time="2025-01-17T12:11:36.659473517Z" level=info msg="containerd successfully booted in 0.082380s"
Jan 17 12:11:36.879006 tar[1475]: linux-amd64/LICENSE
Jan 17 12:11:36.879417 tar[1475]: linux-amd64/README.md
Jan 17 12:11:36.893487 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 17 12:11:37.384756 systemd-networkd[1394]: eth0: Gained IPv6LL
Jan 17 12:11:37.386240 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection.
Jan 17 12:11:37.388905 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 12:11:37.397591 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 12:11:37.409157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:11:37.425285 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 12:11:37.469986 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 17 12:11:39.446858 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 17 12:11:39.457928 systemd[1]: Started sshd@0-172.24.4.139:22-172.24.4.1:46906.service - OpenSSH per-connection server daemon (172.24.4.1:46906).
Jan 17 12:11:39.936897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:11:39.947519 (kubelet)[1568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:11:40.833251 sshd[1561]: Accepted publickey for core from 172.24.4.1 port 46906 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:11:40.841108 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:11:40.868667 systemd-logind[1464]: New session 1 of user core.
Jan 17 12:11:40.871790 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 17 12:11:40.887296 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 17 12:11:40.913047 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 17 12:11:40.921898 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 17 12:11:40.946633 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 17 12:11:41.177666 systemd[1575]: Queued start job for default target default.target.
Jan 17 12:11:41.189455 systemd[1575]: Created slice app.slice - User Application Slice.
Jan 17 12:11:41.189605 systemd[1575]: Reached target paths.target - Paths.
Jan 17 12:11:41.189699 systemd[1575]: Reached target timers.target - Timers.
Jan 17 12:11:41.191141 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 17 12:11:41.214046 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 17 12:11:41.214315 systemd[1575]: Reached target sockets.target - Sockets.
Jan 17 12:11:41.214358 systemd[1575]: Reached target basic.target - Basic System.
Jan 17 12:11:41.214456 systemd[1575]: Reached target default.target - Main User Target.
Jan 17 12:11:41.214497 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 17 12:11:41.214568 systemd[1575]: Startup finished in 260ms.
Jan 17 12:11:41.224956 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 17 12:11:41.710027 login[1539]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 17 12:11:41.714854 login[1541]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 17 12:11:41.731970 systemd[1]: Started sshd@1-172.24.4.139:22-172.24.4.1:46920.service - OpenSSH per-connection server daemon (172.24.4.1:46920).
Jan 17 12:11:41.740630 systemd-logind[1464]: New session 2 of user core.
Jan 17 12:11:41.744712 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 17 12:11:41.753942 systemd-logind[1464]: New session 3 of user core.
Jan 17 12:11:41.759865 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 17 12:11:42.957396 coreos-metadata[1447]: Jan 17 12:11:42.956 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 12:11:43.009263 coreos-metadata[1447]: Jan 17 12:11:43.009 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 17 12:11:43.096662 sshd[1590]: Accepted publickey for core from 172.24.4.1 port 46920 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:11:43.100837 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:11:43.110505 systemd-logind[1464]: New session 4 of user core.
Jan 17 12:11:43.130486 systemd[1]: Started session-4.scope - Session 4 of User core.
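The "RSA SHA256:OP55..." string sshd logs for each accepted login is not the key itself but its fingerprint: the unpadded base64 of a SHA-256 over the wire-format public key. A sketch that reproduces the logged form from the authorized_keys file written above, using golang.org/x/crypto/ssh:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	raw, err := os.ReadFile("/home/core/.ssh/authorized_keys")
    	if err != nil {
    		panic(err)
    	}
    	// Parse the first key in the file.
    	key, _, _, _, err := ssh.ParseAuthorizedKey(raw)
    	if err != nil {
    		panic(err)
    	}
    	// Prints "SHA256:..." in exactly the format sshd logs.
    	fmt.Println(ssh.FingerprintSHA256(key))
    }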
Jan 17 12:11:43.277918 coreos-metadata[1447]: Jan 17 12:11:43.277 INFO Fetch successful
Jan 17 12:11:43.278786 coreos-metadata[1447]: Jan 17 12:11:43.278 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 17 12:11:43.292518 coreos-metadata[1447]: Jan 17 12:11:43.292 INFO Fetch successful
Jan 17 12:11:43.292518 coreos-metadata[1447]: Jan 17 12:11:43.292 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 17 12:11:43.310793 coreos-metadata[1447]: Jan 17 12:11:43.310 INFO Fetch successful
Jan 17 12:11:43.310793 coreos-metadata[1447]: Jan 17 12:11:43.310 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 17 12:11:43.324219 coreos-metadata[1447]: Jan 17 12:11:43.323 INFO Fetch successful
Jan 17 12:11:43.324627 coreos-metadata[1447]: Jan 17 12:11:43.324 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 17 12:11:43.334139 coreos-metadata[1447]: Jan 17 12:11:43.334 INFO Fetch successful
Jan 17 12:11:43.334139 coreos-metadata[1447]: Jan 17 12:11:43.334 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 17 12:11:43.347012 coreos-metadata[1447]: Jan 17 12:11:43.346 INFO Fetch successful
Jan 17 12:11:43.354145 kubelet[1568]: E0117 12:11:43.354028 1568 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:11:43.360015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:11:43.360369 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:11:43.361010 systemd[1]: kubelet.service: Consumed 2.426s CPU time.
Jan 17 12:11:43.376640 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 12:11:43.377224 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 12:11:43.409238 coreos-metadata[1514]: Jan 17 12:11:43.409 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 12:11:43.426727 coreos-metadata[1514]: Jan 17 12:11:43.426 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 17 12:11:43.437336 coreos-metadata[1514]: Jan 17 12:11:43.436 INFO Fetch successful
Jan 17 12:11:43.437336 coreos-metadata[1514]: Jan 17 12:11:43.436 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 17 12:11:43.447019 coreos-metadata[1514]: Jan 17 12:11:43.446 INFO Fetch successful
Jan 17 12:11:43.451364 unknown[1514]: wrote ssh authorized keys file for user: core
Jan 17 12:11:43.487715 update-ssh-keys[1628]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:11:43.488219 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 17 12:11:43.492968 systemd[1]: Finished sshkeys.service.
Jan 17 12:11:43.496657 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 17 12:11:43.496983 systemd[1]: Startup finished in 1.290s (kernel) + 16.880s (initrd) + 11.854s (userspace) = 30.025s.
Jan 17 12:11:43.661864 sshd[1590]: pam_unix(sshd:session): session closed for user core
Jan 17 12:11:43.683315 systemd[1]: sshd@1-172.24.4.139:22-172.24.4.1:46920.service: Deactivated successfully.
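coreos-metadata walks the link-local metadata service (169.254.169.254) endpoint by endpoint, logging "Attempt #N" before each request and "Fetch successful" after. A sketch of that fetch loop, assuming an illustrative retry count and delay (the agent's real retry policy is not visible in the log):

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // fetch makes sequential attempts against one metadata endpoint,
    // pausing briefly between failures, and returns the body on success.
    func fetch(url string) (string, error) {
    	var lastErr error
    	for attempt := 1; attempt <= 10; attempt++ {
    		resp, err := http.Get(url)
    		if err == nil && resp.StatusCode == http.StatusOK {
    			body, err := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			return string(body), err
    		}
    		if err == nil {
    			resp.Body.Close()
    			err = fmt.Errorf("status %s", resp.Status)
    		}
    		lastErr = err
    		time.Sleep(2 * time.Second)
    	}
    	return "", lastErr
    }

    func main() {
    	host, err := fetch("http://169.254.169.254/latest/meta-data/hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("hostname:", host)
    }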
Jan 17 12:11:43.687983 systemd[1]: session-4.scope: Deactivated successfully.
Jan 17 12:11:43.693922 systemd-logind[1464]: Session 4 logged out. Waiting for processes to exit.
Jan 17 12:11:43.702204 systemd[1]: Started sshd@2-172.24.4.139:22-172.24.4.1:40140.service - OpenSSH per-connection server daemon (172.24.4.1:40140).
Jan 17 12:11:43.705054 systemd-logind[1464]: Removed session 4.
Jan 17 12:11:45.031612 sshd[1635]: Accepted publickey for core from 172.24.4.1 port 40140 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:11:45.034423 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:11:45.045797 systemd-logind[1464]: New session 5 of user core.
Jan 17 12:11:45.055848 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 17 12:11:45.632360 sshd[1635]: pam_unix(sshd:session): session closed for user core
Jan 17 12:11:45.638465 systemd[1]: sshd@2-172.24.4.139:22-172.24.4.1:40140.service: Deactivated successfully.
Jan 17 12:11:45.642879 systemd[1]: session-5.scope: Deactivated successfully.
Jan 17 12:11:45.646396 systemd-logind[1464]: Session 5 logged out. Waiting for processes to exit.
Jan 17 12:11:45.648984 systemd-logind[1464]: Removed session 5.
Jan 17 12:11:53.552851 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 17 12:11:53.570999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:11:53.959849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:11:53.979131 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:11:54.077476 kubelet[1649]: E0117 12:11:54.077346 1649 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:11:54.085331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:11:54.085718 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:11:55.657041 systemd[1]: Started sshd@3-172.24.4.139:22-172.24.4.1:44138.service - OpenSSH per-connection server daemon (172.24.4.1:44138).
Jan 17 12:11:56.818118 sshd[1658]: Accepted publickey for core from 172.24.4.1 port 44138 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:11:56.821125 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:11:56.831203 systemd-logind[1464]: New session 6 of user core.
Jan 17 12:11:56.840890 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 12:11:57.445842 sshd[1658]: pam_unix(sshd:session): session closed for user core
Jan 17 12:11:57.457343 systemd[1]: sshd@3-172.24.4.139:22-172.24.4.1:44138.service: Deactivated successfully.
Jan 17 12:11:57.461080 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 12:11:57.465911 systemd-logind[1464]: Session 6 logged out. Waiting for processes to exit.
Jan 17 12:11:57.477336 systemd[1]: Started sshd@4-172.24.4.139:22-172.24.4.1:44144.service - OpenSSH per-connection server daemon (172.24.4.1:44144).
Jan 17 12:11:57.480383 systemd-logind[1464]: Removed session 6.
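Every kubelet crash in this log is the same failure: /var/lib/kubelet/config.yaml does not exist until cluster bootstrap writes it, so the process exits 1 and systemd schedules another restart, with the counter climbing from 1 upward. A sketch of the check that produces the logged error chain, not kubelet's actual loader:

    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    )

    // loadKubeletConfig mirrors the failure mode in the log: reading the
    // config file fails with ENOENT, the error is wrapped with the path,
    // and the process exits non-zero so systemd retries later.
    func loadKubeletConfig(path string) ([]byte, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return nil, fmt.Errorf("failed to load Kubelet config file %s, error: %w", path, err)
    	}
    	return data, nil
    }

    func main() {
    	_, err := loadKubeletConfig("/var/lib/kubelet/config.yaml")
    	if errors.Is(err, fs.ErrNotExist) {
    		fmt.Println("config not written yet; exiting 1 so systemd restarts us")
    		os.Exit(1)
    	}
    }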
Jan 17 12:11:58.817424 sshd[1665]: Accepted publickey for core from 172.24.4.1 port 44144 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:11:58.820333 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:11:58.830023 systemd-logind[1464]: New session 7 of user core.
Jan 17 12:11:58.840894 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 12:11:59.388671 sshd[1665]: pam_unix(sshd:session): session closed for user core
Jan 17 12:11:59.401297 systemd[1]: sshd@4-172.24.4.139:22-172.24.4.1:44144.service: Deactivated successfully.
Jan 17 12:11:59.404772 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 12:11:59.408793 systemd-logind[1464]: Session 7 logged out. Waiting for processes to exit.
Jan 17 12:11:59.416111 systemd[1]: Started sshd@5-172.24.4.139:22-172.24.4.1:44154.service - OpenSSH per-connection server daemon (172.24.4.1:44154).
Jan 17 12:11:59.418385 systemd-logind[1464]: Removed session 7.
Jan 17 12:12:00.751606 sshd[1672]: Accepted publickey for core from 172.24.4.1 port 44154 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:12:00.754029 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:00.768052 systemd-logind[1464]: New session 8 of user core.
Jan 17 12:12:00.783861 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 12:12:01.378777 sshd[1672]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:01.390037 systemd[1]: sshd@5-172.24.4.139:22-172.24.4.1:44154.service: Deactivated successfully.
Jan 17 12:12:01.395346 systemd[1]: session-8.scope: Deactivated successfully.
Jan 17 12:12:01.397273 systemd-logind[1464]: Session 8 logged out. Waiting for processes to exit.
Jan 17 12:12:01.410329 systemd[1]: Started sshd@6-172.24.4.139:22-172.24.4.1:44164.service - OpenSSH per-connection server daemon (172.24.4.1:44164).
Jan 17 12:12:01.413220 systemd-logind[1464]: Removed session 8.
Jan 17 12:12:02.696901 sshd[1679]: Accepted publickey for core from 172.24.4.1 port 44164 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:12:02.700087 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:02.712588 systemd-logind[1464]: New session 9 of user core.
Jan 17 12:12:02.717868 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 17 12:12:03.211681 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 17 12:12:03.212341 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:12:03.240152 sudo[1682]: pam_unix(sudo:session): session closed for user root
Jan 17 12:12:03.440882 sshd[1679]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:03.457363 systemd[1]: sshd@6-172.24.4.139:22-172.24.4.1:44164.service: Deactivated successfully.
Jan 17 12:12:03.461467 systemd[1]: session-9.scope: Deactivated successfully.
Jan 17 12:12:03.468226 systemd-logind[1464]: Session 9 logged out. Waiting for processes to exit.
Jan 17 12:12:03.477720 systemd[1]: Started sshd@7-172.24.4.139:22-172.24.4.1:54820.service - OpenSSH per-connection server daemon (172.24.4.1:54820).
Jan 17 12:12:03.480109 systemd-logind[1464]: Removed session 9.
Jan 17 12:12:04.302880 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 17 12:12:04.312915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:12:04.540893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:12:04.555384 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:12:04.576978 sshd[1687]: Accepted publickey for core from 172.24.4.1 port 54820 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:12:04.580198 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:04.593639 systemd-logind[1464]: New session 10 of user core.
Jan 17 12:12:04.598817 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 12:12:04.610878 kubelet[1697]: E0117 12:12:04.610833 1697 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:12:04.614599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:12:04.614922 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:12:05.089774 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 17 12:12:05.090415 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:12:05.098887 sudo[1707]: pam_unix(sudo:session): session closed for user root
Jan 17 12:12:05.112358 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 17 12:12:05.113157 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:12:05.138302 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 17 12:12:05.161594 auditctl[1710]: No rules
Jan 17 12:12:05.162607 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 17 12:12:05.163162 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 17 12:12:05.175384 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:12:05.244603 augenrules[1728]: No rules
Jan 17 12:12:05.246070 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 12:12:05.248987 sudo[1706]: pam_unix(sudo:session): session closed for user root
Jan 17 12:12:05.397913 sshd[1687]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:05.410616 systemd[1]: sshd@7-172.24.4.139:22-172.24.4.1:54820.service: Deactivated successfully.
Jan 17 12:12:05.415897 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 12:12:05.417850 systemd-logind[1464]: Session 10 logged out. Waiting for processes to exit.
Jan 17 12:12:05.424151 systemd[1]: Started sshd@8-172.24.4.139:22-172.24.4.1:54830.service - OpenSSH per-connection server daemon (172.24.4.1:54830).
Jan 17 12:12:05.427144 systemd-logind[1464]: Removed session 10.
Jan 17 12:12:06.442147 sshd[1736]: Accepted publickey for core from 172.24.4.1 port 54830 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:12:06.445341 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:06.454659 systemd-logind[1464]: New session 11 of user core.
Jan 17 12:12:06.463821 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 12:12:06.858944 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 17 12:12:06.859788 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:12:07.476956 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 17 12:12:07.499405 (dockerd)[1755]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 17 12:12:08.189706 systemd-timesyncd[1367]: Contacted time server 82.65.235.151:123 (2.flatcar.pool.ntp.org).
Jan 17 12:12:08.189780 systemd-timesyncd[1367]: Initial clock synchronization to Fri 2025-01-17 12:12:08.189368 UTC.
Jan 17 12:12:08.189864 systemd-resolved[1332]: Clock change detected. Flushing caches.
Jan 17 12:12:08.697197 dockerd[1755]: time="2025-01-17T12:12:08.696393277Z" level=info msg="Starting up"
Jan 17 12:12:08.828732 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1785976407-merged.mount: Deactivated successfully.
Jan 17 12:12:08.860594 systemd[1]: var-lib-docker-metacopy\x2dcheck1964151826-merged.mount: Deactivated successfully.
Jan 17 12:12:08.902479 dockerd[1755]: time="2025-01-17T12:12:08.902430419Z" level=info msg="Loading containers: start."
Jan 17 12:12:09.033315 kernel: Initializing XFRM netlink socket
Jan 17 12:12:09.163028 systemd-networkd[1394]: docker0: Link UP
Jan 17 12:12:09.187230 dockerd[1755]: time="2025-01-17T12:12:09.187146003Z" level=info msg="Loading containers: done."
Jan 17 12:12:09.233400 dockerd[1755]: time="2025-01-17T12:12:09.233317518Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 17 12:12:09.234377 dockerd[1755]: time="2025-01-17T12:12:09.233950004Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 17 12:12:09.234377 dockerd[1755]: time="2025-01-17T12:12:09.234197358Z" level=info msg="Daemon has completed initialization"
Jan 17 12:12:09.304339 dockerd[1755]: time="2025-01-17T12:12:09.303984925Z" level=info msg="API listen on /run/docker.sock"
Jan 17 12:12:09.304757 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 17 12:12:09.828517 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck240672466-merged.mount: Deactivated successfully.
Jan 17 12:12:11.056955 containerd[1482]: time="2025-01-17T12:12:11.056770534Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 17 12:12:11.827647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272770438.mount: Deactivated successfully.
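dockerd finishes initialization and announces "API listen on /run/docker.sock". A sketch of a client round-trip against that socket using the Docker Go SDK; with no DOCKER_HOST set, the environment-based defaults resolve to the same local socket:

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/docker/docker/client"
    )

    func main() {
    	// FromEnv falls back to the default local socket when DOCKER_HOST
    	// is unset; version negotiation matches the daemon's API level.
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		panic(err)
    	}
    	defer cli.Close()

    	ping, err := cli.Ping(context.Background())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("docker API version:", ping.APIVersion)
    }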
Jan 17 12:12:13.906035 containerd[1482]: time="2025-01-17T12:12:13.905868827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:13.908456 containerd[1482]: time="2025-01-17T12:12:13.908419290Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020"
Jan 17 12:12:13.911907 containerd[1482]: time="2025-01-17T12:12:13.911474460Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:13.915169 containerd[1482]: time="2025-01-17T12:12:13.915133823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:13.916864 containerd[1482]: time="2025-01-17T12:12:13.916836046Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.860007564s"
Jan 17 12:12:13.917447 containerd[1482]: time="2025-01-17T12:12:13.917426483Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\""
Jan 17 12:12:13.939455 containerd[1482]: time="2025-01-17T12:12:13.939425162Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 17 12:12:15.341380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 17 12:12:15.349862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:12:15.507443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:12:15.508410 (kubelet)[1967]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:12:15.573827 kubelet[1967]: E0117 12:12:15.573568 1967 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:12:15.576953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:12:15.577110 systemd[1]: kubelet.service: Failed with result 'exit-code'.
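The ImageCreate / "stop pulling" / "Pulled image" sequence above is one PullImage call as seen from inside containerd. A sketch of issuing the same pull directly with the containerd Go client, assuming the kubelet's "k8s.io" namespace and a daemon at the socket path logged earlier:

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// Images pulled on behalf of kubelet live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.30.9", containerd.WithPullUnpack)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("pulled", img.Name())
    }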
Jan 17 12:12:16.313800 containerd[1482]: time="2025-01-17T12:12:16.313735340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:16.314966 containerd[1482]: time="2025-01-17T12:12:16.314924590Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753"
Jan 17 12:12:16.316282 containerd[1482]: time="2025-01-17T12:12:16.316223727Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:16.321276 containerd[1482]: time="2025-01-17T12:12:16.320770334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:16.321916 containerd[1482]: time="2025-01-17T12:12:16.321540138Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.381909902s"
Jan 17 12:12:16.321916 containerd[1482]: time="2025-01-17T12:12:16.321568140Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\""
Jan 17 12:12:16.345256 containerd[1482]: time="2025-01-17T12:12:16.345214209Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 17 12:12:18.740097 containerd[1482]: time="2025-01-17T12:12:18.740044813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:18.741998 containerd[1482]: time="2025-01-17T12:12:18.741954314Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072"
Jan 17 12:12:18.742483 containerd[1482]: time="2025-01-17T12:12:18.742438492Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:18.745985 containerd[1482]: time="2025-01-17T12:12:18.745938797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:18.747627 containerd[1482]: time="2025-01-17T12:12:18.747045833Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 2.40163738s"
Jan 17 12:12:18.747627 containerd[1482]: time="2025-01-17T12:12:18.747086960Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\""
Jan 17 12:12:18.770304 containerd[1482]: time="2025-01-17T12:12:18.770268898Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 17 12:12:20.180560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540941359.mount: Deactivated successfully.
Jan 17 12:12:20.971942 containerd[1482]: time="2025-01-17T12:12:20.971823774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:20.973737 containerd[1482]: time="2025-01-17T12:12:20.973516679Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345"
Jan 17 12:12:20.974931 containerd[1482]: time="2025-01-17T12:12:20.974892299Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:20.977823 containerd[1482]: time="2025-01-17T12:12:20.977764636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:20.978578 containerd[1482]: time="2025-01-17T12:12:20.978524721Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.208221199s"
Jan 17 12:12:20.978637 containerd[1482]: time="2025-01-17T12:12:20.978577901Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\""
Jan 17 12:12:21.004995 containerd[1482]: time="2025-01-17T12:12:21.004945764Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 17 12:12:21.635697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4045310393.mount: Deactivated successfully.
Jan 17 12:12:22.022531 update_engine[1465]: I20250117 12:12:22.022363 1465 update_attempter.cc:509] Updating boot flags...
Jan 17 12:12:22.071770 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2020)
Jan 17 12:12:22.139432 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2021)
Jan 17 12:12:22.222411 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2021)
Jan 17 12:12:22.970476 containerd[1482]: time="2025-01-17T12:12:22.970394803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:22.971733 containerd[1482]: time="2025-01-17T12:12:22.971655598Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jan 17 12:12:22.972770 containerd[1482]: time="2025-01-17T12:12:22.972724042Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:22.977089 containerd[1482]: time="2025-01-17T12:12:22.977061026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:22.978527 containerd[1482]: time="2025-01-17T12:12:22.978453607Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.973462148s"
Jan 17 12:12:22.978590 containerd[1482]: time="2025-01-17T12:12:22.978530742Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 17 12:12:23.004644 containerd[1482]: time="2025-01-17T12:12:23.004599774Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 17 12:12:23.786509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119315051.mount: Deactivated successfully.
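The pull records include both the transferred size and the elapsed time, so effective throughput falls out by division, roughly 9 to 13 MB/s for the larger images here. A small calculation over three of the pulls logged above:

    package main

    import "fmt"

    // Back-of-envelope pull throughput from the sizes and durations in
    // the log (sizes are the byte counts containerd reported).
    func main() {
    	pulls := []struct {
    		name  string
    		bytes float64
    		secs  float64
    	}{
    		{"kube-apiserver v1.30.9", 32673812, 2.860},
    		{"kube-proxy v1.30.9", 29057356, 2.208},
    		{"coredns v1.11.1", 18182961, 1.973},
    	}
    	for _, p := range pulls {
    		fmt.Printf("%-24s %.1f MB/s\n", p.name, p.bytes/p.secs/1e6)
    	}
    }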
Jan 17 12:12:23.793675 containerd[1482]: time="2025-01-17T12:12:23.793580297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:23.795649 containerd[1482]: time="2025-01-17T12:12:23.795559589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jan 17 12:12:23.797600 containerd[1482]: time="2025-01-17T12:12:23.797409840Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:23.802794 containerd[1482]: time="2025-01-17T12:12:23.802726281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:23.805838 containerd[1482]: time="2025-01-17T12:12:23.805446001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 800.78952ms"
Jan 17 12:12:23.805838 containerd[1482]: time="2025-01-17T12:12:23.805524028Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 17 12:12:23.848542 containerd[1482]: time="2025-01-17T12:12:23.848459132Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 17 12:12:24.546638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3293164681.mount: Deactivated successfully.
Jan 17 12:12:25.590854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 17 12:12:25.599443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:12:25.761501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:12:25.769804 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:12:25.827577 kubelet[2121]: E0117 12:12:25.827528 2121 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:12:25.830147 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:12:25.830331 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:12:27.771352 containerd[1482]: time="2025-01-17T12:12:27.769589111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:27.772624 containerd[1482]: time="2025-01-17T12:12:27.771799496Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579"
Jan 17 12:12:27.777297 containerd[1482]: time="2025-01-17T12:12:27.776490885Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:27.803769 containerd[1482]: time="2025-01-17T12:12:27.803616219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:12:27.806789 containerd[1482]: time="2025-01-17T12:12:27.806408425Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.957865956s"
Jan 17 12:12:27.806789 containerd[1482]: time="2025-01-17T12:12:27.806510898Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jan 17 12:12:31.798050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:12:31.811807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:12:31.853032 systemd[1]: Reloading requested from client PID 2201 ('systemctl') (unit session-11.scope)...
Jan 17 12:12:31.853094 systemd[1]: Reloading...
Jan 17 12:12:31.967508 zram_generator::config[2240]: No configuration found.
Jan 17 12:12:32.116602 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:12:32.198757 systemd[1]: Reloading finished in 344 ms.
Jan 17 12:12:32.248385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:12:32.248701 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 12:12:32.253615 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:12:32.253969 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 12:12:32.254501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:12:32.260504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:12:32.371839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:12:32.388735 (kubelet)[2315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 12:12:32.692691 kubelet[2315]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:12:32.692691 kubelet[2315]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 17 12:12:32.692691 kubelet[2315]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:12:32.692691 kubelet[2315]: I0117 12:12:32.687725 2315 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 12:12:33.981971 kubelet[2315]: I0117 12:12:33.981888 2315 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 17 12:12:33.981971 kubelet[2315]: I0117 12:12:33.981937 2315 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 12:12:33.982945 kubelet[2315]: I0117 12:12:33.982268 2315 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 17 12:12:34.200486 kubelet[2315]: I0117 12:12:34.200404 2315 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:12:34.203039 kubelet[2315]: E0117 12:12:34.202787 2315 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:34.234864 kubelet[2315]: I0117 12:12:34.234160 2315 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 12:12:34.239892 kubelet[2315]: I0117 12:12:34.239789 2315 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 12:12:34.240370 kubelet[2315]: I0117 12:12:34.239878 2315 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-e-f0dad07f0f.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 17 12:12:34.240579 kubelet[2315]: I0117 12:12:34.240390 2315 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 12:12:34.240579 kubelet[2315]: I0117 12:12:34.240416 2315 container_manager_linux.go:301] "Creating device plugin manager"
Jan 17 12:12:34.240831 kubelet[2315]: I0117 12:12:34.240661 2315 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:12:34.242923 kubelet[2315]: I0117 12:12:34.242844 2315 kubelet.go:400] "Attempting to sync node with API server"
Jan 17 12:12:34.243551 kubelet[2315]: I0117 12:12:34.243094 2315 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 12:12:34.243551 kubelet[2315]: I0117 12:12:34.243158 2315 kubelet.go:312] "Adding apiserver pod source"
Jan 17 12:12:34.243551 kubelet[2315]: I0117 12:12:34.243188 2315 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 12:12:34.252958 kubelet[2315]: W0117 12:12:34.252814 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-f0dad07f0f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:34.253924 kubelet[2315]: E0117 12:12:34.253224 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-f0dad07f0f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:34.254053 kubelet[2315]: I0117 12:12:34.253939 2315 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 12:12:34.260301 kubelet[2315]: I0117 12:12:34.258674 2315 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 12:12:34.260301 kubelet[2315]: W0117 12:12:34.258801 2315 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 12:12:34.260301 kubelet[2315]: I0117 12:12:34.260083 2315 server.go:1264] "Started kubelet"
Jan 17 12:12:34.260559 kubelet[2315]: W0117 12:12:34.260408 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.139:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:34.260559 kubelet[2315]: E0117 12:12:34.260507 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.139:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:34.264048 kubelet[2315]: I0117 12:12:34.263984 2315 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 12:12:34.272750 kubelet[2315]: I0117 12:12:34.272664 2315 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 12:12:34.277169 kubelet[2315]: I0117 12:12:34.277119 2315 server.go:455] "Adding debug handlers to kubelet server"
Jan 17 12:12:34.281834 kubelet[2315]: I0117 12:12:34.273487 2315 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 12:12:34.282491 kubelet[2315]: I0117 12:12:34.282452 2315 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 12:12:34.286013 kubelet[2315]: E0117 12:12:34.285969 2315 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 12:12:34.289011 kubelet[2315]: I0117 12:12:34.288962 2315 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 17 12:12:34.289908 kubelet[2315]: I0117 12:12:34.289872 2315 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 17 12:12:34.290029 kubelet[2315]: I0117 12:12:34.289954 2315 reconciler.go:26] "Reconciler: start to sync state"
Jan 17 12:12:34.291314 kubelet[2315]: W0117 12:12:34.291164 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:34.291314 kubelet[2315]: E0117 12:12:34.291257 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:34.292818 kubelet[2315]: I0117 12:12:34.291627 2315 factory.go:221] Registration of the systemd container factory successfully
Jan 17 12:12:34.292818 kubelet[2315]: E0117 12:12:34.291323 2315 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.139:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.139:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-e-f0dad07f0f.novalocal.181b79c1bb554ab1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e-f0dad07f0f.novalocal,UID:ci-4081-3-0-e-f0dad07f0f.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e-f0dad07f0f.novalocal,},FirstTimestamp:2025-01-17 12:12:34.260036273 +0000 UTC m=+1.867586629,LastTimestamp:2025-01-17 12:12:34.260036273 +0000 UTC m=+1.867586629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e-f0dad07f0f.novalocal,}"
Jan 17 12:12:34.292818 kubelet[2315]: I0117 12:12:34.291770 2315 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 12:12:34.292818 kubelet[2315]: E0117 12:12:34.291782 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-f0dad07f0f.novalocal?timeout=10s\": dial tcp 172.24.4.139:6443: connect: connection refused" interval="200ms"
Jan 17 12:12:34.294574 kubelet[2315]: I0117 12:12:34.294491 2315 factory.go:221] Registration of the containerd container factory successfully
Jan 17 12:12:34.311106 kubelet[2315]: I0117 12:12:34.311044 2315 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 12:12:34.312343 kubelet[2315]: I0117 12:12:34.312317 2315 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Jan 17 12:12:34.312430 kubelet[2315]: I0117 12:12:34.312367 2315 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:12:34.312430 kubelet[2315]: I0117 12:12:34.312397 2315 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:12:34.312492 kubelet[2315]: E0117 12:12:34.312459 2315 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:12:34.319821 kubelet[2315]: W0117 12:12:34.319288 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 17 12:12:34.319821 kubelet[2315]: E0117 12:12:34.319373 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 17 12:12:34.321953 kubelet[2315]: I0117 12:12:34.321934 2315 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:12:34.322047 kubelet[2315]: I0117 12:12:34.322035 2315 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:12:34.322151 kubelet[2315]: I0117 12:12:34.322141 2315 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:12:34.328325 kubelet[2315]: I0117 12:12:34.328299 2315 policy_none.go:49] "None policy: Start" Jan 17 12:12:34.329038 kubelet[2315]: I0117 12:12:34.328955 2315 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:12:34.329038 kubelet[2315]: I0117 12:12:34.329010 2315 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:12:34.335615 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:12:34.351139 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 12:12:34.361661 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 17 12:12:34.363219 kubelet[2315]: I0117 12:12:34.363187 2315 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 12:12:34.365062 kubelet[2315]: I0117 12:12:34.364073 2315 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 12:12:34.365062 kubelet[2315]: I0117 12:12:34.364215 2315 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 12:12:34.365911 kubelet[2315]: E0117 12:12:34.365891 2315 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" not found"
Jan 17 12:12:34.391859 kubelet[2315]: I0117 12:12:34.391484 2315 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.392125 kubelet[2315]: E0117 12:12:34.392101 2315 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.139:6443/api/v1/nodes\": dial tcp 172.24.4.139:6443: connect: connection refused" node="ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.412734 kubelet[2315]: I0117 12:12:34.412645 2315 topology_manager.go:215] "Topology Admit Handler" podUID="914d736ec9051aa2d54dbb1c3ba555e0" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.416164 kubelet[2315]: I0117 12:12:34.415917 2315 topology_manager.go:215] "Topology Admit Handler" podUID="199ec99043c7c01d22e26b52d7925b09" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.418557 kubelet[2315]: I0117 12:12:34.417591 2315 topology_manager.go:215] "Topology Admit Handler" podUID="61ec9d271c3f1b410b0148fe1a8e5291" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.436668 systemd[1]: Created slice kubepods-burstable-pod199ec99043c7c01d22e26b52d7925b09.slice - libcontainer container kubepods-burstable-pod199ec99043c7c01d22e26b52d7925b09.slice.
Jan 17 12:12:34.459170 systemd[1]: Created slice kubepods-burstable-pod914d736ec9051aa2d54dbb1c3ba555e0.slice - libcontainer container kubepods-burstable-pod914d736ec9051aa2d54dbb1c3ba555e0.slice.
Jan 17 12:12:34.475477 systemd[1]: Created slice kubepods-burstable-pod61ec9d271c3f1b410b0148fe1a8e5291.slice - libcontainer container kubepods-burstable-pod61ec9d271c3f1b410b0148fe1a8e5291.slice.
Jan 17 12:12:34.491143 kubelet[2315]: I0117 12:12:34.490792 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/199ec99043c7c01d22e26b52d7925b09-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"199ec99043c7c01d22e26b52d7925b09\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.491143 kubelet[2315]: I0117 12:12:34.490831 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61ec9d271c3f1b410b0148fe1a8e5291-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"61ec9d271c3f1b410b0148fe1a8e5291\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.491143 kubelet[2315]: I0117 12:12:34.490857 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/61ec9d271c3f1b410b0148fe1a8e5291-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"61ec9d271c3f1b410b0148fe1a8e5291\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.491143 kubelet[2315]: I0117 12:12:34.490878 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61ec9d271c3f1b410b0148fe1a8e5291-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"61ec9d271c3f1b410b0148fe1a8e5291\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.491356 kubelet[2315]: I0117 12:12:34.490897 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61ec9d271c3f1b410b0148fe1a8e5291-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"61ec9d271c3f1b410b0148fe1a8e5291\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.491356 kubelet[2315]: I0117 12:12:34.490917 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61ec9d271c3f1b410b0148fe1a8e5291-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"61ec9d271c3f1b410b0148fe1a8e5291\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.491356 kubelet[2315]: I0117 12:12:34.490940 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/914d736ec9051aa2d54dbb1c3ba555e0-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"914d736ec9051aa2d54dbb1c3ba555e0\") " pod="kube-system/kube-scheduler-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.491356 kubelet[2315]: I0117 12:12:34.490967 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/199ec99043c7c01d22e26b52d7925b09-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"199ec99043c7c01d22e26b52d7925b09\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.491471 kubelet[2315]: I0117 12:12:34.490986 2315 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/199ec99043c7c01d22e26b52d7925b09-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"199ec99043c7c01d22e26b52d7925b09\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.492481 kubelet[2315]: E0117 12:12:34.492201 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-f0dad07f0f.novalocal?timeout=10s\": dial tcp 172.24.4.139:6443: connect: connection refused" interval="400ms"
Jan 17 12:12:34.596971 kubelet[2315]: I0117 12:12:34.596665 2315 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.597638 kubelet[2315]: E0117 12:12:34.597557 2315 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.139:6443/api/v1/nodes\": dial tcp 172.24.4.139:6443: connect: connection refused" node="ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:34.757864 containerd[1482]: time="2025-01-17T12:12:34.757665346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal,Uid:199ec99043c7c01d22e26b52d7925b09,Namespace:kube-system,Attempt:0,}"
Jan 17 12:12:34.764655 containerd[1482]: time="2025-01-17T12:12:34.764006158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-e-f0dad07f0f.novalocal,Uid:914d736ec9051aa2d54dbb1c3ba555e0,Namespace:kube-system,Attempt:0,}"
Jan 17 12:12:34.786063 containerd[1482]: time="2025-01-17T12:12:34.785525768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal,Uid:61ec9d271c3f1b410b0148fe1a8e5291,Namespace:kube-system,Attempt:0,}"
Jan 17 12:12:34.893828 kubelet[2315]: E0117 12:12:34.893737 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-f0dad07f0f.novalocal?timeout=10s\": dial tcp 172.24.4.139:6443: connect: connection refused" interval="800ms"
Jan 17 12:12:35.001650 kubelet[2315]: I0117 12:12:35.001533 2315 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:35.002860 kubelet[2315]: E0117 12:12:35.002319 2315 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.139:6443/api/v1/nodes\": dial tcp 172.24.4.139:6443: connect: connection refused" node="ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:35.267395 kubelet[2315]: W0117 12:12:35.267192 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-f0dad07f0f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:35.267861 kubelet[2315]: E0117 12:12:35.267762 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-f0dad07f0f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:35.371370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446458596.mount: Deactivated successfully.
Jan 17 12:12:35.380086 containerd[1482]: time="2025-01-17T12:12:35.379844823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:12:35.385018 containerd[1482]: time="2025-01-17T12:12:35.384767857Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 17 12:12:35.386072 kubelet[2315]: W0117 12:12:35.385920 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:35.386464 kubelet[2315]: E0117 12:12:35.386096 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:35.387485 containerd[1482]: time="2025-01-17T12:12:35.386849511Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:12:35.389365 containerd[1482]: time="2025-01-17T12:12:35.389164002Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:12:35.391063 containerd[1482]: time="2025-01-17T12:12:35.390824536Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 12:12:35.393977 containerd[1482]: time="2025-01-17T12:12:35.393892840Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:12:35.396124 containerd[1482]: time="2025-01-17T12:12:35.396003118Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 12:12:35.400657 containerd[1482]: time="2025-01-17T12:12:35.400501395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:12:35.407997 containerd[1482]: time="2025-01-17T12:12:35.407026573Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 648.279027ms"
Jan 17 12:12:35.411371 containerd[1482]: time="2025-01-17T12:12:35.411309525Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 625.628165ms"
Jan 17 12:12:35.417679 containerd[1482]: time="2025-01-17T12:12:35.417582530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 653.376878ms"
Jan 17 12:12:35.527211 kubelet[2315]: W0117 12:12:35.526855 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.139:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:35.527211 kubelet[2315]: E0117 12:12:35.526990 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.139:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:35.647787 containerd[1482]: time="2025-01-17T12:12:35.647127161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:12:35.647787 containerd[1482]: time="2025-01-17T12:12:35.647377110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:12:35.647787 containerd[1482]: time="2025-01-17T12:12:35.647410032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:12:35.647787 containerd[1482]: time="2025-01-17T12:12:35.647647597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:12:35.649173 containerd[1482]: time="2025-01-17T12:12:35.649065116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:12:35.651509 containerd[1482]: time="2025-01-17T12:12:35.651092759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:12:35.651509 containerd[1482]: time="2025-01-17T12:12:35.651144776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:12:35.651509 containerd[1482]: time="2025-01-17T12:12:35.651281333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:12:35.653283 kubelet[2315]: W0117 12:12:35.652951 2315 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:35.653283 kubelet[2315]: E0117 12:12:35.653000 2315 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused
Jan 17 12:12:35.657382 containerd[1482]: time="2025-01-17T12:12:35.650781916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:12:35.657382 containerd[1482]: time="2025-01-17T12:12:35.657321681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:12:35.657382 containerd[1482]: time="2025-01-17T12:12:35.657351257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:12:35.657648 containerd[1482]: time="2025-01-17T12:12:35.657595555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:12:35.689494 systemd[1]: Started cri-containerd-48059d6f30be35fe5b417c60f93fe623cda12e4b92c852642f9970eb209920ef.scope - libcontainer container 48059d6f30be35fe5b417c60f93fe623cda12e4b92c852642f9970eb209920ef.
Jan 17 12:12:35.692125 systemd[1]: Started cri-containerd-ac874230e49755af4d7532ca514faf1207eecd3d23017be4a2c8b8a5752f0d7f.scope - libcontainer container ac874230e49755af4d7532ca514faf1207eecd3d23017be4a2c8b8a5752f0d7f.
Jan 17 12:12:35.694669 systemd[1]: Started cri-containerd-fd24649ff40b7edb69af52fd8889c1cf17eeb2c67706d153f17cf678a137c854.scope - libcontainer container fd24649ff40b7edb69af52fd8889c1cf17eeb2c67706d153f17cf678a137c854.
Jan 17 12:12:35.696086 kubelet[2315]: E0117 12:12:35.695715 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-f0dad07f0f.novalocal?timeout=10s\": dial tcp 172.24.4.139:6443: connect: connection refused" interval="1.6s"
Jan 17 12:12:35.771139 containerd[1482]: time="2025-01-17T12:12:35.770980899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal,Uid:61ec9d271c3f1b410b0148fe1a8e5291,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd24649ff40b7edb69af52fd8889c1cf17eeb2c67706d153f17cf678a137c854\""
Jan 17 12:12:35.781716 containerd[1482]: time="2025-01-17T12:12:35.780346524Z" level=info msg="CreateContainer within sandbox \"fd24649ff40b7edb69af52fd8889c1cf17eeb2c67706d153f17cf678a137c854\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 17 12:12:35.785020 containerd[1482]: time="2025-01-17T12:12:35.784978011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal,Uid:199ec99043c7c01d22e26b52d7925b09,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac874230e49755af4d7532ca514faf1207eecd3d23017be4a2c8b8a5752f0d7f\""
Jan 17 12:12:35.790775 containerd[1482]: time="2025-01-17T12:12:35.790599824Z" level=info msg="CreateContainer within sandbox \"ac874230e49755af4d7532ca514faf1207eecd3d23017be4a2c8b8a5752f0d7f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 17 12:12:35.799933 containerd[1482]: time="2025-01-17T12:12:35.799805990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-e-f0dad07f0f.novalocal,Uid:914d736ec9051aa2d54dbb1c3ba555e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"48059d6f30be35fe5b417c60f93fe623cda12e4b92c852642f9970eb209920ef\""
Jan 17 12:12:35.805207 containerd[1482]: time="2025-01-17T12:12:35.804617384Z" level=info msg="CreateContainer within sandbox \"48059d6f30be35fe5b417c60f93fe623cda12e4b92c852642f9970eb209920ef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 17 12:12:35.805334 kubelet[2315]: I0117 12:12:35.804778 2315 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:35.805334 kubelet[2315]: E0117 12:12:35.805163 2315 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.139:6443/api/v1/nodes\": dial tcp 172.24.4.139:6443: connect: connection refused" node="ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:35.825500 containerd[1482]: time="2025-01-17T12:12:35.825455807Z" level=info msg="CreateContainer within sandbox \"fd24649ff40b7edb69af52fd8889c1cf17eeb2c67706d153f17cf678a137c854\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e96146744772f7bd59eedc309472cafe282bbbad960427bc38d5f798beb871b\""
Jan 17 12:12:35.826537 containerd[1482]: time="2025-01-17T12:12:35.826350394Z" level=info msg="StartContainer for \"6e96146744772f7bd59eedc309472cafe282bbbad960427bc38d5f798beb871b\""
Jan 17 12:12:35.850028 containerd[1482]: time="2025-01-17T12:12:35.849855709Z" level=info msg="CreateContainer within sandbox \"ac874230e49755af4d7532ca514faf1207eecd3d23017be4a2c8b8a5752f0d7f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8e9fb0df88141bbb0680df84fb365d69fedb1f71cde100676d7aaab467967856\""
Jan 17 12:12:35.851254 containerd[1482]: time="2025-01-17T12:12:35.851214397Z" level=info msg="StartContainer for \"8e9fb0df88141bbb0680df84fb365d69fedb1f71cde100676d7aaab467967856\""
Jan 17 12:12:35.853826 containerd[1482]: time="2025-01-17T12:12:35.853706501Z" level=info msg="CreateContainer within sandbox \"48059d6f30be35fe5b417c60f93fe623cda12e4b92c852642f9970eb209920ef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ff68c20e82caff3651baf25e7d907a793d8a42e2ba5d4b28f29891d99f8e2cd4\""
Jan 17 12:12:35.854576 containerd[1482]: time="2025-01-17T12:12:35.854446098Z" level=info msg="StartContainer for \"ff68c20e82caff3651baf25e7d907a793d8a42e2ba5d4b28f29891d99f8e2cd4\""
Jan 17 12:12:35.863586 systemd[1]: Started cri-containerd-6e96146744772f7bd59eedc309472cafe282bbbad960427bc38d5f798beb871b.scope - libcontainer container 6e96146744772f7bd59eedc309472cafe282bbbad960427bc38d5f798beb871b.
Jan 17 12:12:35.908646 systemd[1]: Started cri-containerd-8e9fb0df88141bbb0680df84fb365d69fedb1f71cde100676d7aaab467967856.scope - libcontainer container 8e9fb0df88141bbb0680df84fb365d69fedb1f71cde100676d7aaab467967856.
Jan 17 12:12:35.910644 systemd[1]: Started cri-containerd-ff68c20e82caff3651baf25e7d907a793d8a42e2ba5d4b28f29891d99f8e2cd4.scope - libcontainer container ff68c20e82caff3651baf25e7d907a793d8a42e2ba5d4b28f29891d99f8e2cd4.
Jan 17 12:12:35.937856 containerd[1482]: time="2025-01-17T12:12:35.937789408Z" level=info msg="StartContainer for \"6e96146744772f7bd59eedc309472cafe282bbbad960427bc38d5f798beb871b\" returns successfully"
Jan 17 12:12:35.984421 containerd[1482]: time="2025-01-17T12:12:35.984282817Z" level=info msg="StartContainer for \"8e9fb0df88141bbb0680df84fb365d69fedb1f71cde100676d7aaab467967856\" returns successfully"
Jan 17 12:12:36.006823 containerd[1482]: time="2025-01-17T12:12:36.006765202Z" level=info msg="StartContainer for \"ff68c20e82caff3651baf25e7d907a793d8a42e2ba5d4b28f29891d99f8e2cd4\" returns successfully"
Jan 17 12:12:37.413298 kubelet[2315]: I0117 12:12:37.412712 2315 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:37.974666 kubelet[2315]: E0117 12:12:37.974513 2315 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-e-f0dad07f0f.novalocal\" not found" node="ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:38.156757 kubelet[2315]: I0117 12:12:38.156716 2315 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:38.247287 kubelet[2315]: I0117 12:12:38.246298 2315 apiserver.go:52] "Watching apiserver"
Jan 17 12:12:38.290339 kubelet[2315]: I0117 12:12:38.290296 2315 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 17 12:12:38.378697 kubelet[2315]: E0117 12:12:38.378161 2315 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:40.631212 systemd[1]: Reloading requested from client PID 2592 ('systemctl') (unit session-11.scope)...
Jan 17 12:12:40.632689 systemd[1]: Reloading...
Jan 17 12:12:40.763418 zram_generator::config[2637]: No configuration found.
Jan 17 12:12:40.913495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:12:41.021289 systemd[1]: Reloading finished in 387 ms.
Jan 17 12:12:41.066315 kubelet[2315]: I0117 12:12:41.066149 2315 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:12:41.068323 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:12:41.078575 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 12:12:41.078790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:12:41.078849 systemd[1]: kubelet.service: Consumed 2.043s CPU time, 116.3M memory peak, 0B memory swap peak.
Jan 17 12:12:41.085563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:12:41.324546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:12:41.328301 (kubelet)[2695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 12:12:41.433269 kubelet[2695]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:12:41.433269 kubelet[2695]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 17 12:12:41.433269 kubelet[2695]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:12:41.434064 kubelet[2695]: I0117 12:12:41.433325 2695 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 12:12:41.440622 kubelet[2695]: I0117 12:12:41.440562 2695 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 17 12:12:41.440622 kubelet[2695]: I0117 12:12:41.440593 2695 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 12:12:41.440945 kubelet[2695]: I0117 12:12:41.440897 2695 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 17 12:12:41.444030 kubelet[2695]: I0117 12:12:41.442840 2695 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 17 12:12:41.450780 kubelet[2695]: I0117 12:12:41.444901 2695 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:12:41.464060 kubelet[2695]: I0117 12:12:41.464016 2695 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 12:12:41.464290 kubelet[2695]: I0117 12:12:41.464254 2695 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 12:12:41.464684 kubelet[2695]: I0117 12:12:41.464291 2695 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-e-f0dad07f0f.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 17 12:12:41.464797 kubelet[2695]: I0117 12:12:41.464697 2695 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 12:12:41.464797 kubelet[2695]: I0117 12:12:41.464715 2695 container_manager_linux.go:301] "Creating device plugin manager"
Jan 17 12:12:41.464797 kubelet[2695]: I0117 12:12:41.464755 2695 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:12:41.464944 kubelet[2695]: I0117 12:12:41.464872 2695 kubelet.go:400] "Attempting to sync node with API server"
Jan 17 12:12:41.464944 kubelet[2695]: I0117 12:12:41.464891 2695 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 12:12:41.464944 kubelet[2695]: I0117 12:12:41.464917 2695 kubelet.go:312] "Adding apiserver pod source"
Jan 17 12:12:41.466596 kubelet[2695]: I0117 12:12:41.464949 2695 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 12:12:41.468490 kubelet[2695]: I0117 12:12:41.468472 2695 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 12:12:41.468777 kubelet[2695]: I0117 12:12:41.468763 2695 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 12:12:41.469295 kubelet[2695]: I0117 12:12:41.469282 2695 server.go:1264] "Started kubelet"
Jan 17 12:12:41.472129 kubelet[2695]: I0117 12:12:41.472096 2695 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 12:12:41.476216 kubelet[2695]: I0117 12:12:41.476063 2695 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 12:12:41.478665 kubelet[2695]: I0117 12:12:41.478602 2695 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 12:12:41.479317 kubelet[2695]: I0117 12:12:41.479290 2695 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 12:12:41.483200 kubelet[2695]: I0117 12:12:41.483175 2695 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 17 12:12:41.488771 kubelet[2695]: I0117 12:12:41.488741 2695 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 17 12:12:41.488988 kubelet[2695]: I0117 12:12:41.488969 2695 reconciler.go:26] "Reconciler: start to sync state"
Jan 17 12:12:41.493122 kubelet[2695]: I0117 12:12:41.493088 2695 server.go:455] "Adding debug handlers to kubelet server"
Jan 17 12:12:41.494301 kubelet[2695]: I0117 12:12:41.494087 2695 factory.go:221] Registration of the systemd container factory successfully
Jan 17 12:12:41.494489 kubelet[2695]: I0117 12:12:41.494467 2695 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 12:12:41.509563 kubelet[2695]: I0117 12:12:41.509524 2695 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 12:12:41.510956 kubelet[2695]: I0117 12:12:41.510942 2695 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 12:12:41.511043 kubelet[2695]: I0117 12:12:41.511033 2695 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 17 12:12:41.511113 kubelet[2695]: I0117 12:12:41.511104 2695 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 17 12:12:41.511209 kubelet[2695]: E0117 12:12:41.511191 2695 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 12:12:41.513772 kubelet[2695]: I0117 12:12:41.513571 2695 factory.go:221] Registration of the containerd container factory successfully
Jan 17 12:12:41.518470 kubelet[2695]: E0117 12:12:41.518450 2695 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 12:12:41.565373 kubelet[2695]: I0117 12:12:41.565338 2695 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 17 12:12:41.565575 kubelet[2695]: I0117 12:12:41.565563 2695 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 17 12:12:41.565668 kubelet[2695]: I0117 12:12:41.565659 2695 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:12:41.565974 kubelet[2695]: I0117 12:12:41.565940 2695 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 17 12:12:41.566428 kubelet[2695]: I0117 12:12:41.566282 2695 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 17 12:12:41.566428 kubelet[2695]: I0117 12:12:41.566314 2695 policy_none.go:49] "None policy: Start"
Jan 17 12:12:41.567657 kubelet[2695]: I0117 12:12:41.567266 2695 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 17 12:12:41.567657 kubelet[2695]: I0117 12:12:41.567291 2695 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 12:12:41.567657 kubelet[2695]: I0117 12:12:41.567409 2695 state_mem.go:75] "Updated machine memory state"
Jan 17 12:12:41.574089 kubelet[2695]: I0117 12:12:41.574067 2695 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 12:12:41.574661 kubelet[2695]: I0117 12:12:41.574630 2695 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 12:12:41.575230 kubelet[2695]: I0117 12:12:41.574812 2695 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 12:12:41.596589 kubelet[2695]: I0117 12:12:41.596554 2695 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:41.609923 sudo[2727]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 17 12:12:41.610607 sudo[2727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 17 12:12:41.612051 kubelet[2695]: I0117 12:12:41.611664 2695 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:41.612051 kubelet[2695]: I0117 12:12:41.611726 2695 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:41.612188 kubelet[2695]: I0117 12:12:41.612163 2695 topology_manager.go:215] "Topology Admit Handler" podUID="914d736ec9051aa2d54dbb1c3ba555e0" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:41.612680 kubelet[2695]: I0117 12:12:41.612665 2695 topology_manager.go:215] "Topology Admit Handler" podUID="199ec99043c7c01d22e26b52d7925b09" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:41.612816 kubelet[2695]: I0117 12:12:41.612803 2695 topology_manager.go:215] "Topology Admit Handler" podUID="61ec9d271c3f1b410b0148fe1a8e5291" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:41.621911 kubelet[2695]: W0117 12:12:41.621794 2695 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 17 12:12:41.622905 kubelet[2695]: W0117 12:12:41.622768 2695 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 17 12:12:41.625461 kubelet[2695]: W0117 12:12:41.625408 2695 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 17 12:12:41.690654 kubelet[2695]: I0117 12:12:41.690614 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/199ec99043c7c01d22e26b52d7925b09-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"199ec99043c7c01d22e26b52d7925b09\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:41.691136 kubelet[2695]: I0117 12:12:41.690965 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/199ec99043c7c01d22e26b52d7925b09-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"199ec99043c7c01d22e26b52d7925b09\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:41.691136 kubelet[2695]: I0117 12:12:41.691005 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/199ec99043c7c01d22e26b52d7925b09-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"199ec99043c7c01d22e26b52d7925b09\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:41.691136 kubelet[2695]: I0117 12:12:41.691079 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/61ec9d271c3f1b410b0148fe1a8e5291-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"61ec9d271c3f1b410b0148fe1a8e5291\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:41.691136 kubelet[2695]: I0117 12:12:41.691110 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61ec9d271c3f1b410b0148fe1a8e5291-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"61ec9d271c3f1b410b0148fe1a8e5291\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:41.691661 kubelet[2695]: I0117 12:12:41.691438 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/914d736ec9051aa2d54dbb1c3ba555e0-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"914d736ec9051aa2d54dbb1c3ba555e0\") " pod="kube-system/kube-scheduler-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:41.691661 kubelet[2695]: I0117 12:12:41.691466 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61ec9d271c3f1b410b0148fe1a8e5291-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"61ec9d271c3f1b410b0148fe1a8e5291\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:41.691661 kubelet[2695]: I0117 12:12:41.691603 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61ec9d271c3f1b410b0148fe1a8e5291-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"61ec9d271c3f1b410b0148fe1a8e5291\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:41.691661 kubelet[2695]: I0117 12:12:41.691627 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61ec9d271c3f1b410b0148fe1a8e5291-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal\" (UID: \"61ec9d271c3f1b410b0148fe1a8e5291\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:42.163480 sudo[2727]: pam_unix(sudo:session): session closed for user root
Jan 17 12:12:42.466892 kubelet[2695]: I0117 12:12:42.466770 2695 apiserver.go:52] "Watching apiserver"
Jan 17 12:12:42.489579 kubelet[2695]: I0117 12:12:42.489546 2695 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 17 12:12:42.557187 kubelet[2695]: W0117 12:12:42.557153 2695 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 17 12:12:42.559389 kubelet[2695]: E0117 12:12:42.559358 2695 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-0-e-f0dad07f0f.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-e-f0dad07f0f.novalocal"
Jan 17 12:12:42.590925 kubelet[2695]: I0117 12:12:42.590849 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-e-f0dad07f0f.novalocal" podStartSLOduration=1.590828327 podStartE2EDuration="1.590828327s" podCreationTimestamp="2025-01-17 12:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:12:42.577285871 +0000 UTC m=+1.243890871" watchObservedRunningTime="2025-01-17 12:12:42.590828327 +0000 UTC m=+1.257433296"
Jan 17 12:12:42.603126 kubelet[2695]: I0117 12:12:42.603057 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-e-f0dad07f0f.novalocal" podStartSLOduration=1.603036152 podStartE2EDuration="1.603036152s" podCreationTimestamp="2025-01-17 12:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:12:42.591075493 +0000 UTC m=+1.257680462" watchObservedRunningTime="2025-01-17 12:12:42.603036152 +0000 UTC m=+1.269641131"
Jan 17 12:12:42.613105 kubelet[2695]: I0117 12:12:42.613046 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-e-f0dad07f0f.novalocal" podStartSLOduration=1.613027615 podStartE2EDuration="1.613027615s" podCreationTimestamp="2025-01-17 12:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:12:42.603324485 +0000 UTC m=+1.269929454" watchObservedRunningTime="2025-01-17 12:12:42.613027615 +0000 UTC m=+1.279632594"
Jan 17 12:12:44.677196 sudo[1739]: pam_unix(sudo:session): session closed for user root
Jan 17 12:12:44.960071 sshd[1736]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:44.967929 systemd-logind[1464]: Session 11 logged out. Waiting for processes to exit.
Jan 17 12:12:44.968136 systemd[1]: sshd@8-172.24.4.139:22-172.24.4.1:54830.service: Deactivated successfully.
Jan 17 12:12:44.971514 systemd[1]: session-11.scope: Deactivated successfully.
Jan 17 12:12:44.971915 systemd[1]: session-11.scope: Consumed 7.585s CPU time, 193.5M memory peak, 0B memory swap peak.
Jan 17 12:12:44.976139 systemd-logind[1464]: Removed session 11.
Jan 17 12:12:55.692492 kubelet[2695]: I0117 12:12:55.692442 2695 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 17 12:12:55.693718 containerd[1482]: time="2025-01-17T12:12:55.693618333Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 17 12:12:55.694293 kubelet[2695]: I0117 12:12:55.693937 2695 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 17 12:12:56.475407 kubelet[2695]: I0117 12:12:56.475334 2695 topology_manager.go:215] "Topology Admit Handler" podUID="7695dff5-c6b1-4196-ac5e-f33666b77f54" podNamespace="kube-system" podName="kube-proxy-wqpbw"
Jan 17 12:12:56.506434 systemd[1]: Created slice kubepods-besteffort-pod7695dff5_c6b1_4196_ac5e_f33666b77f54.slice - libcontainer container kubepods-besteffort-pod7695dff5_c6b1_4196_ac5e_f33666b77f54.slice.
Jan 17 12:12:56.518643 kubelet[2695]: I0117 12:12:56.517819 2695 topology_manager.go:215] "Topology Admit Handler" podUID="b0453c5e-fa49-4d52-b94e-340e3fb505d0" podNamespace="kube-system" podName="cilium-qlnmc"
Jan 17 12:12:56.527529 systemd[1]: Created slice kubepods-burstable-podb0453c5e_fa49_4d52_b94e_340e3fb505d0.slice - libcontainer container kubepods-burstable-podb0453c5e_fa49_4d52_b94e_340e3fb505d0.slice.
Jan 17 12:12:56.593621 kubelet[2695]: I0117 12:12:56.593552 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z47jr\" (UniqueName: \"kubernetes.io/projected/b0453c5e-fa49-4d52-b94e-340e3fb505d0-kube-api-access-z47jr\") pod \"cilium-qlnmc\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " pod="kube-system/cilium-qlnmc" Jan 17 12:12:56.593621 kubelet[2695]: I0117 12:12:56.593611 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7695dff5-c6b1-4196-ac5e-f33666b77f54-lib-modules\") pod \"kube-proxy-wqpbw\" (UID: \"7695dff5-c6b1-4196-ac5e-f33666b77f54\") " pod="kube-system/kube-proxy-wqpbw" Jan 17 12:12:56.593799 kubelet[2695]: I0117 12:12:56.593636 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjl9z\" (UniqueName: \"kubernetes.io/projected/7695dff5-c6b1-4196-ac5e-f33666b77f54-kube-api-access-vjl9z\") pod \"kube-proxy-wqpbw\" (UID: \"7695dff5-c6b1-4196-ac5e-f33666b77f54\") " pod="kube-system/kube-proxy-wqpbw" Jan 17 12:12:56.593799 kubelet[2695]: I0117 12:12:56.593657 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-bpf-maps\") pod \"cilium-qlnmc\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " pod="kube-system/cilium-qlnmc" Jan 17 12:12:56.593799 kubelet[2695]: I0117 12:12:56.593678 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cilium-cgroup\") pod \"cilium-qlnmc\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " pod="kube-system/cilium-qlnmc" Jan 17 12:12:56.593799 kubelet[2695]: I0117 12:12:56.593704 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7695dff5-c6b1-4196-ac5e-f33666b77f54-xtables-lock\") pod \"kube-proxy-wqpbw\" (UID: \"7695dff5-c6b1-4196-ac5e-f33666b77f54\") " pod="kube-system/kube-proxy-wqpbw" Jan 17 12:12:56.593799 kubelet[2695]: I0117 12:12:56.593722 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cni-path\") pod \"cilium-qlnmc\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " pod="kube-system/cilium-qlnmc" Jan 17 12:12:56.593799 kubelet[2695]: I0117 12:12:56.593741 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-host-proc-sys-kernel\") pod \"cilium-qlnmc\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " pod="kube-system/cilium-qlnmc" Jan 17 12:12:56.594524 kubelet[2695]: I0117 12:12:56.593760 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0453c5e-fa49-4d52-b94e-340e3fb505d0-clustermesh-secrets\") pod \"cilium-qlnmc\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " pod="kube-system/cilium-qlnmc" Jan 17 12:12:56.594524 kubelet[2695]: I0117 12:12:56.593780 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cilium-config-path\") pod \"cilium-qlnmc\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " pod="kube-system/cilium-qlnmc" Jan 17 12:12:56.594524 kubelet[2695]: I0117 12:12:56.593821 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0453c5e-fa49-4d52-b94e-340e3fb505d0-hubble-tls\") pod \"cilium-qlnmc\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " pod="kube-system/cilium-qlnmc" Jan 17 12:12:56.594524 kubelet[2695]: I0117 12:12:56.593844 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-host-proc-sys-net\") pod \"cilium-qlnmc\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " pod="kube-system/cilium-qlnmc" Jan 17 12:12:56.594524 kubelet[2695]: I0117 12:12:56.593867 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-lib-modules\") pod \"cilium-qlnmc\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " pod="kube-system/cilium-qlnmc" Jan 17 12:12:56.594524 kubelet[2695]: I0117 12:12:56.593889 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cilium-run\") pod \"cilium-qlnmc\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " pod="kube-system/cilium-qlnmc" Jan 17 12:12:56.594675 kubelet[2695]: I0117 12:12:56.593910 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-hostproc\") pod \"cilium-qlnmc\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " pod="kube-system/cilium-qlnmc" Jan 17 12:12:56.594675 kubelet[2695]: I0117 12:12:56.593928 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-etc-cni-netd\") pod \"cilium-qlnmc\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " pod="kube-system/cilium-qlnmc" Jan 17 12:12:56.594675 kubelet[2695]: I0117 12:12:56.593949 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7695dff5-c6b1-4196-ac5e-f33666b77f54-kube-proxy\") pod \"kube-proxy-wqpbw\" (UID: \"7695dff5-c6b1-4196-ac5e-f33666b77f54\") " pod="kube-system/kube-proxy-wqpbw" Jan 17 12:12:56.594675 kubelet[2695]: I0117 12:12:56.593974 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-xtables-lock\") pod \"cilium-qlnmc\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " pod="kube-system/cilium-qlnmc" Jan 17 12:12:56.666633 kubelet[2695]: I0117 12:12:56.666545 2695 topology_manager.go:215] "Topology Admit Handler" podUID="eccf5032-d2ef-4407-a870-5047d0da5d97" podNamespace="kube-system" podName="cilium-operator-599987898-xs5rk" Jan 17 12:12:56.678068 systemd[1]: Created slice kubepods-besteffort-podeccf5032_d2ef_4407_a870_5047d0da5d97.slice - 
libcontainer container kubepods-besteffort-podeccf5032_d2ef_4407_a870_5047d0da5d97.slice. Jan 17 12:12:56.795253 kubelet[2695]: I0117 12:12:56.794807 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eccf5032-d2ef-4407-a870-5047d0da5d97-cilium-config-path\") pod \"cilium-operator-599987898-xs5rk\" (UID: \"eccf5032-d2ef-4407-a870-5047d0da5d97\") " pod="kube-system/cilium-operator-599987898-xs5rk" Jan 17 12:12:56.795253 kubelet[2695]: I0117 12:12:56.794857 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hg29\" (UniqueName: \"kubernetes.io/projected/eccf5032-d2ef-4407-a870-5047d0da5d97-kube-api-access-9hg29\") pod \"cilium-operator-599987898-xs5rk\" (UID: \"eccf5032-d2ef-4407-a870-5047d0da5d97\") " pod="kube-system/cilium-operator-599987898-xs5rk" Jan 17 12:12:56.819740 containerd[1482]: time="2025-01-17T12:12:56.819700556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wqpbw,Uid:7695dff5-c6b1-4196-ac5e-f33666b77f54,Namespace:kube-system,Attempt:0,}" Jan 17 12:12:56.836785 containerd[1482]: time="2025-01-17T12:12:56.836709830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qlnmc,Uid:b0453c5e-fa49-4d52-b94e-340e3fb505d0,Namespace:kube-system,Attempt:0,}" Jan 17 12:12:56.868331 containerd[1482]: time="2025-01-17T12:12:56.868192930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:12:56.868331 containerd[1482]: time="2025-01-17T12:12:56.868289701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:12:56.869976 containerd[1482]: time="2025-01-17T12:12:56.869360312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:12:56.870148 containerd[1482]: time="2025-01-17T12:12:56.870102376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:12:56.881077 containerd[1482]: time="2025-01-17T12:12:56.880639645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:12:56.881077 containerd[1482]: time="2025-01-17T12:12:56.880713604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:12:56.881077 containerd[1482]: time="2025-01-17T12:12:56.880733602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:12:56.881077 containerd[1482]: time="2025-01-17T12:12:56.880825816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:12:56.899595 systemd[1]: Started cri-containerd-f5229d87ad23b0f196505192989ef117e49c5b2bbf87d7d67e36c5c9d9453cea.scope - libcontainer container f5229d87ad23b0f196505192989ef117e49c5b2bbf87d7d67e36c5c9d9453cea. Jan 17 12:12:56.909816 systemd[1]: Started cri-containerd-295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117.scope - libcontainer container 295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117. 
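The reconciler lines above are one journal entry per volume; grouped by their pod= field they are easier to compare (kube-proxy-wqpbw mounts three host paths plus its kube-proxy config map and service-account token, cilium-qlnmc roughly a dozen host paths plus clustermesh secrets, config map, hubble TLS and token). A minimal parsing sketch, assuming this journal excerpt is saved one entry per line as journal.txt (a hypothetical filename):

import re
from collections import defaultdict

# Group the kubelet's VerifyControllerAttachedVolume entries by pod so each
# pod's volume set can be read at a glance.
pattern = re.compile(r'started for volume \\?"(?P<vol>[^"\\]+)\\?".*?pod="(?P<pod>[^"]+)"')

volumes = defaultdict(list)
with open("journal.txt") as fh:
    for line in fh:
        if (m := pattern.search(line)):
            volumes[m.group("pod")].append(m.group("vol"))

for pod, vols in volumes.items():
    print(f"{pod}: {len(vols)} volumes -> {', '.join(sorted(vols))}")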
Jan 17 12:12:56.945616 containerd[1482]: time="2025-01-17T12:12:56.945001965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wqpbw,Uid:7695dff5-c6b1-4196-ac5e-f33666b77f54,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5229d87ad23b0f196505192989ef117e49c5b2bbf87d7d67e36c5c9d9453cea\"" Jan 17 12:12:56.949523 containerd[1482]: time="2025-01-17T12:12:56.949471890Z" level=info msg="CreateContainer within sandbox \"f5229d87ad23b0f196505192989ef117e49c5b2bbf87d7d67e36c5c9d9453cea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:12:56.954123 containerd[1482]: time="2025-01-17T12:12:56.954084553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qlnmc,Uid:b0453c5e-fa49-4d52-b94e-340e3fb505d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117\"" Jan 17 12:12:56.956348 containerd[1482]: time="2025-01-17T12:12:56.956165010Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 12:12:56.982058 containerd[1482]: time="2025-01-17T12:12:56.981999389Z" level=info msg="CreateContainer within sandbox \"f5229d87ad23b0f196505192989ef117e49c5b2bbf87d7d67e36c5c9d9453cea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a0272a8c5b90ea50cbe795cb006c0f5ae971f3eb1698e9346c33385e7272a813\"" Jan 17 12:12:56.984855 containerd[1482]: time="2025-01-17T12:12:56.983073266Z" level=info msg="StartContainer for \"a0272a8c5b90ea50cbe795cb006c0f5ae971f3eb1698e9346c33385e7272a813\"" Jan 17 12:12:56.985595 containerd[1482]: time="2025-01-17T12:12:56.985545350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xs5rk,Uid:eccf5032-d2ef-4407-a870-5047d0da5d97,Namespace:kube-system,Attempt:0,}" Jan 17 12:12:57.018902 systemd[1]: Started cri-containerd-a0272a8c5b90ea50cbe795cb006c0f5ae971f3eb1698e9346c33385e7272a813.scope - libcontainer container a0272a8c5b90ea50cbe795cb006c0f5ae971f3eb1698e9346c33385e7272a813. Jan 17 12:12:57.026255 containerd[1482]: time="2025-01-17T12:12:57.026087718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:12:57.027291 containerd[1482]: time="2025-01-17T12:12:57.026209146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:12:57.027291 containerd[1482]: time="2025-01-17T12:12:57.026351954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:12:57.028390 containerd[1482]: time="2025-01-17T12:12:57.028277230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:12:57.054858 systemd[1]: Started cri-containerd-c681f5a5e68ef73d70d837b809846b7c018be6858f8d9453318449d9838e1ea6.scope - libcontainer container c681f5a5e68ef73d70d837b809846b7c018be6858f8d9453318449d9838e1ea6. 
Jan 17 12:12:57.077666 containerd[1482]: time="2025-01-17T12:12:57.077445505Z" level=info msg="StartContainer for \"a0272a8c5b90ea50cbe795cb006c0f5ae971f3eb1698e9346c33385e7272a813\" returns successfully" Jan 17 12:12:57.131476 containerd[1482]: time="2025-01-17T12:12:57.131410929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xs5rk,Uid:eccf5032-d2ef-4407-a870-5047d0da5d97,Namespace:kube-system,Attempt:0,} returns sandbox id \"c681f5a5e68ef73d70d837b809846b7c018be6858f8d9453318449d9838e1ea6\"" Jan 17 12:12:57.622281 kubelet[2695]: I0117 12:12:57.622094 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wqpbw" podStartSLOduration=1.622037587 podStartE2EDuration="1.622037587s" podCreationTimestamp="2025-01-17 12:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:12:57.621787267 +0000 UTC m=+16.288392296" watchObservedRunningTime="2025-01-17 12:12:57.622037587 +0000 UTC m=+16.288642606" Jan 17 12:13:04.111658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1048000772.mount: Deactivated successfully. Jan 17 12:13:07.396568 containerd[1482]: time="2025-01-17T12:13:07.396446626Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:07.399417 containerd[1482]: time="2025-01-17T12:13:07.399210533Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735295" Jan 17 12:13:07.400518 containerd[1482]: time="2025-01-17T12:13:07.400421877Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:07.405936 containerd[1482]: time="2025-01-17T12:13:07.405866595Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.44964508s" Jan 17 12:13:07.406391 containerd[1482]: time="2025-01-17T12:13:07.406165586Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 12:13:07.410033 containerd[1482]: time="2025-01-17T12:13:07.408947939Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 12:13:07.413051 containerd[1482]: time="2025-01-17T12:13:07.412762818Z" level=info msg="CreateContainer within sandbox \"295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:13:07.449354 containerd[1482]: time="2025-01-17T12:13:07.449287867Z" level=info msg="CreateContainer within sandbox \"295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d\"" Jan 17 12:13:07.450616 containerd[1482]: time="2025-01-17T12:13:07.450401157Z" level=info msg="StartContainer for \"607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d\"" Jan 17 12:13:07.515695 systemd[1]: Started cri-containerd-607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d.scope - libcontainer container 607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d. Jan 17 12:13:07.552681 containerd[1482]: time="2025-01-17T12:13:07.552618346Z" level=info msg="StartContainer for \"607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d\" returns successfully" Jan 17 12:13:07.564556 systemd[1]: cri-containerd-607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d.scope: Deactivated successfully. Jan 17 12:13:08.307957 containerd[1482]: time="2025-01-17T12:13:08.307746441Z" level=info msg="shim disconnected" id=607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d namespace=k8s.io Jan 17 12:13:08.307957 containerd[1482]: time="2025-01-17T12:13:08.307883618Z" level=warning msg="cleaning up after shim disconnected" id=607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d namespace=k8s.io Jan 17 12:13:08.307957 containerd[1482]: time="2025-01-17T12:13:08.307921240Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:13:08.439006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d-rootfs.mount: Deactivated successfully. Jan 17 12:13:08.638816 containerd[1482]: time="2025-01-17T12:13:08.638700783Z" level=info msg="CreateContainer within sandbox \"295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:13:08.685905 containerd[1482]: time="2025-01-17T12:13:08.685637309Z" level=info msg="CreateContainer within sandbox \"295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9\"" Jan 17 12:13:08.688316 containerd[1482]: time="2025-01-17T12:13:08.687218877Z" level=info msg="StartContainer for \"058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9\"" Jan 17 12:13:08.737397 systemd[1]: Started cri-containerd-058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9.scope - libcontainer container 058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9. Jan 17 12:13:08.766361 containerd[1482]: time="2025-01-17T12:13:08.766311530Z" level=info msg="StartContainer for \"058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9\" returns successfully" Jan 17 12:13:08.779727 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:13:08.780337 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:13:08.780626 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:13:08.786811 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:13:08.787092 systemd[1]: cri-containerd-058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9.scope: Deactivated successfully. Jan 17 12:13:08.808266 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 12:13:08.825331 containerd[1482]: time="2025-01-17T12:13:08.825256252Z" level=info msg="shim disconnected" id=058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9 namespace=k8s.io Jan 17 12:13:08.825331 containerd[1482]: time="2025-01-17T12:13:08.825319431Z" level=warning msg="cleaning up after shim disconnected" id=058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9 namespace=k8s.io Jan 17 12:13:08.825331 containerd[1482]: time="2025-01-17T12:13:08.825333036Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:13:09.438284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9-rootfs.mount: Deactivated successfully. Jan 17 12:13:09.640739 containerd[1482]: time="2025-01-17T12:13:09.639054630Z" level=info msg="CreateContainer within sandbox \"295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:13:09.681919 containerd[1482]: time="2025-01-17T12:13:09.681847766Z" level=info msg="CreateContainer within sandbox \"295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c\"" Jan 17 12:13:09.684297 containerd[1482]: time="2025-01-17T12:13:09.682447001Z" level=info msg="StartContainer for \"a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c\"" Jan 17 12:13:09.728755 systemd[1]: Started cri-containerd-a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c.scope - libcontainer container a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c. Jan 17 12:13:09.767742 containerd[1482]: time="2025-01-17T12:13:09.767689469Z" level=info msg="StartContainer for \"a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c\" returns successfully" Jan 17 12:13:09.769230 systemd[1]: cri-containerd-a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c.scope: Deactivated successfully. Jan 17 12:13:09.799048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c-rootfs.mount: Deactivated successfully. 
Jan 17 12:13:09.806191 containerd[1482]: time="2025-01-17T12:13:09.806132532Z" level=info msg="shim disconnected" id=a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c namespace=k8s.io Jan 17 12:13:09.806334 containerd[1482]: time="2025-01-17T12:13:09.806190341Z" level=warning msg="cleaning up after shim disconnected" id=a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c namespace=k8s.io Jan 17 12:13:09.806334 containerd[1482]: time="2025-01-17T12:13:09.806205359Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:13:10.662323 containerd[1482]: time="2025-01-17T12:13:10.659324768Z" level=info msg="CreateContainer within sandbox \"295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:13:10.697839 containerd[1482]: time="2025-01-17T12:13:10.697716752Z" level=info msg="CreateContainer within sandbox \"295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0\"" Jan 17 12:13:10.703011 containerd[1482]: time="2025-01-17T12:13:10.702924986Z" level=info msg="StartContainer for \"fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0\"" Jan 17 12:13:10.754410 systemd[1]: Started cri-containerd-fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0.scope - libcontainer container fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0. Jan 17 12:13:10.778829 systemd[1]: cri-containerd-fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0.scope: Deactivated successfully. Jan 17 12:13:10.783837 containerd[1482]: time="2025-01-17T12:13:10.783796706Z" level=info msg="StartContainer for \"fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0\" returns successfully" Jan 17 12:13:10.803662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0-rootfs.mount: Deactivated successfully. 
Jan 17 12:13:10.816744 containerd[1482]: time="2025-01-17T12:13:10.816636091Z" level=info msg="shim disconnected" id=fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0 namespace=k8s.io Jan 17 12:13:10.816744 containerd[1482]: time="2025-01-17T12:13:10.816736149Z" level=warning msg="cleaning up after shim disconnected" id=fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0 namespace=k8s.io Jan 17 12:13:10.816744 containerd[1482]: time="2025-01-17T12:13:10.816747540Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:13:11.684325 containerd[1482]: time="2025-01-17T12:13:11.681100652Z" level=info msg="CreateContainer within sandbox \"295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:13:11.753006 containerd[1482]: time="2025-01-17T12:13:11.752965075Z" level=info msg="CreateContainer within sandbox \"295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b\"" Jan 17 12:13:11.753964 containerd[1482]: time="2025-01-17T12:13:11.753927872Z" level=info msg="StartContainer for \"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b\"" Jan 17 12:13:11.782615 systemd[1]: run-containerd-runc-k8s.io-5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b-runc.uftbk0.mount: Deactivated successfully. Jan 17 12:13:11.798424 systemd[1]: Started cri-containerd-5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b.scope - libcontainer container 5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b. Jan 17 12:13:11.833114 containerd[1482]: time="2025-01-17T12:13:11.832961898Z" level=info msg="StartContainer for \"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b\" returns successfully" Jan 17 12:13:11.914188 kubelet[2695]: I0117 12:13:11.913936 2695 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:13:11.988305 kubelet[2695]: I0117 12:13:11.987696 2695 topology_manager.go:215] "Topology Admit Handler" podUID="5ddc70f0-45de-4a66-a59f-1c02f495addb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nnxt4" Jan 17 12:13:11.991554 kubelet[2695]: I0117 12:13:11.991434 2695 topology_manager.go:215] "Topology Admit Handler" podUID="b9b6b3b2-2c3b-4990-9ad5-65437b54333d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9dqmb" Jan 17 12:13:11.998204 systemd[1]: Created slice kubepods-burstable-pod5ddc70f0_45de_4a66_a59f_1c02f495addb.slice - libcontainer container kubepods-burstable-pod5ddc70f0_45de_4a66_a59f_1c02f495addb.slice. Jan 17 12:13:12.007203 systemd[1]: Created slice kubepods-burstable-podb9b6b3b2_2c3b_4990_9ad5_65437b54333d.slice - libcontainer container kubepods-burstable-podb9b6b3b2_2c3b_4990_9ad5_65437b54333d.slice. 
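The five CreateContainer calls above run strictly in sequence inside the cilium-qlnmc sandbox: each init container's scope is deactivated and its shim disconnects before containerd creates the next, ending with the long-running cilium-agent. A small sketch (same assumed journal.txt) that recovers that order from the request lines:

import re

# Recover the creation order of containers in the cilium-qlnmc sandbox
# (295b2450...): mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs,
# clean-cilium-state, then cilium-agent.
pat = re.compile(r'CreateContainer within sandbox \\?"295b2450[0-9a-f]*\\?" '
                 r'for container &ContainerMetadata\{Name:(?P<name>[a-z-]+),')

with open("journal.txt") as fh:
    order = [m.group("name") for line in fh if (m := pat.search(line))]
print(" -> ".join(order))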
Jan 17 12:13:12.110386 kubelet[2695]: I0117 12:13:12.110274 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9b6b3b2-2c3b-4990-9ad5-65437b54333d-config-volume\") pod \"coredns-7db6d8ff4d-9dqmb\" (UID: \"b9b6b3b2-2c3b-4990-9ad5-65437b54333d\") " pod="kube-system/coredns-7db6d8ff4d-9dqmb" Jan 17 12:13:12.110386 kubelet[2695]: I0117 12:13:12.110359 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ddc70f0-45de-4a66-a59f-1c02f495addb-config-volume\") pod \"coredns-7db6d8ff4d-nnxt4\" (UID: \"5ddc70f0-45de-4a66-a59f-1c02f495addb\") " pod="kube-system/coredns-7db6d8ff4d-nnxt4" Jan 17 12:13:12.110386 kubelet[2695]: I0117 12:13:12.110390 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sq8s\" (UniqueName: \"kubernetes.io/projected/5ddc70f0-45de-4a66-a59f-1c02f495addb-kube-api-access-2sq8s\") pod \"coredns-7db6d8ff4d-nnxt4\" (UID: \"5ddc70f0-45de-4a66-a59f-1c02f495addb\") " pod="kube-system/coredns-7db6d8ff4d-nnxt4" Jan 17 12:13:12.110572 kubelet[2695]: I0117 12:13:12.110432 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d292g\" (UniqueName: \"kubernetes.io/projected/b9b6b3b2-2c3b-4990-9ad5-65437b54333d-kube-api-access-d292g\") pod \"coredns-7db6d8ff4d-9dqmb\" (UID: \"b9b6b3b2-2c3b-4990-9ad5-65437b54333d\") " pod="kube-system/coredns-7db6d8ff4d-9dqmb" Jan 17 12:13:12.304139 containerd[1482]: time="2025-01-17T12:13:12.303560423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nnxt4,Uid:5ddc70f0-45de-4a66-a59f-1c02f495addb,Namespace:kube-system,Attempt:0,}" Jan 17 12:13:12.310344 containerd[1482]: time="2025-01-17T12:13:12.310305760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9dqmb,Uid:b9b6b3b2-2c3b-4990-9ad5-65437b54333d,Namespace:kube-system,Attempt:0,}" Jan 17 12:13:12.703225 kubelet[2695]: I0117 12:13:12.702331 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qlnmc" podStartSLOduration=6.249945994 podStartE2EDuration="16.70229551s" podCreationTimestamp="2025-01-17 12:12:56 +0000 UTC" firstStartedPulling="2025-01-17 12:12:56.955550415 +0000 UTC m=+15.622155394" lastFinishedPulling="2025-01-17 12:13:07.407899931 +0000 UTC m=+26.074504910" observedRunningTime="2025-01-17 12:13:12.700626839 +0000 UTC m=+31.367231858" watchObservedRunningTime="2025-01-17 12:13:12.70229551 +0000 UTC m=+31.368900529" Jan 17 12:13:14.365690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2727952199.mount: Deactivated successfully. 
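The pod_startup_latency_tracker entry for cilium-qlnmc is internally consistent: podStartE2EDuration (16.70229551s) minus the image-pull window (lastFinishedPulling minus firstStartedPulling, about 10.4523s) gives podStartSLOduration (6.249945994s); the SLO metric excludes pull time. A quick check of that arithmetic, with the timestamps truncated to microseconds since Python's datetime keeps no nanoseconds:

from datetime import datetime

# Verify: podStartSLOduration == podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
fmt = "%Y-%m-%d %H:%M:%S.%f %z"
first_pull = datetime.strptime("2025-01-17 12:12:56.955550 +0000", fmt)
last_pull  = datetime.strptime("2025-01-17 12:13:07.407899 +0000", fmt)

e2e = 16.70229551                                  # podStartE2EDuration, seconds
pull_window = (last_pull - first_pull).total_seconds()

print(f"pull window: {pull_window:.6f}s")           # 10.452349s
print(f"e2e minus pull: {e2e - pull_window:.6f}s")  # ~6.24995s, matching podStartSLOduration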
Jan 17 12:13:15.001586 containerd[1482]: time="2025-01-17T12:13:15.000727681Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:15.003152 containerd[1482]: time="2025-01-17T12:13:15.003121352Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907261" Jan 17 12:13:15.005020 containerd[1482]: time="2025-01-17T12:13:15.004997973Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:15.006562 containerd[1482]: time="2025-01-17T12:13:15.006164091Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.597158644s" Jan 17 12:13:15.006651 containerd[1482]: time="2025-01-17T12:13:15.006632800Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 12:13:15.009587 containerd[1482]: time="2025-01-17T12:13:15.009524857Z" level=info msg="CreateContainer within sandbox \"c681f5a5e68ef73d70d837b809846b7c018be6858f8d9453318449d9838e1ea6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 12:13:15.037205 containerd[1482]: time="2025-01-17T12:13:15.036883369Z" level=info msg="CreateContainer within sandbox \"c681f5a5e68ef73d70d837b809846b7c018be6858f8d9453318449d9838e1ea6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7\"" Jan 17 12:13:15.038196 containerd[1482]: time="2025-01-17T12:13:15.038124517Z" level=info msg="StartContainer for \"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7\"" Jan 17 12:13:15.091501 systemd[1]: Started cri-containerd-f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7.scope - libcontainer container f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7. 
Jan 17 12:13:15.132460 containerd[1482]: time="2025-01-17T12:13:15.131115273Z" level=info msg="StartContainer for \"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7\" returns successfully" Jan 17 12:13:18.978741 systemd-networkd[1394]: cilium_host: Link UP Jan 17 12:13:18.979113 systemd-networkd[1394]: cilium_net: Link UP Jan 17 12:13:18.982537 systemd-networkd[1394]: cilium_net: Gained carrier Jan 17 12:13:18.982944 systemd-networkd[1394]: cilium_host: Gained carrier Jan 17 12:13:18.983286 systemd-networkd[1394]: cilium_net: Gained IPv6LL Jan 17 12:13:18.986793 systemd-networkd[1394]: cilium_host: Gained IPv6LL Jan 17 12:13:19.107651 systemd-networkd[1394]: cilium_vxlan: Link UP Jan 17 12:13:19.107660 systemd-networkd[1394]: cilium_vxlan: Gained carrier Jan 17 12:13:19.539312 kernel: NET: Registered PF_ALG protocol family Jan 17 12:13:20.509821 systemd-networkd[1394]: lxc_health: Link UP Jan 17 12:13:20.513481 systemd-networkd[1394]: lxc_health: Gained carrier Jan 17 12:13:20.869050 kubelet[2695]: I0117 12:13:20.867315 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-xs5rk" podStartSLOduration=6.9935853980000005 podStartE2EDuration="24.867296471s" podCreationTimestamp="2025-01-17 12:12:56 +0000 UTC" firstStartedPulling="2025-01-17 12:12:57.133958302 +0000 UTC m=+15.800563271" lastFinishedPulling="2025-01-17 12:13:15.007669375 +0000 UTC m=+33.674274344" observedRunningTime="2025-01-17 12:13:15.711044322 +0000 UTC m=+34.377649291" watchObservedRunningTime="2025-01-17 12:13:20.867296471 +0000 UTC m=+39.533901440" Jan 17 12:13:20.897304 systemd-networkd[1394]: lxcd7b7abc397ad: Link UP Jan 17 12:13:20.903378 kernel: eth0: renamed from tmp8e519 Jan 17 12:13:20.918493 systemd-networkd[1394]: lxcd7b7abc397ad: Gained carrier Jan 17 12:13:20.932960 systemd-networkd[1394]: lxc0dba7ca828ea: Link UP Jan 17 12:13:20.939283 kernel: eth0: renamed from tmpc35f7 Jan 17 12:13:20.950295 systemd-networkd[1394]: lxc0dba7ca828ea: Gained carrier Jan 17 12:13:21.090453 systemd-networkd[1394]: cilium_vxlan: Gained IPv6LL Jan 17 12:13:22.117654 systemd-networkd[1394]: lxcd7b7abc397ad: Gained IPv6LL Jan 17 12:13:22.370515 systemd-networkd[1394]: lxc_health: Gained IPv6LL Jan 17 12:13:22.882667 systemd-networkd[1394]: lxc0dba7ca828ea: Gained IPv6LL Jan 17 12:13:25.423443 containerd[1482]: time="2025-01-17T12:13:25.423327468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:13:25.424669 containerd[1482]: time="2025-01-17T12:13:25.423513306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:13:25.424669 containerd[1482]: time="2025-01-17T12:13:25.424439555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:13:25.424669 containerd[1482]: time="2025-01-17T12:13:25.424534773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:13:25.465489 systemd[1]: Started cri-containerd-8e51935fc6094a8e206912cfc3407e329822dbc9c9c224ea187d4d3b0c759b7a.scope - libcontainer container 8e51935fc6094a8e206912cfc3407e329822dbc9c9c224ea187d4d3b0c759b7a. Jan 17 12:13:25.499312 containerd[1482]: time="2025-01-17T12:13:25.498603528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:13:25.499312 containerd[1482]: time="2025-01-17T12:13:25.498691453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:13:25.499312 containerd[1482]: time="2025-01-17T12:13:25.498863726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:13:25.499753 containerd[1482]: time="2025-01-17T12:13:25.499704744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:13:25.539774 systemd[1]: Started cri-containerd-c35f7eebb700ab5dfb432768c7bdc39478eac23d606a6c56844e93b0b2a6346b.scope - libcontainer container c35f7eebb700ab5dfb432768c7bdc39478eac23d606a6c56844e93b0b2a6346b. Jan 17 12:13:25.542402 containerd[1482]: time="2025-01-17T12:13:25.542365041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nnxt4,Uid:5ddc70f0-45de-4a66-a59f-1c02f495addb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e51935fc6094a8e206912cfc3407e329822dbc9c9c224ea187d4d3b0c759b7a\"" Jan 17 12:13:25.548453 containerd[1482]: time="2025-01-17T12:13:25.548419167Z" level=info msg="CreateContainer within sandbox \"8e51935fc6094a8e206912cfc3407e329822dbc9c9c224ea187d4d3b0c759b7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:13:25.580398 containerd[1482]: time="2025-01-17T12:13:25.580350317Z" level=info msg="CreateContainer within sandbox \"8e51935fc6094a8e206912cfc3407e329822dbc9c9c224ea187d4d3b0c759b7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd96a5ebe22576406f15e191c537cf8d9adfc4ae593f2a0dacce5bf671f4f6b0\"" Jan 17 12:13:25.582338 containerd[1482]: time="2025-01-17T12:13:25.581301382Z" level=info msg="StartContainer for \"dd96a5ebe22576406f15e191c537cf8d9adfc4ae593f2a0dacce5bf671f4f6b0\"" Jan 17 12:13:25.613287 containerd[1482]: time="2025-01-17T12:13:25.613216360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9dqmb,Uid:b9b6b3b2-2c3b-4990-9ad5-65437b54333d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c35f7eebb700ab5dfb432768c7bdc39478eac23d606a6c56844e93b0b2a6346b\"" Jan 17 12:13:25.622202 containerd[1482]: time="2025-01-17T12:13:25.622046605Z" level=info msg="CreateContainer within sandbox \"c35f7eebb700ab5dfb432768c7bdc39478eac23d606a6c56844e93b0b2a6346b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:13:25.641264 systemd[1]: Started cri-containerd-dd96a5ebe22576406f15e191c537cf8d9adfc4ae593f2a0dacce5bf671f4f6b0.scope - libcontainer container dd96a5ebe22576406f15e191c537cf8d9adfc4ae593f2a0dacce5bf671f4f6b0. 
Jan 17 12:13:25.663958 containerd[1482]: time="2025-01-17T12:13:25.663658996Z" level=info msg="CreateContainer within sandbox \"c35f7eebb700ab5dfb432768c7bdc39478eac23d606a6c56844e93b0b2a6346b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ef061be3feccd34ecbd513d3f58294210f23d06e44ebf584a7f7f55b9e0b902\"" Jan 17 12:13:25.666307 containerd[1482]: time="2025-01-17T12:13:25.665430098Z" level=info msg="StartContainer for \"7ef061be3feccd34ecbd513d3f58294210f23d06e44ebf584a7f7f55b9e0b902\"" Jan 17 12:13:25.710701 containerd[1482]: time="2025-01-17T12:13:25.709970942Z" level=info msg="StartContainer for \"dd96a5ebe22576406f15e191c537cf8d9adfc4ae593f2a0dacce5bf671f4f6b0\" returns successfully" Jan 17 12:13:25.716483 systemd[1]: Started cri-containerd-7ef061be3feccd34ecbd513d3f58294210f23d06e44ebf584a7f7f55b9e0b902.scope - libcontainer container 7ef061be3feccd34ecbd513d3f58294210f23d06e44ebf584a7f7f55b9e0b902. Jan 17 12:13:25.743070 kubelet[2695]: I0117 12:13:25.742437 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nnxt4" podStartSLOduration=29.742417118 podStartE2EDuration="29.742417118s" podCreationTimestamp="2025-01-17 12:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:13:25.740976717 +0000 UTC m=+44.407581696" watchObservedRunningTime="2025-01-17 12:13:25.742417118 +0000 UTC m=+44.409022087" Jan 17 12:13:25.794322 containerd[1482]: time="2025-01-17T12:13:25.793293608Z" level=info msg="StartContainer for \"7ef061be3feccd34ecbd513d3f58294210f23d06e44ebf584a7f7f55b9e0b902\" returns successfully" Jan 17 12:13:26.757493 kubelet[2695]: I0117 12:13:26.756661 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9dqmb" podStartSLOduration=30.756626625 podStartE2EDuration="30.756626625s" podCreationTimestamp="2025-01-17 12:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:13:26.755532193 +0000 UTC m=+45.422137212" watchObservedRunningTime="2025-01-17 12:13:26.756626625 +0000 UTC m=+45.423231644" Jan 17 12:13:47.066734 systemd[1]: Started sshd@9-172.24.4.139:22-172.24.4.1:35176.service - OpenSSH per-connection server daemon (172.24.4.1:35176). Jan 17 12:13:48.248097 sshd[4067]: Accepted publickey for core from 172.24.4.1 port 35176 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:13:48.250968 sshd[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:13:48.262862 systemd-logind[1464]: New session 12 of user core. Jan 17 12:13:48.275563 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:13:48.993673 sshd[4067]: pam_unix(sshd:session): session closed for user core Jan 17 12:13:48.999324 systemd[1]: sshd@9-172.24.4.139:22-172.24.4.1:35176.service: Deactivated successfully. Jan 17 12:13:49.002735 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:13:49.003854 systemd-logind[1464]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:13:49.005460 systemd-logind[1464]: Removed session 12. Jan 17 12:13:54.019897 systemd[1]: Started sshd@10-172.24.4.139:22-172.24.4.1:35072.service - OpenSSH per-connection server daemon (172.24.4.1:35072). 
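One detail worth noticing in the interface bring-up above: the kernel's "eth0: renamed from tmp8e519" and "renamed from tmpc35f7" messages line up with the coredns sandbox ids 8e51935f... and c35f7eeb..., which suggests the temporary veth name reuses the leading hex of the sandbox id; that is what lets the lxc* host interfaces be traced back to pods. A sketch pairing the rename messages with sandbox ids (same assumed journal.txt):

import re

# Pair kernel veth rename messages ("renamed from tmpXXXXX") with the pod
# sandbox ids created earlier; the tmp name appears to reuse the id's prefix.
renames, sandboxes = set(), {}
with open("journal.txt") as fh:
    for line in fh:
        if (m := re.search(r'renamed from tmp([0-9a-f]{5})', line)):
            renames.add(m.group(1))
        if (m := re.search(r'returns sandbox id \\?"([0-9a-f]{64})\\?"', line)):
            sandboxes[m.group(1)[:5]] = m.group(1)

for prefix in sorted(renames):
    print(prefix, "->", sandboxes.get(prefix, "no matching sandbox"))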
Jan 17 12:13:55.352773 sshd[4081]: Accepted publickey for core from 172.24.4.1 port 35072 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:13:55.356864 sshd[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:13:55.367382 systemd-logind[1464]: New session 13 of user core. Jan 17 12:13:55.375591 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:13:56.178611 sshd[4081]: pam_unix(sshd:session): session closed for user core Jan 17 12:13:56.185933 systemd[1]: sshd@10-172.24.4.139:22-172.24.4.1:35072.service: Deactivated successfully. Jan 17 12:13:56.190016 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:13:56.191744 systemd-logind[1464]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:13:56.194151 systemd-logind[1464]: Removed session 13. Jan 17 12:14:01.199860 systemd[1]: Started sshd@11-172.24.4.139:22-172.24.4.1:35076.service - OpenSSH per-connection server daemon (172.24.4.1:35076). Jan 17 12:14:02.431912 sshd[4097]: Accepted publickey for core from 172.24.4.1 port 35076 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:14:02.436453 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:02.455366 systemd-logind[1464]: New session 14 of user core. Jan 17 12:14:02.462555 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:14:03.127105 sshd[4097]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:03.136024 systemd[1]: sshd@11-172.24.4.139:22-172.24.4.1:35076.service: Deactivated successfully. Jan 17 12:14:03.141886 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:14:03.145389 systemd-logind[1464]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:14:03.148499 systemd-logind[1464]: Removed session 14. Jan 17 12:14:05.010626 update_engine[1465]: I20250117 12:14:05.010535 1465 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 12:14:05.010626 update_engine[1465]: I20250117 12:14:05.010617 1465 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 12:14:05.011485 update_engine[1465]: I20250117 12:14:05.010951 1465 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 12:14:05.011945 update_engine[1465]: I20250117 12:14:05.011837 1465 omaha_request_params.cc:62] Current group set to lts Jan 17 12:14:05.012094 update_engine[1465]: I20250117 12:14:05.012049 1465 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 12:14:05.012094 update_engine[1465]: I20250117 12:14:05.012074 1465 update_attempter.cc:643] Scheduling an action processor start. 
Jan 17 12:14:05.012215 update_engine[1465]: I20250117 12:14:05.012104 1465 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 12:14:05.012329 update_engine[1465]: I20250117 12:14:05.012203 1465 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 12:14:05.012392 update_engine[1465]: I20250117 12:14:05.012365 1465 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 12:14:05.012450 update_engine[1465]: I20250117 12:14:05.012391 1465 omaha_request_action.cc:272] Request: Jan 17 12:14:05.012450 update_engine[1465]: Jan 17 12:14:05.012450 update_engine[1465]: Jan 17 12:14:05.012450 update_engine[1465]: Jan 17 12:14:05.012450 update_engine[1465]: Jan 17 12:14:05.012450 update_engine[1465]: Jan 17 12:14:05.012450 update_engine[1465]: Jan 17 12:14:05.012450 update_engine[1465]: Jan 17 12:14:05.012450 update_engine[1465]: Jan 17 12:14:05.012450 update_engine[1465]: I20250117 12:14:05.012405 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:14:05.013629 locksmithd[1487]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 12:14:05.014969 update_engine[1465]: I20250117 12:14:05.014904 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:14:05.015673 update_engine[1465]: I20250117 12:14:05.015463 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:14:05.028169 update_engine[1465]: E20250117 12:14:05.028081 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:14:05.028330 update_engine[1465]: I20250117 12:14:05.028215 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 12:14:08.156822 systemd[1]: Started sshd@12-172.24.4.139:22-172.24.4.1:59346.service - OpenSSH per-connection server daemon (172.24.4.1:59346). Jan 17 12:14:09.461457 sshd[4111]: Accepted publickey for core from 172.24.4.1 port 59346 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:14:09.464068 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:09.475608 systemd-logind[1464]: New session 15 of user core. Jan 17 12:14:09.480551 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:14:10.207614 sshd[4111]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:10.216090 systemd[1]: sshd@12-172.24.4.139:22-172.24.4.1:59346.service: Deactivated successfully. Jan 17 12:14:10.218823 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:14:10.219975 systemd-logind[1464]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:14:10.227572 systemd[1]: Started sshd@13-172.24.4.139:22-172.24.4.1:59348.service - OpenSSH per-connection server daemon (172.24.4.1:59348). Jan 17 12:14:10.229189 systemd-logind[1464]: Removed session 15. Jan 17 12:14:11.429009 sshd[4124]: Accepted publickey for core from 172.24.4.1 port 59348 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:14:11.432087 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:11.442379 systemd-logind[1464]: New session 16 of user core. Jan 17 12:14:11.450584 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 17 12:14:12.275151 sshd[4124]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:12.287873 systemd[1]: sshd@13-172.24.4.139:22-172.24.4.1:59348.service: Deactivated successfully. Jan 17 12:14:12.291725 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:14:12.296565 systemd-logind[1464]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:14:12.303865 systemd[1]: Started sshd@14-172.24.4.139:22-172.24.4.1:59354.service - OpenSSH per-connection server daemon (172.24.4.1:59354). Jan 17 12:14:12.310404 systemd-logind[1464]: Removed session 16. Jan 17 12:14:13.357491 sshd[4135]: Accepted publickey for core from 172.24.4.1 port 59354 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:14:13.361595 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:13.374730 systemd-logind[1464]: New session 17 of user core. Jan 17 12:14:13.387785 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:14:14.062764 sshd[4135]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:14.068752 systemd[1]: sshd@14-172.24.4.139:22-172.24.4.1:59354.service: Deactivated successfully. Jan 17 12:14:14.073595 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:14:14.077625 systemd-logind[1464]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:14:14.080285 systemd-logind[1464]: Removed session 17. Jan 17 12:14:15.010987 update_engine[1465]: I20250117 12:14:15.010805 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:14:15.011695 update_engine[1465]: I20250117 12:14:15.011209 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:14:15.011695 update_engine[1465]: I20250117 12:14:15.011643 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:14:15.033516 update_engine[1465]: E20250117 12:14:15.033423 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:14:15.033643 update_engine[1465]: I20250117 12:14:15.033540 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 17 12:14:19.081797 systemd[1]: Started sshd@15-172.24.4.139:22-172.24.4.1:46748.service - OpenSSH per-connection server daemon (172.24.4.1:46748). Jan 17 12:14:20.295725 sshd[4147]: Accepted publickey for core from 172.24.4.1 port 46748 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:14:20.298771 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:20.309111 systemd-logind[1464]: New session 18 of user core. Jan 17 12:14:20.315597 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:14:21.416499 sshd[4147]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:21.424827 systemd[1]: sshd@15-172.24.4.139:22-172.24.4.1:46748.service: Deactivated successfully. Jan 17 12:14:21.426127 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:14:21.429315 systemd-logind[1464]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:14:21.436841 systemd[1]: Started sshd@16-172.24.4.139:22-172.24.4.1:46764.service - OpenSSH per-connection server daemon (172.24.4.1:46764). Jan 17 12:14:21.440410 systemd-logind[1464]: Removed session 18. 
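The SSH traffic in this stretch follows one fixed pattern: a per-connection sshd@N-<local>:22-<remote>:<port>.service unit, an "Accepted publickey" for user core, a session-N.scope, then teardown in reverse order. A sketch pairing each session's open and close lines to report how long it stayed open (same assumed journal.txt, one entry per line; the journal timestamps carry no year, so 2025 is supplied from the dates in the log):

import re
from datetime import datetime

stamp = re.compile(r'(\w{3} \d+ \d{2}:\d{2}:\d{2}\.\d+)')

def ts(line):
    # Prepend the year, since syslog-style timestamps omit it.
    return datetime.strptime("2025 " + stamp.match(line).group(1), "%Y %b %d %H:%M:%S.%f")

opened, durations = {}, {}
with open("journal.txt") as fh:
    for line in fh:
        if (m := re.search(r"New session (\d+) of user core", line)):
            opened[m.group(1)] = ts(line)
        elif (m := re.search(r"Removed session (\d+)", line)) and m.group(1) in opened:
            durations[m.group(1)] = ts(line) - opened[m.group(1)]

for sid, dur in sorted(durations.items(), key=lambda kv: int(kv[0])):
    print(f"session {sid}: open for {dur}")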
Jan 17 12:14:22.792732 sshd[4159]: Accepted publickey for core from 172.24.4.1 port 46764 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:14:22.795804 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:22.808021 systemd-logind[1464]: New session 19 of user core. Jan 17 12:14:22.813634 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:14:23.621621 sshd[4159]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:23.636884 systemd[1]: sshd@16-172.24.4.139:22-172.24.4.1:46764.service: Deactivated successfully. Jan 17 12:14:23.640671 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:14:23.644505 systemd-logind[1464]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:14:23.654843 systemd[1]: Started sshd@17-172.24.4.139:22-172.24.4.1:46544.service - OpenSSH per-connection server daemon (172.24.4.1:46544). Jan 17 12:14:23.660921 systemd-logind[1464]: Removed session 19. Jan 17 12:14:25.011368 update_engine[1465]: I20250117 12:14:25.011150 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:14:25.012062 update_engine[1465]: I20250117 12:14:25.011677 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:14:25.012190 update_engine[1465]: I20250117 12:14:25.012074 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:14:25.023045 update_engine[1465]: E20250117 12:14:25.022935 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:14:25.023185 update_engine[1465]: I20250117 12:14:25.023101 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 17 12:14:25.088481 sshd[4172]: Accepted publickey for core from 172.24.4.1 port 46544 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:14:25.091399 sshd[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:25.102352 systemd-logind[1464]: New session 20 of user core. Jan 17 12:14:25.107555 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:14:27.615539 sshd[4172]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:27.625308 systemd[1]: sshd@17-172.24.4.139:22-172.24.4.1:46544.service: Deactivated successfully. Jan 17 12:14:27.626897 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:14:27.629626 systemd-logind[1464]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:14:27.638817 systemd[1]: Started sshd@18-172.24.4.139:22-172.24.4.1:46554.service - OpenSSH per-connection server daemon (172.24.4.1:46554). Jan 17 12:14:27.641637 systemd-logind[1464]: Removed session 20. Jan 17 12:14:28.779597 sshd[4193]: Accepted publickey for core from 172.24.4.1 port 46554 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:14:28.783467 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:28.796514 systemd-logind[1464]: New session 21 of user core. Jan 17 12:14:28.805597 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:14:29.654556 sshd[4193]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:29.675421 systemd[1]: sshd@18-172.24.4.139:22-172.24.4.1:46554.service: Deactivated successfully. Jan 17 12:14:29.684889 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:14:29.689379 systemd-logind[1464]: Session 21 logged out. Waiting for processes to exit. 
Jan 17 12:14:29.697881 systemd[1]: Started sshd@19-172.24.4.139:22-172.24.4.1:46560.service - OpenSSH per-connection server daemon (172.24.4.1:46560). Jan 17 12:14:29.701217 systemd-logind[1464]: Removed session 21. Jan 17 12:14:30.972875 sshd[4204]: Accepted publickey for core from 172.24.4.1 port 46560 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:14:30.976077 sshd[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:30.986355 systemd-logind[1464]: New session 22 of user core. Jan 17 12:14:30.992731 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:14:31.851867 sshd[4204]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:31.856079 systemd[1]: sshd@19-172.24.4.139:22-172.24.4.1:46560.service: Deactivated successfully. Jan 17 12:14:31.858534 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:14:31.859840 systemd-logind[1464]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:14:31.861484 systemd-logind[1464]: Removed session 22. Jan 17 12:14:35.010804 update_engine[1465]: I20250117 12:14:35.010589 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:14:35.011501 update_engine[1465]: I20250117 12:14:35.011361 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:14:35.011817 update_engine[1465]: I20250117 12:14:35.011739 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:14:35.022644 update_engine[1465]: E20250117 12:14:35.022528 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:14:35.022815 update_engine[1465]: I20250117 12:14:35.022670 1465 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 12:14:35.022815 update_engine[1465]: I20250117 12:14:35.022691 1465 omaha_request_action.cc:617] Omaha request response: Jan 17 12:14:35.022943 update_engine[1465]: E20250117 12:14:35.022879 1465 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 17 12:14:35.022943 update_engine[1465]: I20250117 12:14:35.022928 1465 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 17 12:14:35.023062 update_engine[1465]: I20250117 12:14:35.022943 1465 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:14:35.023062 update_engine[1465]: I20250117 12:14:35.022954 1465 update_attempter.cc:306] Processing Done. Jan 17 12:14:35.023062 update_engine[1465]: E20250117 12:14:35.022979 1465 update_attempter.cc:619] Update failed. Jan 17 12:14:35.023062 update_engine[1465]: I20250117 12:14:35.022992 1465 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 17 12:14:35.023062 update_engine[1465]: I20250117 12:14:35.023005 1465 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 17 12:14:35.023062 update_engine[1465]: I20250117 12:14:35.023016 1465 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 17 12:14:35.023562 update_engine[1465]: I20250117 12:14:35.023160 1465 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 12:14:35.023562 update_engine[1465]: I20250117 12:14:35.023207 1465 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 12:14:35.023562 update_engine[1465]: I20250117 12:14:35.023221 1465 omaha_request_action.cc:272] Request: Jan 17 12:14:35.023562 update_engine[1465]: Jan 17 12:14:35.023562 update_engine[1465]: Jan 17 12:14:35.023562 update_engine[1465]: Jan 17 12:14:35.023562 update_engine[1465]: Jan 17 12:14:35.023562 update_engine[1465]: Jan 17 12:14:35.023562 update_engine[1465]: Jan 17 12:14:35.023562 update_engine[1465]: I20250117 12:14:35.023234 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:14:35.024069 update_engine[1465]: I20250117 12:14:35.023604 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:14:35.024069 update_engine[1465]: I20250117 12:14:35.023962 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:14:35.024691 locksmithd[1487]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 17 12:14:35.034724 update_engine[1465]: E20250117 12:14:35.034634 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:14:35.034857 update_engine[1465]: I20250117 12:14:35.034739 1465 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 12:14:35.034857 update_engine[1465]: I20250117 12:14:35.034758 1465 omaha_request_action.cc:617] Omaha request response: Jan 17 12:14:35.034857 update_engine[1465]: I20250117 12:14:35.034773 1465 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:14:35.034857 update_engine[1465]: I20250117 12:14:35.034785 1465 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:14:35.034857 update_engine[1465]: I20250117 12:14:35.034796 1465 update_attempter.cc:306] Processing Done. Jan 17 12:14:35.034857 update_engine[1465]: I20250117 12:14:35.034809 1465 update_attempter.cc:310] Error event sent. Jan 17 12:14:35.034857 update_engine[1465]: I20250117 12:14:35.034827 1465 update_check_scheduler.cc:74] Next update check in 42m38s Jan 17 12:14:35.035543 locksmithd[1487]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 17 12:14:36.874870 systemd[1]: Started sshd@20-172.24.4.139:22-172.24.4.1:43504.service - OpenSSH per-connection server daemon (172.24.4.1:43504). Jan 17 12:14:38.138544 sshd[4220]: Accepted publickey for core from 172.24.4.1 port 43504 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:14:38.141526 sshd[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:38.153518 systemd-logind[1464]: New session 23 of user core. Jan 17 12:14:38.164594 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:14:39.013623 sshd[4220]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:39.020058 systemd[1]: sshd@20-172.24.4.139:22-172.24.4.1:43504.service: Deactivated successfully. Jan 17 12:14:39.024078 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:14:39.027131 systemd-logind[1464]: Session 23 logged out. Waiting for processes to exit. 
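The update_engine exchange above fails for a mundane reason: the Omaha request is being posted to the literal host "disabled" ("Posting an Omaha request to disabled"), which on Flatcar is the conventional way of switching the update server off, so DNS resolution can never succeed. The fetcher retries three times roughly ten seconds apart, converts the dead transfer into error 2000 (kActionCodeOmahaErrorInHTTPResponse, payload error 37), reports the error event, and backs off for 42m38s. The retry cadence can be read straight off the "Setting up timeout source" timestamps:

from datetime import datetime

# The four fetch attempts above, taken from the "Setting up timeout source" lines.
attempts = ["12:14:05.015463", "12:14:15.011643", "12:14:25.012074", "12:14:35.011739"]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in attempts]
gaps = [round((b - a).total_seconds(), 3) for a, b in zip(times, times[1:])]
print(gaps)   # [9.996, 10.0, 10.0] -> three retries roughly 10 s apart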
Jan 17 12:14:39.029927 systemd-logind[1464]: Removed session 23. Jan 17 12:14:44.033789 systemd[1]: Started sshd@21-172.24.4.139:22-172.24.4.1:57358.service - OpenSSH per-connection server daemon (172.24.4.1:57358). Jan 17 12:14:45.271223 sshd[4235]: Accepted publickey for core from 172.24.4.1 port 57358 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:14:45.274372 sshd[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:45.286299 systemd-logind[1464]: New session 24 of user core. Jan 17 12:14:45.292958 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:14:45.918724 sshd[4235]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:45.921512 systemd[1]: sshd@21-172.24.4.139:22-172.24.4.1:57358.service: Deactivated successfully. Jan 17 12:14:45.924275 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:14:45.926842 systemd-logind[1464]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:14:45.927950 systemd-logind[1464]: Removed session 24. Jan 17 12:14:50.945872 systemd[1]: Started sshd@22-172.24.4.139:22-172.24.4.1:57372.service - OpenSSH per-connection server daemon (172.24.4.1:57372). Jan 17 12:14:52.459164 sshd[4248]: Accepted publickey for core from 172.24.4.1 port 57372 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:14:52.462034 sshd[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:52.472477 systemd-logind[1464]: New session 25 of user core. Jan 17 12:14:52.480583 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:14:53.028176 sshd[4248]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:53.041847 systemd[1]: sshd@22-172.24.4.139:22-172.24.4.1:57372.service: Deactivated successfully. Jan 17 12:14:53.046786 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:14:53.048779 systemd-logind[1464]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:14:53.056803 systemd[1]: Started sshd@23-172.24.4.139:22-172.24.4.1:57376.service - OpenSSH per-connection server daemon (172.24.4.1:57376). Jan 17 12:14:53.060554 systemd-logind[1464]: Removed session 25. Jan 17 12:14:54.249341 sshd[4260]: Accepted publickey for core from 172.24.4.1 port 57376 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:14:54.252080 sshd[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:54.266922 systemd-logind[1464]: New session 26 of user core. Jan 17 12:14:54.271602 systemd[1]: Started session-26.scope - Session 26 of User core. 
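Each SSH connection in this log shows up twice in systemd terms: as an instanced sshd@....service unit (Flatcar runs sshd socket-activated, so systemd spawns one "per-connection server daemon" per TCP accept) and as a session-N.scope that systemd-logind creates once pam_unix opens the session. The instance name packs a connection counter plus the local and remote endpoints; the helper below reconstructs that convention, inferred from the unit names in this log rather than from any specification:

    # Rebuilds the per-connection unit name format seen above; the assert
    # checks it against the sshd@20-... unit logged at 12:14:36.
    def sshd_unit(counter: int, local: str, lport: int, peer: str, pport: int) -> str:
        return f"sshd@{counter}-{local}:{lport}-{peer}:{pport}.service"

    assert sshd_unit(20, "172.24.4.139", 22, "172.24.4.1", 43504) == \
        "sshd@20-172.24.4.139:22-172.24.4.1:43504.service"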
Jan 17 12:14:56.616361 containerd[1482]: time="2025-01-17T12:14:56.615731967Z" level=info msg="StopContainer for \"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7\" with timeout 30 (s)" Jan 17 12:14:56.616731 containerd[1482]: time="2025-01-17T12:14:56.616668756Z" level=info msg="Stop container \"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7\" with signal terminated" Jan 17 12:14:56.626773 containerd[1482]: time="2025-01-17T12:14:56.626328509Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:14:56.627552 kubelet[2695]: E0117 12:14:56.627349 2695 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 12:14:56.634194 containerd[1482]: time="2025-01-17T12:14:56.634066693Z" level=info msg="StopContainer for \"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b\" with timeout 2 (s)" Jan 17 12:14:56.634750 containerd[1482]: time="2025-01-17T12:14:56.634382566Z" level=info msg="Stop container \"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b\" with signal terminated" Jan 17 12:14:56.635082 systemd[1]: cri-containerd-f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7.scope: Deactivated successfully. Jan 17 12:14:56.649695 systemd-networkd[1394]: lxc_health: Link DOWN Jan 17 12:14:56.649702 systemd-networkd[1394]: lxc_health: Lost carrier Jan 17 12:14:56.666782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7-rootfs.mount: Deactivated successfully. Jan 17 12:14:56.675049 systemd[1]: cri-containerd-5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b.scope: Deactivated successfully. Jan 17 12:14:56.675477 systemd[1]: cri-containerd-5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b.scope: Consumed 8.428s CPU time. Jan 17 12:14:56.684980 containerd[1482]: time="2025-01-17T12:14:56.684780061Z" level=info msg="shim disconnected" id=f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7 namespace=k8s.io Jan 17 12:14:56.685407 containerd[1482]: time="2025-01-17T12:14:56.684957454Z" level=warning msg="cleaning up after shim disconnected" id=f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7 namespace=k8s.io Jan 17 12:14:56.685407 containerd[1482]: time="2025-01-17T12:14:56.685291762Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:14:56.708158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b-rootfs.mount: Deactivated successfully. 
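The CNI reload error above marks the start of the Cilium teardown: containerd watches /etc/cni/net.d, and once 05-cilium.conf is removed the reload finds no network config at all, so the runtime reports the CNI plugin as uninitialized and kubelet surfaces NetworkReady=false (the node is eventually marked NotReady at 12:15:04 below). A polling stand-in for that readiness rule; containerd really uses an fsnotify watch, and the extension list follows libcni's conventions:

    import os

    # CNI counts as ready only while the conf dir holds at least one
    # network config file (.conf, .conflist, or .json).
    def cni_ready(conf_dir: str = "/etc/cni/net.d") -> bool:
        try:
            return any(n.endswith((".conf", ".conflist", ".json"))
                       for n in os.listdir(conf_dir))
        except FileNotFoundError:
            return False

    print("NetworkReady =", cni_ready())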
Jan 17 12:14:56.710850 containerd[1482]: time="2025-01-17T12:14:56.710781145Z" level=info msg="shim disconnected" id=5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b namespace=k8s.io Jan 17 12:14:56.711140 containerd[1482]: time="2025-01-17T12:14:56.710961334Z" level=warning msg="cleaning up after shim disconnected" id=5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b namespace=k8s.io Jan 17 12:14:56.711140 containerd[1482]: time="2025-01-17T12:14:56.711024813Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:14:56.718875 containerd[1482]: time="2025-01-17T12:14:56.718834461Z" level=info msg="StopContainer for \"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7\" returns successfully" Jan 17 12:14:56.719616 containerd[1482]: time="2025-01-17T12:14:56.719586984Z" level=info msg="StopPodSandbox for \"c681f5a5e68ef73d70d837b809846b7c018be6858f8d9453318449d9838e1ea6\"" Jan 17 12:14:56.719788 containerd[1482]: time="2025-01-17T12:14:56.719743318Z" level=info msg="Container to stop \"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:14:56.722974 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c681f5a5e68ef73d70d837b809846b7c018be6858f8d9453318449d9838e1ea6-shm.mount: Deactivated successfully. Jan 17 12:14:56.730867 containerd[1482]: time="2025-01-17T12:14:56.729759790Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:14:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:14:56.733483 systemd[1]: cri-containerd-c681f5a5e68ef73d70d837b809846b7c018be6858f8d9453318449d9838e1ea6.scope: Deactivated successfully. 
Jan 17 12:14:56.736755 containerd[1482]: time="2025-01-17T12:14:56.736708662Z" level=info msg="StopContainer for \"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b\" returns successfully" Jan 17 12:14:56.738497 containerd[1482]: time="2025-01-17T12:14:56.738448108Z" level=info msg="StopPodSandbox for \"295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117\"" Jan 17 12:14:56.738635 containerd[1482]: time="2025-01-17T12:14:56.738613019Z" level=info msg="Container to stop \"607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:14:56.738706 containerd[1482]: time="2025-01-17T12:14:56.738689873Z" level=info msg="Container to stop \"a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:14:56.738771 containerd[1482]: time="2025-01-17T12:14:56.738755526Z" level=info msg="Container to stop \"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:14:56.738837 containerd[1482]: time="2025-01-17T12:14:56.738821219Z" level=info msg="Container to stop \"058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:14:56.738900 containerd[1482]: time="2025-01-17T12:14:56.738885079Z" level=info msg="Container to stop \"fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:14:56.741261 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117-shm.mount: Deactivated successfully. Jan 17 12:14:56.748811 systemd[1]: cri-containerd-295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117.scope: Deactivated successfully. 
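The ordering in this stretch is the CRI teardown contract: each container is stopped first (SIGTERM with a grace period, 30 s for the operator container and 2 s for the cilium-agent above), and only afterwards is StopPodSandbox issued, which asserts for every container in the sandbox that it is no longer running; the repeated "must be in running or unknown state, current state CONTAINER_EXITED" lines are that assertion passing. A toy model of the sequence with truncated IDs, mirroring the log rather than containerd's implementation:

    # Containers are stopped (signal plus timeout) before their sandbox.
    def stop_container(cid: str, timeout_s: int) -> str:
        print(f'StopContainer for "{cid}" with timeout {timeout_s} (s)')
        print(f'Stop container "{cid}" with signal terminated')
        return "CONTAINER_EXITED"

    def stop_pod_sandbox(sandbox: str, states: dict) -> None:
        for cid, state in states.items():
            print(f'Container to stop "{cid}" must be in running or unknown '
                  f'state, current state "{state}"')
        print(f'TearDown network for sandbox "{sandbox}"')

    states = {"5c7f50ac...": stop_container("5c7f50ac...", 2)}
    stop_pod_sandbox("295b2450...", states)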
Jan 17 12:14:56.787055 containerd[1482]: time="2025-01-17T12:14:56.786777630Z" level=info msg="shim disconnected" id=c681f5a5e68ef73d70d837b809846b7c018be6858f8d9453318449d9838e1ea6 namespace=k8s.io Jan 17 12:14:56.787055 containerd[1482]: time="2025-01-17T12:14:56.787047807Z" level=warning msg="cleaning up after shim disconnected" id=c681f5a5e68ef73d70d837b809846b7c018be6858f8d9453318449d9838e1ea6 namespace=k8s.io Jan 17 12:14:56.787055 containerd[1482]: time="2025-01-17T12:14:56.787060420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:14:56.787650 containerd[1482]: time="2025-01-17T12:14:56.787599313Z" level=info msg="shim disconnected" id=295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117 namespace=k8s.io Jan 17 12:14:56.787650 containerd[1482]: time="2025-01-17T12:14:56.787646772Z" level=warning msg="cleaning up after shim disconnected" id=295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117 namespace=k8s.io Jan 17 12:14:56.787732 containerd[1482]: time="2025-01-17T12:14:56.787657061Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:14:56.802384 containerd[1482]: time="2025-01-17T12:14:56.802308930Z" level=info msg="TearDown network for sandbox \"295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117\" successfully" Jan 17 12:14:56.802384 containerd[1482]: time="2025-01-17T12:14:56.802351400Z" level=info msg="StopPodSandbox for \"295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117\" returns successfully" Jan 17 12:14:56.810139 containerd[1482]: time="2025-01-17T12:14:56.810032877Z" level=info msg="TearDown network for sandbox \"c681f5a5e68ef73d70d837b809846b7c018be6858f8d9453318449d9838e1ea6\" successfully" Jan 17 12:14:56.810139 containerd[1482]: time="2025-01-17T12:14:56.810064126Z" level=info msg="StopPodSandbox for \"c681f5a5e68ef73d70d837b809846b7c018be6858f8d9453318449d9838e1ea6\" returns successfully" Jan 17 12:14:56.977494 kubelet[2695]: I0117 12:14:56.976342 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-host-proc-sys-net\") pod \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " Jan 17 12:14:56.977494 kubelet[2695]: I0117 12:14:56.976438 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eccf5032-d2ef-4407-a870-5047d0da5d97-cilium-config-path\") pod \"eccf5032-d2ef-4407-a870-5047d0da5d97\" (UID: \"eccf5032-d2ef-4407-a870-5047d0da5d97\") " Jan 17 12:14:56.977494 kubelet[2695]: I0117 12:14:56.976491 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0453c5e-fa49-4d52-b94e-340e3fb505d0-hubble-tls\") pod \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " Jan 17 12:14:56.977494 kubelet[2695]: I0117 12:14:56.976533 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cilium-run\") pod \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " Jan 17 12:14:56.977494 kubelet[2695]: I0117 12:14:56.976549 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-host-proc-sys-net" 
(OuterVolumeSpecName: "host-proc-sys-net") pod "b0453c5e-fa49-4d52-b94e-340e3fb505d0" (UID: "b0453c5e-fa49-4d52-b94e-340e3fb505d0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:14:56.977494 kubelet[2695]: I0117 12:14:56.976577 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hg29\" (UniqueName: \"kubernetes.io/projected/eccf5032-d2ef-4407-a870-5047d0da5d97-kube-api-access-9hg29\") pod \"eccf5032-d2ef-4407-a870-5047d0da5d97\" (UID: \"eccf5032-d2ef-4407-a870-5047d0da5d97\") " Jan 17 12:14:56.978137 kubelet[2695]: I0117 12:14:56.976717 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z47jr\" (UniqueName: \"kubernetes.io/projected/b0453c5e-fa49-4d52-b94e-340e3fb505d0-kube-api-access-z47jr\") pod \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " Jan 17 12:14:56.978137 kubelet[2695]: I0117 12:14:56.976768 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cni-path\") pod \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " Jan 17 12:14:56.978137 kubelet[2695]: I0117 12:14:56.976818 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0453c5e-fa49-4d52-b94e-340e3fb505d0-clustermesh-secrets\") pod \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " Jan 17 12:14:56.978137 kubelet[2695]: I0117 12:14:56.976857 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-lib-modules\") pod \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " Jan 17 12:14:56.978137 kubelet[2695]: I0117 12:14:56.976899 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-xtables-lock\") pod \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " Jan 17 12:14:56.978137 kubelet[2695]: I0117 12:14:56.976962 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cilium-config-path\") pod \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " Jan 17 12:14:56.978662 kubelet[2695]: I0117 12:14:56.977014 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-hostproc\") pod \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " Jan 17 12:14:56.978662 kubelet[2695]: I0117 12:14:56.977062 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-host-proc-sys-kernel\") pod \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " Jan 17 12:14:56.978662 kubelet[2695]: I0117 12:14:56.977102 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-bpf-maps\") pod \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " Jan 17 12:14:56.978662 kubelet[2695]: I0117 12:14:56.977194 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cilium-cgroup\") pod \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " Jan 17 12:14:56.978662 kubelet[2695]: I0117 12:14:56.977314 2695 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-etc-cni-netd\") pod \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\" (UID: \"b0453c5e-fa49-4d52-b94e-340e3fb505d0\") " Jan 17 12:14:56.978662 kubelet[2695]: I0117 12:14:56.977402 2695 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-host-proc-sys-net\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:56.980417 kubelet[2695]: I0117 12:14:56.977453 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b0453c5e-fa49-4d52-b94e-340e3fb505d0" (UID: "b0453c5e-fa49-4d52-b94e-340e3fb505d0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:14:56.986126 kubelet[2695]: I0117 12:14:56.985341 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cni-path" (OuterVolumeSpecName: "cni-path") pod "b0453c5e-fa49-4d52-b94e-340e3fb505d0" (UID: "b0453c5e-fa49-4d52-b94e-340e3fb505d0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:14:56.989149 kubelet[2695]: I0117 12:14:56.987438 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b0453c5e-fa49-4d52-b94e-340e3fb505d0" (UID: "b0453c5e-fa49-4d52-b94e-340e3fb505d0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:14:56.989514 kubelet[2695]: I0117 12:14:56.987675 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eccf5032-d2ef-4407-a870-5047d0da5d97-kube-api-access-9hg29" (OuterVolumeSpecName: "kube-api-access-9hg29") pod "eccf5032-d2ef-4407-a870-5047d0da5d97" (UID: "eccf5032-d2ef-4407-a870-5047d0da5d97"). InnerVolumeSpecName "kube-api-access-9hg29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:14:56.991322 kubelet[2695]: I0117 12:14:56.990728 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-hostproc" (OuterVolumeSpecName: "hostproc") pod "b0453c5e-fa49-4d52-b94e-340e3fb505d0" (UID: "b0453c5e-fa49-4d52-b94e-340e3fb505d0"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:14:56.991322 kubelet[2695]: I0117 12:14:56.990828 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b0453c5e-fa49-4d52-b94e-340e3fb505d0" (UID: "b0453c5e-fa49-4d52-b94e-340e3fb505d0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:14:56.991322 kubelet[2695]: I0117 12:14:56.990891 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b0453c5e-fa49-4d52-b94e-340e3fb505d0" (UID: "b0453c5e-fa49-4d52-b94e-340e3fb505d0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:14:56.991322 kubelet[2695]: I0117 12:14:56.990931 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b0453c5e-fa49-4d52-b94e-340e3fb505d0" (UID: "b0453c5e-fa49-4d52-b94e-340e3fb505d0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:14:56.992426 kubelet[2695]: I0117 12:14:56.992367 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b0453c5e-fa49-4d52-b94e-340e3fb505d0" (UID: "b0453c5e-fa49-4d52-b94e-340e3fb505d0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:14:56.992658 kubelet[2695]: I0117 12:14:56.992620 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b0453c5e-fa49-4d52-b94e-340e3fb505d0" (UID: "b0453c5e-fa49-4d52-b94e-340e3fb505d0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:14:56.999332 kubelet[2695]: I0117 12:14:56.998927 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0453c5e-fa49-4d52-b94e-340e3fb505d0-kube-api-access-z47jr" (OuterVolumeSpecName: "kube-api-access-z47jr") pod "b0453c5e-fa49-4d52-b94e-340e3fb505d0" (UID: "b0453c5e-fa49-4d52-b94e-340e3fb505d0"). InnerVolumeSpecName "kube-api-access-z47jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:14:57.000369 kubelet[2695]: I0117 12:14:57.000187 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0453c5e-fa49-4d52-b94e-340e3fb505d0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b0453c5e-fa49-4d52-b94e-340e3fb505d0" (UID: "b0453c5e-fa49-4d52-b94e-340e3fb505d0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:14:57.001942 kubelet[2695]: I0117 12:14:57.001855 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eccf5032-d2ef-4407-a870-5047d0da5d97-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eccf5032-d2ef-4407-a870-5047d0da5d97" (UID: "eccf5032-d2ef-4407-a870-5047d0da5d97"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:14:57.005045 kubelet[2695]: I0117 12:14:57.004947 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0453c5e-fa49-4d52-b94e-340e3fb505d0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b0453c5e-fa49-4d52-b94e-340e3fb505d0" (UID: "b0453c5e-fa49-4d52-b94e-340e3fb505d0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:14:57.007871 kubelet[2695]: I0117 12:14:57.007794 2695 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b0453c5e-fa49-4d52-b94e-340e3fb505d0" (UID: "b0453c5e-fa49-4d52-b94e-340e3fb505d0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:14:57.042850 kubelet[2695]: I0117 12:14:57.042607 2695 scope.go:117] "RemoveContainer" containerID="f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7" Jan 17 12:14:57.049028 containerd[1482]: time="2025-01-17T12:14:57.048169807Z" level=info msg="RemoveContainer for \"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7\"" Jan 17 12:14:57.062730 systemd[1]: Removed slice kubepods-besteffort-podeccf5032_d2ef_4407_a870_5047d0da5d97.slice - libcontainer container kubepods-besteffort-podeccf5032_d2ef_4407_a870_5047d0da5d97.slice. Jan 17 12:14:57.104815 kubelet[2695]: I0117 12:14:57.078060 2695 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cilium-run\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.104815 kubelet[2695]: I0117 12:14:57.078113 2695 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eccf5032-d2ef-4407-a870-5047d0da5d97-cilium-config-path\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.104815 kubelet[2695]: I0117 12:14:57.078141 2695 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0453c5e-fa49-4d52-b94e-340e3fb505d0-hubble-tls\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.104815 kubelet[2695]: I0117 12:14:57.078167 2695 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-lib-modules\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.104815 kubelet[2695]: I0117 12:14:57.078192 2695 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9hg29\" (UniqueName: \"kubernetes.io/projected/eccf5032-d2ef-4407-a870-5047d0da5d97-kube-api-access-9hg29\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.104815 kubelet[2695]: I0117 12:14:57.078216 2695 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-z47jr\" (UniqueName: \"kubernetes.io/projected/b0453c5e-fa49-4d52-b94e-340e3fb505d0-kube-api-access-z47jr\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.104815 kubelet[2695]: I0117 12:14:57.078286 2695 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cni-path\") on node 
\"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.105983 kubelet[2695]: I0117 12:14:57.078316 2695 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0453c5e-fa49-4d52-b94e-340e3fb505d0-clustermesh-secrets\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.105983 kubelet[2695]: I0117 12:14:57.078339 2695 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-xtables-lock\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.105983 kubelet[2695]: I0117 12:14:57.078363 2695 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-host-proc-sys-kernel\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.105983 kubelet[2695]: I0117 12:14:57.078391 2695 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cilium-config-path\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.105983 kubelet[2695]: I0117 12:14:57.078413 2695 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-hostproc\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.105983 kubelet[2695]: I0117 12:14:57.078437 2695 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-bpf-maps\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.105983 kubelet[2695]: I0117 12:14:57.078458 2695 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-cilium-cgroup\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.106573 kubelet[2695]: I0117 12:14:57.078483 2695 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0453c5e-fa49-4d52-b94e-340e3fb505d0-etc-cni-netd\") on node \"ci-4081-3-0-e-f0dad07f0f.novalocal\" DevicePath \"\"" Jan 17 12:14:57.114925 systemd[1]: Removed slice kubepods-burstable-podb0453c5e_fa49_4d52_b94e_340e3fb505d0.slice - libcontainer container kubepods-burstable-podb0453c5e_fa49_4d52_b94e_340e3fb505d0.slice. Jan 17 12:14:57.115175 systemd[1]: kubepods-burstable-podb0453c5e_fa49_4d52_b94e_340e3fb505d0.slice: Consumed 8.517s CPU time. 
Jan 17 12:14:57.357303 containerd[1482]: time="2025-01-17T12:14:57.355880559Z" level=info msg="RemoveContainer for \"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7\" returns successfully" Jan 17 12:14:57.357595 kubelet[2695]: I0117 12:14:57.357538 2695 scope.go:117] "RemoveContainer" containerID="f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7" Jan 17 12:14:57.359477 containerd[1482]: time="2025-01-17T12:14:57.359394238Z" level=error msg="ContainerStatus for \"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7\": not found" Jan 17 12:14:57.359757 kubelet[2695]: E0117 12:14:57.359685 2695 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7\": not found" containerID="f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7" Jan 17 12:14:57.359923 kubelet[2695]: I0117 12:14:57.359753 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7"} err="failed to get container status \"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3e6d5b1b899ec563acfc83aa2bc732bd1beb0cb3b156c35c45f39927031aaa7\": not found" Jan 17 12:14:57.359923 kubelet[2695]: I0117 12:14:57.359921 2695 scope.go:117] "RemoveContainer" containerID="5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b" Jan 17 12:14:57.368542 containerd[1482]: time="2025-01-17T12:14:57.368459323Z" level=info msg="RemoveContainer for \"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b\"" Jan 17 12:14:57.401386 containerd[1482]: time="2025-01-17T12:14:57.401299191Z" level=info msg="RemoveContainer for \"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b\" returns successfully" Jan 17 12:14:57.402098 kubelet[2695]: I0117 12:14:57.402021 2695 scope.go:117] "RemoveContainer" containerID="fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0" Jan 17 12:14:57.405806 containerd[1482]: time="2025-01-17T12:14:57.405514418Z" level=info msg="RemoveContainer for \"fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0\"" Jan 17 12:14:57.449947 containerd[1482]: time="2025-01-17T12:14:57.449758614Z" level=info msg="RemoveContainer for \"fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0\" returns successfully" Jan 17 12:14:57.450475 kubelet[2695]: I0117 12:14:57.450208 2695 scope.go:117] "RemoveContainer" containerID="a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c" Jan 17 12:14:57.452993 containerd[1482]: time="2025-01-17T12:14:57.452910925Z" level=info msg="RemoveContainer for \"a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c\"" Jan 17 12:14:57.491537 containerd[1482]: time="2025-01-17T12:14:57.491421142Z" level=info msg="RemoveContainer for \"a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c\" returns successfully" Jan 17 12:14:57.492321 kubelet[2695]: I0117 12:14:57.491992 2695 scope.go:117] "RemoveContainer" containerID="058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9" Jan 17 12:14:57.496921 containerd[1482]: 
time="2025-01-17T12:14:57.495766814Z" level=info msg="RemoveContainer for \"058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9\"" Jan 17 12:14:57.516494 kubelet[2695]: I0117 12:14:57.516416 2695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0453c5e-fa49-4d52-b94e-340e3fb505d0" path="/var/lib/kubelet/pods/b0453c5e-fa49-4d52-b94e-340e3fb505d0/volumes" Jan 17 12:14:57.518125 kubelet[2695]: I0117 12:14:57.518060 2695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eccf5032-d2ef-4407-a870-5047d0da5d97" path="/var/lib/kubelet/pods/eccf5032-d2ef-4407-a870-5047d0da5d97/volumes" Jan 17 12:14:57.558323 containerd[1482]: time="2025-01-17T12:14:57.558208578Z" level=info msg="RemoveContainer for \"058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9\" returns successfully" Jan 17 12:14:57.559344 kubelet[2695]: I0117 12:14:57.558922 2695 scope.go:117] "RemoveContainer" containerID="607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d" Jan 17 12:14:57.561848 containerd[1482]: time="2025-01-17T12:14:57.561616378Z" level=info msg="RemoveContainer for \"607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d\"" Jan 17 12:14:57.606584 containerd[1482]: time="2025-01-17T12:14:57.606380531Z" level=info msg="RemoveContainer for \"607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d\" returns successfully" Jan 17 12:14:57.607456 kubelet[2695]: I0117 12:14:57.607070 2695 scope.go:117] "RemoveContainer" containerID="5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b" Jan 17 12:14:57.611304 containerd[1482]: time="2025-01-17T12:14:57.607658280Z" level=error msg="ContainerStatus for \"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b\": not found" Jan 17 12:14:57.611705 kubelet[2695]: E0117 12:14:57.611516 2695 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b\": not found" containerID="5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b" Jan 17 12:14:57.612069 kubelet[2695]: I0117 12:14:57.611646 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b"} err="failed to get container status \"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c7f50acd989aa9153bf526930228fc843604ecef003288cbba3f69af5109d8b\": not found" Jan 17 12:14:57.612069 kubelet[2695]: I0117 12:14:57.611909 2695 scope.go:117] "RemoveContainer" containerID="fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0" Jan 17 12:14:57.612947 containerd[1482]: time="2025-01-17T12:14:57.612855913Z" level=error msg="ContainerStatus for \"fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0\": not found" Jan 17 12:14:57.613981 kubelet[2695]: E0117 12:14:57.613905 2695 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred 
when try to find container \"fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0\": not found" containerID="fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0" Jan 17 12:14:57.614400 kubelet[2695]: I0117 12:14:57.613992 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0"} err="failed to get container status \"fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdb753f72000f7ef923823f73884acb2f48dfd22c2ac45678f874dcdf3e875c0\": not found" Jan 17 12:14:57.614400 kubelet[2695]: I0117 12:14:57.614049 2695 scope.go:117] "RemoveContainer" containerID="a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c" Jan 17 12:14:57.615594 containerd[1482]: time="2025-01-17T12:14:57.615405491Z" level=error msg="ContainerStatus for \"a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c\": not found" Jan 17 12:14:57.615959 kubelet[2695]: E0117 12:14:57.615869 2695 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c\": not found" containerID="a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c" Jan 17 12:14:57.615959 kubelet[2695]: I0117 12:14:57.615939 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c"} err="failed to get container status \"a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2f1a86cca7ce66f992c5644b77cbf9d9773bf43b0a0c8aa79455174eb56b38c\": not found" Jan 17 12:14:57.615959 kubelet[2695]: I0117 12:14:57.615959 2695 scope.go:117] "RemoveContainer" containerID="058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9" Jan 17 12:14:57.616985 containerd[1482]: time="2025-01-17T12:14:57.616440484Z" level=error msg="ContainerStatus for \"058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9\": not found" Jan 17 12:14:57.616682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c681f5a5e68ef73d70d837b809846b7c018be6858f8d9453318449d9838e1ea6-rootfs.mount: Deactivated successfully. 
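The NotFound bursts here are benign bookkeeping rather than failures: kubelet has already removed each container ("RemoveContainer ... returns successfully"), then re-queries ContainerStatus for IDs it still has cached, containerd answers rpc NotFound because the container is gone, and kubelet logs the error and drops the stale entry (the odd wording "an error occurred when try to find container" is containerd's own message, left uncorrected here). A toy version of that dance, not kubelet's code:

    class NotFound(Exception): ...

    runtime = {"5c7f50ac...": "exited"}   # containerd's view
    cache = set(runtime)                  # kubelet's stale ID cache

    def remove(cid: str) -> None:
        runtime.pop(cid, None)            # RemoveContainer returns successfully

    def status(cid: str) -> str:
        if cid not in runtime:
            raise NotFound(f'an error occurred when try to find container "{cid}"')
        return runtime[cid]

    remove("5c7f50ac...")
    for cid in list(cache):
        try:
            status(cid)
        except NotFound as err:
            print("ContainerStatus failed:", err)
            cache.discard(cid)            # kubelet drops the stale ID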
Jan 17 12:14:57.617822 kubelet[2695]: E0117 12:14:57.616696 2695 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9\": not found" containerID="058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9" Jan 17 12:14:57.617822 kubelet[2695]: I0117 12:14:57.616762 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9"} err="failed to get container status \"058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"058439a0aaeb88cef763b813579ede0792107a909d95eecfda907ce1735832b9\": not found" Jan 17 12:14:57.617822 kubelet[2695]: I0117 12:14:57.616811 2695 scope.go:117] "RemoveContainer" containerID="607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d" Jan 17 12:14:57.616862 systemd[1]: var-lib-kubelet-pods-eccf5032\x2dd2ef\x2d4407\x2da870\x2d5047d0da5d97-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9hg29.mount: Deactivated successfully. Jan 17 12:14:57.617006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-295b245042c83a07144ad7e2fb97663b542dc38678433b6807d174197691a117-rootfs.mount: Deactivated successfully. Jan 17 12:14:57.617080 systemd[1]: var-lib-kubelet-pods-b0453c5e\x2dfa49\x2d4d52\x2db94e\x2d340e3fb505d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz47jr.mount: Deactivated successfully. Jan 17 12:14:57.617150 systemd[1]: var-lib-kubelet-pods-b0453c5e\x2dfa49\x2d4d52\x2db94e\x2d340e3fb505d0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 12:14:57.617258 systemd[1]: var-lib-kubelet-pods-b0453c5e\x2dfa49\x2d4d52\x2db94e\x2d340e3fb505d0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 12:14:57.621327 containerd[1482]: time="2025-01-17T12:14:57.618944097Z" level=error msg="ContainerStatus for \"607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d\": not found" Jan 17 12:14:57.621998 kubelet[2695]: E0117 12:14:57.621912 2695 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d\": not found" containerID="607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d" Jan 17 12:14:57.621998 kubelet[2695]: I0117 12:14:57.621976 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d"} err="failed to get container status \"607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d\": rpc error: code = NotFound desc = an error occurred when try to find container \"607274d3700a002243d3e032ae2ce29770cb24eb917d3867ea8939b138866c3d\": not found" Jan 17 12:14:58.687149 sshd[4260]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:58.703215 systemd[1]: sshd@23-172.24.4.139:22-172.24.4.1:57376.service: Deactivated successfully. Jan 17 12:14:58.710590 systemd[1]: session-26.scope: Deactivated successfully. 
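The \x2d and \x7e runs in the mount-unit names above are systemd unit-name escaping, not corruption: a mount unit is named after its mount path with "/" mapped to "-" and other bytes outside [A-Za-z0-9_.] escaped as \xNN, so each dash inside the pod UID becomes \x2d and the kubernetes.io~projected directory becomes kubernetes.io\x7eprojected. A small reimplementation in the spirit of systemd-escape --path, checked against the hubble-tls unit above (edge cases of the real escaper, such as a leading dot, may differ):

    def systemd_escape_path(path: str) -> str:
        trimmed = path.strip("/") or "/"
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")
            elif ch.isascii() and (ch.isalnum() or ch == "_" or (ch == "." and i > 0)):
                out.append(ch)
            else:
                out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
        return "".join(out)

    p = ("/var/lib/kubelet/pods/b0453c5e-fa49-4d52-b94e-340e3fb505d0"
         "/volumes/kubernetes.io~projected/hubble-tls")
    print(systemd_escape_path(p) + ".mount")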
Jan 17 12:14:58.711868 systemd[1]: session-26.scope: Consumed 1.370s CPU time. Jan 17 12:14:58.716875 systemd-logind[1464]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:14:58.725899 systemd[1]: Started sshd@24-172.24.4.139:22-172.24.4.1:50822.service - OpenSSH per-connection server daemon (172.24.4.1:50822). Jan 17 12:14:58.729987 systemd-logind[1464]: Removed session 26. Jan 17 12:15:00.184188 sshd[4424]: Accepted publickey for core from 172.24.4.1 port 50822 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:15:00.187028 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:00.199065 systemd-logind[1464]: New session 27 of user core. Jan 17 12:15:00.204549 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 12:15:01.350934 kubelet[2695]: I0117 12:15:01.350880 2695 topology_manager.go:215] "Topology Admit Handler" podUID="c56c1634-2bb2-478f-a545-8b80e76f4a97" podNamespace="kube-system" podName="cilium-jddx8" Jan 17 12:15:01.351410 kubelet[2695]: E0117 12:15:01.350957 2695 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0453c5e-fa49-4d52-b94e-340e3fb505d0" containerName="apply-sysctl-overwrites" Jan 17 12:15:01.351410 kubelet[2695]: E0117 12:15:01.350971 2695 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0453c5e-fa49-4d52-b94e-340e3fb505d0" containerName="mount-bpf-fs" Jan 17 12:15:01.351410 kubelet[2695]: E0117 12:15:01.350980 2695 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0453c5e-fa49-4d52-b94e-340e3fb505d0" containerName="cilium-agent" Jan 17 12:15:01.351410 kubelet[2695]: E0117 12:15:01.350990 2695 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eccf5032-d2ef-4407-a870-5047d0da5d97" containerName="cilium-operator" Jan 17 12:15:01.351410 kubelet[2695]: E0117 12:15:01.351002 2695 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0453c5e-fa49-4d52-b94e-340e3fb505d0" containerName="mount-cgroup" Jan 17 12:15:01.351410 kubelet[2695]: E0117 12:15:01.351014 2695 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0453c5e-fa49-4d52-b94e-340e3fb505d0" containerName="clean-cilium-state" Jan 17 12:15:01.351410 kubelet[2695]: I0117 12:15:01.351048 2695 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0453c5e-fa49-4d52-b94e-340e3fb505d0" containerName="cilium-agent" Jan 17 12:15:01.351410 kubelet[2695]: I0117 12:15:01.351058 2695 memory_manager.go:354] "RemoveStaleState removing state" podUID="eccf5032-d2ef-4407-a870-5047d0da5d97" containerName="cilium-operator" Jan 17 12:15:01.360426 systemd[1]: Created slice kubepods-burstable-podc56c1634_2bb2_478f_a545_8b80e76f4a97.slice - libcontainer container kubepods-burstable-podc56c1634_2bb2_478f_a545_8b80e76f4a97.slice. 
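Admission of the replacement pod begins here: the Topology Admit Handler accepts cilium-jddx8, the CPU and memory managers purge the per-container state they still hold for the two removed pods (the RemoveStaleState lines, logged at error level but harmless), and the systemd cgroup driver creates the pod's slice. The slice name is derived from the QoS class plus the pod UID with dashes mapped to underscores, which keeps the UID clear of systemd's \x2d escaping; a reconstruction of that naming, matched against the "Created slice" line above:

    # kubepods-<qos>-pod<uid with '-' -> '_'>.slice
    def pod_slice(uid: str, qos: str) -> str:
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    assert pod_slice("c56c1634-2bb2-478f-a545-8b80e76f4a97", "burstable") == \
        "kubepods-burstable-podc56c1634_2bb2_478f_a545_8b80e76f4a97.slice"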
Jan 17 12:15:01.508008 kubelet[2695]: I0117 12:15:01.507911 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c56c1634-2bb2-478f-a545-8b80e76f4a97-hostproc\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.508008 kubelet[2695]: I0117 12:15:01.507980 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c56c1634-2bb2-478f-a545-8b80e76f4a97-xtables-lock\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.508185 kubelet[2695]: I0117 12:15:01.508018 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c56c1634-2bb2-478f-a545-8b80e76f4a97-host-proc-sys-kernel\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.508185 kubelet[2695]: I0117 12:15:01.508055 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c56c1634-2bb2-478f-a545-8b80e76f4a97-lib-modules\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.508185 kubelet[2695]: I0117 12:15:01.508091 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c56c1634-2bb2-478f-a545-8b80e76f4a97-cilium-config-path\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.508185 kubelet[2695]: I0117 12:15:01.508128 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c56c1634-2bb2-478f-a545-8b80e76f4a97-cilium-ipsec-secrets\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.508185 kubelet[2695]: I0117 12:15:01.508165 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c56c1634-2bb2-478f-a545-8b80e76f4a97-host-proc-sys-net\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.508341 kubelet[2695]: I0117 12:15:01.508204 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c56c1634-2bb2-478f-a545-8b80e76f4a97-cni-path\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.508341 kubelet[2695]: I0117 12:15:01.508269 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c56c1634-2bb2-478f-a545-8b80e76f4a97-cilium-run\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.508341 kubelet[2695]: I0117 12:15:01.508332 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/c56c1634-2bb2-478f-a545-8b80e76f4a97-clustermesh-secrets\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.508420 kubelet[2695]: I0117 12:15:01.508369 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnf4n\" (UniqueName: \"kubernetes.io/projected/c56c1634-2bb2-478f-a545-8b80e76f4a97-kube-api-access-rnf4n\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.508420 kubelet[2695]: I0117 12:15:01.508407 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c56c1634-2bb2-478f-a545-8b80e76f4a97-cilium-cgroup\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.508476 kubelet[2695]: I0117 12:15:01.508441 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c56c1634-2bb2-478f-a545-8b80e76f4a97-etc-cni-netd\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.508503 kubelet[2695]: I0117 12:15:01.508475 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c56c1634-2bb2-478f-a545-8b80e76f4a97-bpf-maps\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.508531 kubelet[2695]: I0117 12:15:01.508508 2695 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c56c1634-2bb2-478f-a545-8b80e76f4a97-hubble-tls\") pod \"cilium-jddx8\" (UID: \"c56c1634-2bb2-478f-a545-8b80e76f4a97\") " pod="kube-system/cilium-jddx8" Jan 17 12:15:01.557941 sshd[4424]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:01.569383 systemd[1]: sshd@24-172.24.4.139:22-172.24.4.1:50822.service: Deactivated successfully. Jan 17 12:15:01.572231 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 12:15:01.577353 systemd-logind[1464]: Session 27 logged out. Waiting for processes to exit. Jan 17 12:15:01.587549 systemd[1]: Started sshd@25-172.24.4.139:22-172.24.4.1:50838.service - OpenSSH per-connection server daemon (172.24.4.1:50838). Jan 17 12:15:01.593467 systemd-logind[1464]: Removed session 27. Jan 17 12:15:01.636974 kubelet[2695]: E0117 12:15:01.632125 2695 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 12:15:01.966469 containerd[1482]: time="2025-01-17T12:15:01.966091566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jddx8,Uid:c56c1634-2bb2-478f-a545-8b80e76f4a97,Namespace:kube-system,Attempt:0,}" Jan 17 12:15:02.012655 containerd[1482]: time="2025-01-17T12:15:02.011972760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:15:02.012655 containerd[1482]: time="2025-01-17T12:15:02.012087645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:15:02.012655 containerd[1482]: time="2025-01-17T12:15:02.012127731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:02.012655 containerd[1482]: time="2025-01-17T12:15:02.012376448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:02.049417 systemd[1]: Started cri-containerd-0739a20192696d1fff99c1f5f6e7b627c1fbc9928d4d30c115313eefc8b0c773.scope - libcontainer container 0739a20192696d1fff99c1f5f6e7b627c1fbc9928d4d30c115313eefc8b0c773. Jan 17 12:15:02.078075 containerd[1482]: time="2025-01-17T12:15:02.077951094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jddx8,Uid:c56c1634-2bb2-478f-a545-8b80e76f4a97,Namespace:kube-system,Attempt:0,} returns sandbox id \"0739a20192696d1fff99c1f5f6e7b627c1fbc9928d4d30c115313eefc8b0c773\"" Jan 17 12:15:02.085805 containerd[1482]: time="2025-01-17T12:15:02.085511402Z" level=info msg="CreateContainer within sandbox \"0739a20192696d1fff99c1f5f6e7b627c1fbc9928d4d30c115313eefc8b0c773\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:15:02.099030 containerd[1482]: time="2025-01-17T12:15:02.098992780Z" level=info msg="CreateContainer within sandbox \"0739a20192696d1fff99c1f5f6e7b627c1fbc9928d4d30c115313eefc8b0c773\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"291895d267057c1ffea0581123741aebeb913a2abcc703607fcfbc3808f7f45b\"" Jan 17 12:15:02.100759 containerd[1482]: time="2025-01-17T12:15:02.100714072Z" level=info msg="StartContainer for \"291895d267057c1ffea0581123741aebeb913a2abcc703607fcfbc3808f7f45b\"" Jan 17 12:15:02.124385 systemd[1]: Started cri-containerd-291895d267057c1ffea0581123741aebeb913a2abcc703607fcfbc3808f7f45b.scope - libcontainer container 291895d267057c1ffea0581123741aebeb913a2abcc703607fcfbc3808f7f45b. Jan 17 12:15:02.153605 containerd[1482]: time="2025-01-17T12:15:02.153550608Z" level=info msg="StartContainer for \"291895d267057c1ffea0581123741aebeb913a2abcc703607fcfbc3808f7f45b\" returns successfully" Jan 17 12:15:02.160941 systemd[1]: cri-containerd-291895d267057c1ffea0581123741aebeb913a2abcc703607fcfbc3808f7f45b.scope: Deactivated successfully. Jan 17 12:15:02.202668 containerd[1482]: time="2025-01-17T12:15:02.202583078Z" level=info msg="shim disconnected" id=291895d267057c1ffea0581123741aebeb913a2abcc703607fcfbc3808f7f45b namespace=k8s.io Jan 17 12:15:02.202992 containerd[1482]: time="2025-01-17T12:15:02.202851402Z" level=warning msg="cleaning up after shim disconnected" id=291895d267057c1ffea0581123741aebeb913a2abcc703607fcfbc3808f7f45b namespace=k8s.io Jan 17 12:15:02.202992 containerd[1482]: time="2025-01-17T12:15:02.202869736Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:15:02.881962 sshd[4436]: Accepted publickey for core from 172.24.4.1 port 50838 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:15:02.885296 sshd[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:02.895476 systemd-logind[1464]: New session 28 of user core. Jan 17 12:15:02.901545 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 17 12:15:03.093288 containerd[1482]: time="2025-01-17T12:15:03.092632422Z" level=info msg="CreateContainer within sandbox \"0739a20192696d1fff99c1f5f6e7b627c1fbc9928d4d30c115313eefc8b0c773\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:15:03.125044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4184197930.mount: Deactivated successfully. Jan 17 12:15:03.126349 containerd[1482]: time="2025-01-17T12:15:03.126175022Z" level=info msg="CreateContainer within sandbox \"0739a20192696d1fff99c1f5f6e7b627c1fbc9928d4d30c115313eefc8b0c773\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5edd7ed59337dcfed656a31ef9a0f676e5f60ac358eb81175582f996664bc9d7\"" Jan 17 12:15:03.133140 containerd[1482]: time="2025-01-17T12:15:03.128646824Z" level=info msg="StartContainer for \"5edd7ed59337dcfed656a31ef9a0f676e5f60ac358eb81175582f996664bc9d7\"" Jan 17 12:15:03.189399 systemd[1]: Started cri-containerd-5edd7ed59337dcfed656a31ef9a0f676e5f60ac358eb81175582f996664bc9d7.scope - libcontainer container 5edd7ed59337dcfed656a31ef9a0f676e5f60ac358eb81175582f996664bc9d7. Jan 17 12:15:03.220729 containerd[1482]: time="2025-01-17T12:15:03.220609453Z" level=info msg="StartContainer for \"5edd7ed59337dcfed656a31ef9a0f676e5f60ac358eb81175582f996664bc9d7\" returns successfully" Jan 17 12:15:03.226694 systemd[1]: cri-containerd-5edd7ed59337dcfed656a31ef9a0f676e5f60ac358eb81175582f996664bc9d7.scope: Deactivated successfully. Jan 17 12:15:03.257604 containerd[1482]: time="2025-01-17T12:15:03.257395043Z" level=info msg="shim disconnected" id=5edd7ed59337dcfed656a31ef9a0f676e5f60ac358eb81175582f996664bc9d7 namespace=k8s.io Jan 17 12:15:03.257604 containerd[1482]: time="2025-01-17T12:15:03.257492016Z" level=warning msg="cleaning up after shim disconnected" id=5edd7ed59337dcfed656a31ef9a0f676e5f60ac358eb81175582f996664bc9d7 namespace=k8s.io Jan 17 12:15:03.257604 containerd[1482]: time="2025-01-17T12:15:03.257514348Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:15:03.478218 sshd[4436]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:03.488143 systemd[1]: sshd@25-172.24.4.139:22-172.24.4.1:50838.service: Deactivated successfully. Jan 17 12:15:03.491213 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 12:15:03.492500 systemd-logind[1464]: Session 28 logged out. Waiting for processes to exit. Jan 17 12:15:03.502020 systemd[1]: Started sshd@26-172.24.4.139:22-172.24.4.1:50846.service - OpenSSH per-connection server daemon (172.24.4.1:50846). Jan 17 12:15:03.505992 systemd-logind[1464]: Removed session 28. Jan 17 12:15:03.630714 systemd[1]: run-containerd-runc-k8s.io-5edd7ed59337dcfed656a31ef9a0f676e5f60ac358eb81175582f996664bc9d7-runc.MAbQNp.mount: Deactivated successfully. Jan 17 12:15:03.630843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5edd7ed59337dcfed656a31ef9a0f676e5f60ac358eb81175582f996664bc9d7-rootfs.mount: Deactivated successfully. 
Jan 17 12:15:04.107663 containerd[1482]: time="2025-01-17T12:15:04.107561051Z" level=info msg="CreateContainer within sandbox \"0739a20192696d1fff99c1f5f6e7b627c1fbc9928d4d30c115313eefc8b0c773\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 12:15:04.150887 containerd[1482]: time="2025-01-17T12:15:04.150732813Z" level=info msg="CreateContainer within sandbox \"0739a20192696d1fff99c1f5f6e7b627c1fbc9928d4d30c115313eefc8b0c773\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f0721382d8e1fee153c70b176589201cbc41221476a20e2dbd929deb8e86e751\""
Jan 17 12:15:04.153488 containerd[1482]: time="2025-01-17T12:15:04.153150504Z" level=info msg="StartContainer for \"f0721382d8e1fee153c70b176589201cbc41221476a20e2dbd929deb8e86e751\""
Jan 17 12:15:04.196406 systemd[1]: Started cri-containerd-f0721382d8e1fee153c70b176589201cbc41221476a20e2dbd929deb8e86e751.scope - libcontainer container f0721382d8e1fee153c70b176589201cbc41221476a20e2dbd929deb8e86e751.
Jan 17 12:15:04.231618 containerd[1482]: time="2025-01-17T12:15:04.230076533Z" level=info msg="StartContainer for \"f0721382d8e1fee153c70b176589201cbc41221476a20e2dbd929deb8e86e751\" returns successfully"
Jan 17 12:15:04.230359 systemd[1]: cri-containerd-f0721382d8e1fee153c70b176589201cbc41221476a20e2dbd929deb8e86e751.scope: Deactivated successfully.
Jan 17 12:15:04.261759 containerd[1482]: time="2025-01-17T12:15:04.261669691Z" level=info msg="shim disconnected" id=f0721382d8e1fee153c70b176589201cbc41221476a20e2dbd929deb8e86e751 namespace=k8s.io
Jan 17 12:15:04.261759 containerd[1482]: time="2025-01-17T12:15:04.261744973Z" level=warning msg="cleaning up after shim disconnected" id=f0721382d8e1fee153c70b176589201cbc41221476a20e2dbd929deb8e86e751 namespace=k8s.io
Jan 17 12:15:04.261759 containerd[1482]: time="2025-01-17T12:15:04.261756444Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:15:04.440305 kubelet[2695]: I0117 12:15:04.440000 2695 setters.go:580] "Node became not ready" node="ci-4081-3-0-e-f0dad07f0f.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-17T12:15:04Z","lastTransitionTime":"2025-01-17T12:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 17 12:15:04.545105 sshd[4609]: Accepted publickey for core from 172.24.4.1 port 50846 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E
Jan 17 12:15:04.548190 sshd[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:15:04.559132 systemd-logind[1464]: New session 29 of user core.
Jan 17 12:15:04.570593 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 17 12:15:04.632860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0721382d8e1fee153c70b176589201cbc41221476a20e2dbd929deb8e86e751-rootfs.mount: Deactivated successfully.
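The setters.go:580 entry records the kubelet flipping the node's Ready condition to False with reason KubeletNotReady: the Cilium init containers are still running, so no CNI plugin has initialized yet. A sketch of reading that same condition through client-go follows; the node name is taken from the log, while the kubeconfig path is an assumption.

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig location is an assumption; any admin kubeconfig works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        node, err := cs.CoreV1().Nodes().Get(context.Background(),
            "ci-4081-3-0-e-f0dad07f0f.novalocal", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }

        // This is the condition the log shows being set to status=False with
        // reason KubeletNotReady until the CNI finishes initializing.
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s msg=%s\n", c.Status, c.Reason, c.Message)
            }
        }
    }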
Jan 17 12:15:05.110500 containerd[1482]: time="2025-01-17T12:15:05.110406048Z" level=info msg="CreateContainer within sandbox \"0739a20192696d1fff99c1f5f6e7b627c1fbc9928d4d30c115313eefc8b0c773\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 12:15:05.264934 containerd[1482]: time="2025-01-17T12:15:05.264807915Z" level=info msg="CreateContainer within sandbox \"0739a20192696d1fff99c1f5f6e7b627c1fbc9928d4d30c115313eefc8b0c773\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3c230daaae1405f6198e2dc0f59da110a927b93fa892bfde2f83d2e9bea393f2\""
Jan 17 12:15:05.268262 containerd[1482]: time="2025-01-17T12:15:05.265877072Z" level=info msg="StartContainer for \"3c230daaae1405f6198e2dc0f59da110a927b93fa892bfde2f83d2e9bea393f2\""
Jan 17 12:15:05.320408 systemd[1]: Started cri-containerd-3c230daaae1405f6198e2dc0f59da110a927b93fa892bfde2f83d2e9bea393f2.scope - libcontainer container 3c230daaae1405f6198e2dc0f59da110a927b93fa892bfde2f83d2e9bea393f2.
Jan 17 12:15:05.343419 systemd[1]: cri-containerd-3c230daaae1405f6198e2dc0f59da110a927b93fa892bfde2f83d2e9bea393f2.scope: Deactivated successfully.
Jan 17 12:15:05.532450 containerd[1482]: time="2025-01-17T12:15:05.530982004Z" level=info msg="StartContainer for \"3c230daaae1405f6198e2dc0f59da110a927b93fa892bfde2f83d2e9bea393f2\" returns successfully"
Jan 17 12:15:05.633341 systemd[1]: run-containerd-runc-k8s.io-3c230daaae1405f6198e2dc0f59da110a927b93fa892bfde2f83d2e9bea393f2-runc.TmfDj8.mount: Deactivated successfully.
Jan 17 12:15:05.633823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c230daaae1405f6198e2dc0f59da110a927b93fa892bfde2f83d2e9bea393f2-rootfs.mount: Deactivated successfully.
Jan 17 12:15:05.823156 containerd[1482]: time="2025-01-17T12:15:05.822818164Z" level=info msg="shim disconnected" id=3c230daaae1405f6198e2dc0f59da110a927b93fa892bfde2f83d2e9bea393f2 namespace=k8s.io
Jan 17 12:15:05.823156 containerd[1482]: time="2025-01-17T12:15:05.822940073Z" level=warning msg="cleaning up after shim disconnected" id=3c230daaae1405f6198e2dc0f59da110a927b93fa892bfde2f83d2e9bea393f2 namespace=k8s.io
Jan 17 12:15:05.823156 containerd[1482]: time="2025-01-17T12:15:05.822963868Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:15:06.121477 containerd[1482]: time="2025-01-17T12:15:06.121112859Z" level=info msg="CreateContainer within sandbox \"0739a20192696d1fff99c1f5f6e7b627c1fbc9928d4d30c115313eefc8b0c773\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 12:15:06.180437 containerd[1482]: time="2025-01-17T12:15:06.180086712Z" level=info msg="CreateContainer within sandbox \"0739a20192696d1fff99c1f5f6e7b627c1fbc9928d4d30c115313eefc8b0c773\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f50d98e9da93da2dcb8185fe142645d3359109f0baf0b2b1137c517e1a0a0f61\""
Jan 17 12:15:06.186288 containerd[1482]: time="2025-01-17T12:15:06.185046675Z" level=info msg="StartContainer for \"f50d98e9da93da2dcb8185fe142645d3359109f0baf0b2b1137c517e1a0a0f61\""
Jan 17 12:15:06.233396 systemd[1]: Started cri-containerd-f50d98e9da93da2dcb8185fe142645d3359109f0baf0b2b1137c517e1a0a0f61.scope - libcontainer container f50d98e9da93da2dcb8185fe142645d3359109f0baf0b2b1137c517e1a0a0f61.
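Each init container so far (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) repeats the same lifecycle: scope started, scope deactivated, shim disconnected, runc and rootfs mounts cleaned up; only then is the long-running cilium-agent container created. One way to observe that lifecycle programmatically is containerd's event stream; a minimal, unfiltered sketch follows, where each envelope carries a topic (such as /tasks/start or /tasks/exit) and the namespace it originated in.

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Namespace on the context matters for most client calls; the event
        // envelopes themselves report which namespace each event came from.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Subscribe with no filters and print every task lifecycle event.
        ch, errs := client.Subscribe(ctx)
        for {
            select {
            case env := <-ch:
                log.Printf("topic=%s namespace=%s", env.Topic, env.Namespace)
            case err := <-errs:
                log.Fatal(err)
            }
        }
    }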
Jan 17 12:15:06.265768 containerd[1482]: time="2025-01-17T12:15:06.265694068Z" level=info msg="StartContainer for \"f50d98e9da93da2dcb8185fe142645d3359109f0baf0b2b1137c517e1a0a0f61\" returns successfully"
Jan 17 12:15:06.648299 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:15:06.702294 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jan 17 12:15:07.604753 systemd[1]: run-containerd-runc-k8s.io-f50d98e9da93da2dcb8185fe142645d3359109f0baf0b2b1137c517e1a0a0f61-runc.guMKaN.mount: Deactivated successfully.
Jan 17 12:15:07.650940 kubelet[2695]: E0117 12:15:07.650851 2695 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46672->127.0.0.1:34109: write tcp 127.0.0.1:46672->127.0.0.1:34109: write: broken pipe
Jan 17 12:15:09.872064 systemd-networkd[1394]: lxc_health: Link UP
Jan 17 12:15:09.903617 kubelet[2695]: E0117 12:15:09.902584 2695 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46688->127.0.0.1:34109: write tcp 127.0.0.1:46688->127.0.0.1:34109: write: broken pipe
Jan 17 12:15:09.905185 systemd-networkd[1394]: lxc_health: Gained carrier
Jan 17 12:15:09.993310 kubelet[2695]: I0117 12:15:09.991855 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jddx8" podStartSLOduration=8.991837724 podStartE2EDuration="8.991837724s" podCreationTimestamp="2025-01-17 12:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:15:07.162922074 +0000 UTC m=+145.829527093" watchObservedRunningTime="2025-01-17 12:15:09.991837724 +0000 UTC m=+148.658442703"
Jan 17 12:15:11.043419 systemd-networkd[1394]: lxc_health: Gained IPv6LL
Jan 17 12:15:12.027885 systemd[1]: run-containerd-runc-k8s.io-f50d98e9da93da2dcb8185fe142645d3359109f0baf0b2b1137c517e1a0a0f61-runc.TxStyO.mount: Deactivated successfully.
Jan 17 12:15:14.300549 systemd[1]: run-containerd-runc-k8s.io-f50d98e9da93da2dcb8185fe142645d3359109f0baf0b2b1137c517e1a0a0f61-runc.3DTg9Y.mount: Deactivated successfully.
Jan 17 12:15:16.751514 sshd[4609]: pam_unix(sshd:session): session closed for user core
Jan 17 12:15:16.757375 systemd[1]: sshd@26-172.24.4.139:22-172.24.4.1:50846.service: Deactivated successfully.
Jan 17 12:15:16.761650 systemd[1]: session-29.scope: Deactivated successfully.
Jan 17 12:15:16.766666 systemd-logind[1464]: Session 29 logged out. Waiting for processes to exit.
Jan 17 12:15:16.769911 systemd-logind[1464]: Removed session 29.
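lxc_health is the veth interface Cilium creates for its endpoint health checks; systemd-networkd reporting Link UP, Gained carrier, and Gained IPv6LL marks the agent as functional, after which the kubelet records the cilium-jddx8 pod with a podStartSLOduration of roughly 8.99 s. A sketch of inspecting that link from Go using the vishvananda/netlink package (choosing this particular library is an assumption, not something the log shows):

    package main

    import (
        "fmt"
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // Look up the health-check veth that systemd-networkd reported above.
        link, err := netlink.LinkByName("lxc_health")
        if err != nil {
            log.Fatal(err)
        }

        // OperState reflects the same carrier transitions the log records
        // (Link UP -> Gained carrier).
        attrs := link.Attrs()
        fmt.Printf("name=%s state=%s mtu=%d\n", attrs.Name, attrs.OperState, attrs.MTU)
    }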