Jan 30 13:40:08.068494 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 30 13:40:08.068545 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:40:08.068565 kernel: BIOS-provided physical RAM map:
Jan 30 13:40:08.068580 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 13:40:08.068595 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 13:40:08.068613 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 13:40:08.068630 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 30 13:40:08.068645 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 30 13:40:08.068660 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 13:40:08.068674 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 13:40:08.068690 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 30 13:40:08.068705 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 13:40:08.068720 kernel: NX (Execute Disable) protection: active
Jan 30 13:40:08.068735 kernel: APIC: Static calls initialized
Jan 30 13:40:08.068756 kernel: SMBIOS 3.0.0 present.
Jan 30 13:40:08.068772 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 30 13:40:08.068787 kernel: Hypervisor detected: KVM
Jan 30 13:40:08.068803 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:40:08.068818 kernel: kvm-clock: using sched offset of 3483302039 cycles
Jan 30 13:40:08.068837 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:40:08.068854 kernel: tsc: Detected 1996.249 MHz processor
Jan 30 13:40:08.068870 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:40:08.068887 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:40:08.068904 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 30 13:40:08.071802 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 13:40:08.071813 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:40:08.071822 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 30 13:40:08.071831 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:40:08.071843 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 30 13:40:08.071852 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:40:08.071861 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:40:08.071869 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:40:08.071878 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 30 13:40:08.071886 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:40:08.071895 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
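The e820 map above can be sanity-checked mechanically. A minimal sketch (Python; assumes the journald rendering shown here, nothing else) that totals the "usable" ranges:

```python
import re

# Sum the "usable" ranges from the BIOS-e820 map printed above.
E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(dmesg: str) -> int:
    total = 0
    for start, end, kind in E820.findall(dmesg):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
    return total

# The three usable ranges above sum to 0x9fc00 + 0xbfedd000 + 0x40000000
# bytes, just under 4 GiB, consistent with the "Memory: .../4193772K
# available" line later in this log.
```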
Jan 30 13:40:08.071904 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 30 13:40:08.071929 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 30 13:40:08.071940 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 30 13:40:08.071949 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 30 13:40:08.071957 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 30 13:40:08.071970 kernel: No NUMA configuration found
Jan 30 13:40:08.071979 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 30 13:40:08.071987 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jan 30 13:40:08.071997 kernel: Zone ranges:
Jan 30 13:40:08.072008 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:40:08.072017 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 13:40:08.072026 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 30 13:40:08.072035 kernel: Movable zone start for each node
Jan 30 13:40:08.072044 kernel: Early memory node ranges
Jan 30 13:40:08.072053 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 13:40:08.072062 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 30 13:40:08.072071 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 30 13:40:08.072081 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 30 13:40:08.072090 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:40:08.072099 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 13:40:08.072108 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 30 13:40:08.072117 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:40:08.072126 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:40:08.072135 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:40:08.072144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:40:08.072153 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:40:08.072164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:40:08.072173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:40:08.072182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:40:08.072191 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:40:08.072200 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:40:08.072209 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:40:08.072218 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 30 13:40:08.072227 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:40:08.072236 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:40:08.072247 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:40:08.072256 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:40:08.072265 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:40:08.072274 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:40:08.072283 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 13:40:08.072293 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
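Note that rootflags=rw and mount.usrflags=ro appear twice in the command line above (dracut prepends its defaults before the bootloader-supplied arguments). A minimal parsing sketch, assuming simple space-separated parameters with no quoted values:

```python
def parse_cmdline(cmdline: str) -> list[tuple[str, str | None]]:
    # Kernel parameters are space-separated; bare flags have no '='.
    # Duplicates are kept in order, since later occurrences normally win.
    params = []
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params.append((key, value if sep else None))
    return params
```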
Jan 30 13:40:08.072303 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:40:08.072312 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:40:08.072322 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:40:08.072331 kernel: Fallback order for Node 0: 0
Jan 30 13:40:08.072340 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 30 13:40:08.072349 kernel: Policy zone: Normal
Jan 30 13:40:08.072358 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:40:08.072367 kernel: software IO TLB: area num 2.
Jan 30 13:40:08.072376 kernel: Memory: 3964156K/4193772K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 229356K reserved, 0K cma-reserved)
Jan 30 13:40:08.072385 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:40:08.072396 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 30 13:40:08.072405 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:40:08.072414 kernel: Dynamic Preempt: voluntary
Jan 30 13:40:08.072422 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:40:08.072432 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:40:08.072442 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:40:08.072451 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:40:08.072460 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:40:08.072469 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:40:08.072478 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:40:08.072489 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:40:08.072497 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 13:40:08.072506 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:40:08.072515 kernel: Console: colour VGA+ 80x25
Jan 30 13:40:08.072524 kernel: printk: console [tty0] enabled
Jan 30 13:40:08.072533 kernel: printk: console [ttyS0] enabled
Jan 30 13:40:08.072542 kernel: ACPI: Core revision 20230628
Jan 30 13:40:08.072551 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:40:08.072560 kernel: x2apic enabled
Jan 30 13:40:08.072571 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:40:08.072580 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:40:08.072589 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 13:40:08.072598 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
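The BogoMIPS figure follows directly from the lpj value in the same line. A quick consistency check using the kernel's own integer formatting (the only assumption is CONFIG_HZ=1000; lpj is preset from the 1996.249 MHz TSC because calibration was skipped):

```python
# Reproduce the "3992.49 BogoMIPS (lpj=1996249)" line.
lpj, HZ = 1996249, 1000
whole = lpj // (500000 // HZ)        # -> 3992
frac = (lpj // (5000 // HZ)) % 100   # -> 49
print(f"{whole}.{frac:02d} BogoMIPS (lpj={lpj})")
```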
Jan 30 13:40:08.072606 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 13:40:08.072615 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 13:40:08.072624 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:40:08.072633 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:40:08.072642 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:40:08.072653 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:40:08.072661 kernel: Speculative Store Bypass: Vulnerable
Jan 30 13:40:08.072670 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 30 13:40:08.072679 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:40:08.072694 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:40:08.072705 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:40:08.072714 kernel: landlock: Up and running.
Jan 30 13:40:08.072723 kernel: SELinux: Initializing.
Jan 30 13:40:08.072733 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:40:08.072742 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:40:08.072751 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 30 13:40:08.072761 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:40:08.072773 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:40:08.072782 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:40:08.072792 kernel: Performance Events: AMD PMU driver.
Jan 30 13:40:08.072801 kernel: ... version: 0
Jan 30 13:40:08.072812 kernel: ... bit width: 48
Jan 30 13:40:08.072821 kernel: ... generic registers: 4
Jan 30 13:40:08.072831 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:40:08.072840 kernel: ... max period: 00007fffffffffff
Jan 30 13:40:08.072849 kernel: ... fixed-purpose events: 0
Jan 30 13:40:08.072858 kernel: ... event mask: 000000000000000f
Jan 30 13:40:08.072867 kernel: signal: max sigframe size: 1440
Jan 30 13:40:08.072876 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:40:08.072886 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:40:08.072895 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:40:08.072921 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:40:08.072942 kernel: .... node #0, CPUs: #1
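The mitigation states logged above are also queryable after boot through the stable sysfs interface /sys/devices/system/cpu/vulnerabilities. A minimal sketch (the directory path is kernel ABI; the exact set of files varies by kernel version):

```python
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigations() -> dict[str, str]:
    # One file per known CPU vulnerability; contents are short status strings.
    return {f.name: f.read_text().strip() for f in sorted(VULN_DIR.iterdir())}

# On this guest, spec_store_bypass would be expected to read "Vulnerable",
# matching the "Speculative Store Bypass: Vulnerable" line in the log.
```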
Jan 30 13:40:08.072952 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:40:08.072961 kernel: smpboot: Max logical packages: 2
Jan 30 13:40:08.072971 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 30 13:40:08.072980 kernel: devtmpfs: initialized
Jan 30 13:40:08.072989 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:40:08.072999 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:40:08.073008 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:40:08.073020 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:40:08.073030 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:40:08.073039 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:40:08.073048 kernel: audit: type=2000 audit(1738244407.272:1): state=initialized audit_enabled=0 res=1
Jan 30 13:40:08.073058 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:40:08.073067 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:40:08.073076 kernel: cpuidle: using governor menu
Jan 30 13:40:08.073086 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:40:08.073095 kernel: dca service started, version 1.12.1
Jan 30 13:40:08.073106 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:40:08.073115 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:40:08.073125 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:40:08.073134 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:40:08.073143 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:40:08.073152 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:40:08.073162 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:40:08.073171 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:40:08.073180 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:40:08.073191 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:40:08.073200 kernel: ACPI: Interpreter enabled
Jan 30 13:40:08.073210 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 13:40:08.073219 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:40:08.073228 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:40:08.073238 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:40:08.073247 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 30 13:40:08.073256 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:40:08.073406 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:40:08.073509 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 13:40:08.073601 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 13:40:08.073616 kernel: acpiphp: Slot [3] registered
Jan 30 13:40:08.073625 kernel: acpiphp: Slot [4] registered
Jan 30 13:40:08.073635 kernel: acpiphp: Slot [5] registered
Jan 30 13:40:08.073644 kernel: acpiphp: Slot [6] registered
Jan 30 13:40:08.073653 kernel: acpiphp: Slot [7] registered
Jan 30 13:40:08.073666 kernel: acpiphp: Slot [8] registered
Jan 30 13:40:08.073675 kernel: acpiphp: Slot [9] registered
Jan 30 13:40:08.073684 kernel: acpiphp: Slot [10] registered
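The audit record above carries a Unix timestamp, which can be cross-checked against the rtc_cmos line further down ("setting system clock to 2025-01-30T13:40:07 UTC (1738244407)"):

```python
from datetime import datetime, timezone

# Decode the epoch from "audit(1738244407.272:1)".
ts = 1738244407.272
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
# -> 2025-01-30T13:40:07.272000+00:00, one second before the journald
#    timestamps, consistent with kernel time starting from the RTC.
```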
Jan 30 13:40:08.073694 kernel: acpiphp: Slot [11] registered
Jan 30 13:40:08.073703 kernel: acpiphp: Slot [12] registered
Jan 30 13:40:08.073712 kernel: acpiphp: Slot [13] registered
Jan 30 13:40:08.073721 kernel: acpiphp: Slot [14] registered
Jan 30 13:40:08.073730 kernel: acpiphp: Slot [15] registered
Jan 30 13:40:08.073739 kernel: acpiphp: Slot [16] registered
Jan 30 13:40:08.073748 kernel: acpiphp: Slot [17] registered
Jan 30 13:40:08.073759 kernel: acpiphp: Slot [18] registered
Jan 30 13:40:08.073768 kernel: acpiphp: Slot [19] registered
Jan 30 13:40:08.073778 kernel: acpiphp: Slot [20] registered
Jan 30 13:40:08.073787 kernel: acpiphp: Slot [21] registered
Jan 30 13:40:08.073796 kernel: acpiphp: Slot [22] registered
Jan 30 13:40:08.073805 kernel: acpiphp: Slot [23] registered
Jan 30 13:40:08.073814 kernel: acpiphp: Slot [24] registered
Jan 30 13:40:08.073823 kernel: acpiphp: Slot [25] registered
Jan 30 13:40:08.073832 kernel: acpiphp: Slot [26] registered
Jan 30 13:40:08.073843 kernel: acpiphp: Slot [27] registered
Jan 30 13:40:08.073853 kernel: acpiphp: Slot [28] registered
Jan 30 13:40:08.073862 kernel: acpiphp: Slot [29] registered
Jan 30 13:40:08.073871 kernel: acpiphp: Slot [30] registered
Jan 30 13:40:08.073880 kernel: acpiphp: Slot [31] registered
Jan 30 13:40:08.073889 kernel: PCI host bridge to bus 0000:00
Jan 30 13:40:08.077130 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:40:08.077252 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:40:08.077344 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:40:08.077426 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 13:40:08.077508 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 30 13:40:08.077592 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:40:08.077699 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 13:40:08.077801 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 13:40:08.077929 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 30 13:40:08.078035 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 30 13:40:08.078125 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 30 13:40:08.078213 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 30 13:40:08.078303 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 30 13:40:08.078394 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 30 13:40:08.078493 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 13:40:08.078591 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 30 13:40:08.078682 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 30 13:40:08.078780 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 30 13:40:08.078874 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 30 13:40:08.080102 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 30 13:40:08.080199 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 30 13:40:08.080292 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 30 13:40:08.080388 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:40:08.080488 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:40:08.080581 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
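The 1af4:xxxx devices enumerated here are virtio-pci functions (vendor 0x1af4 is the Red Hat/virtio ID). A small reference sketch mapping the device IDs that appear in this log to their conventional meanings; not an exhaustive table:

```python
# Device IDs seen on this guest; 0x1000-0x103f are "transitional" virtio
# devices, 0x1040+ are "modern" ones (0x1040 + device type).
VIRTIO_IDS = {
    0x1000: "virtio-net (transitional)",
    0x1001: "virtio-blk (transitional)",
    0x1002: "virtio-balloon (transitional)",
    0x1005: "virtio-rng (transitional)",
    0x1050: "virtio-gpu (modern, 0x1040 + type 16)",
}
for dev in (0x1050, 0x1000, 0x1001, 0x1002, 0x1005):
    print(f"1af4:{dev:04x} -> {VIRTIO_IDS[dev]}")
```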
Jan 30 13:40:08.080675 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 30 13:40:08.080768 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 30 13:40:08.080859 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 30 13:40:08.082072 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:40:08.082172 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 13:40:08.082261 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 30 13:40:08.082348 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 30 13:40:08.082442 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 30 13:40:08.082532 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 30 13:40:08.082620 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 30 13:40:08.082715 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:40:08.082810 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 30 13:40:08.082898 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 30 13:40:08.084555 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 30 13:40:08.084571 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:40:08.084582 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:40:08.084591 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:40:08.084601 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:40:08.084610 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 13:40:08.084624 kernel: iommu: Default domain type: Translated
Jan 30 13:40:08.084634 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:40:08.084643 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:40:08.084652 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:40:08.084662 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 13:40:08.084671 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 30 13:40:08.084760 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 30 13:40:08.084850 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 30 13:40:08.084974 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:40:08.084993 kernel: vgaarb: loaded
Jan 30 13:40:08.085003 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:40:08.085013 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:40:08.085023 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:40:08.085032 kernel: pnp: PnP ACPI init
Jan 30 13:40:08.085129 kernel: pnp 00:03: [dma 2]
Jan 30 13:40:08.085144 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 13:40:08.085154 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:40:08.085167 kernel: NET: Registered PF_INET protocol family
Jan 30 13:40:08.085176 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:40:08.085186 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:40:08.085195 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:40:08.085205 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:40:08.085214 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
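The "(order: N, M bytes)" pairs in these hash-table lines are internally consistent: order N means 2**N contiguous 4 KiB pages. A quick check against the figures in the log:

```python
PAGE = 4096
for name, order, reported in [
    ("TCP established", 6, 262144),
    ("TCP bind", 8, 1048576),
    ("UDP", 4, 65536),
]:
    assert (1 << order) * PAGE == reported, name
print("all table sizes match their allocation orders")
```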
Jan 30 13:40:08.085224 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:40:08.085233 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:40:08.085243 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:40:08.085255 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:40:08.085264 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:40:08.085344 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:40:08.085424 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:40:08.085504 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:40:08.085583 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 30 13:40:08.085662 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 30 13:40:08.085755 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 30 13:40:08.085851 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 13:40:08.085866 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:40:08.085876 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 13:40:08.085885 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 30 13:40:08.085895 kernel: Initialise system trusted keyrings
Jan 30 13:40:08.085904 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:40:08.086491 kernel: Key type asymmetric registered
Jan 30 13:40:08.086501 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:40:08.086514 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:40:08.086524 kernel: io scheduler mq-deadline registered
Jan 30 13:40:08.086533 kernel: io scheduler kyber registered
Jan 30 13:40:08.086542 kernel: io scheduler bfq registered
Jan 30 13:40:08.086552 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:40:08.086562 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 30 13:40:08.086571 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 13:40:08.086581 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 13:40:08.086590 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 13:40:08.086600 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:40:08.086612 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:40:08.086621 kernel: random: crng init done
Jan 30 13:40:08.086631 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:40:08.086640 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:40:08.086649 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:40:08.086747 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 13:40:08.086832 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 13:40:08.086847 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:40:08.088037 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:40:07 UTC (1738244407)
Jan 30 13:40:08.088126 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 30 13:40:08.088141 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 13:40:08.088151 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:40:08.088161 kernel: Segment Routing with IPv6
Jan 30 13:40:08.088170 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:40:08.088179 kernel: NET: Registered PF_PACKET protocol family
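The SWIOTLB line's "(64MB)" label also checks out against the mapped range it prints:

```python
# software IO TLB: mapped [mem 0xbbfdd000-0xbffdd000] (64MB)
start, end = 0xBBFDD000, 0xBFFDD000
print((end - start) // (1 << 20), "MiB")  # -> 64
```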
Jan 30 13:40:08.088189 kernel: Key type dns_resolver registered
Jan 30 13:40:08.088202 kernel: IPI shorthand broadcast: enabled
Jan 30 13:40:08.088212 kernel: sched_clock: Marking stable (1017007548, 166987582)->(1230896015, -46900885)
Jan 30 13:40:08.088222 kernel: registered taskstats version 1
Jan 30 13:40:08.088231 kernel: Loading compiled-in X.509 certificates
Jan 30 13:40:08.088241 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4'
Jan 30 13:40:08.088250 kernel: Key type .fscrypt registered
Jan 30 13:40:08.088259 kernel: Key type fscrypt-provisioning registered
Jan 30 13:40:08.088269 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:40:08.088278 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:40:08.088289 kernel: ima: No architecture policies found
Jan 30 13:40:08.088299 kernel: clk: Disabling unused clocks
Jan 30 13:40:08.088308 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 30 13:40:08.088318 kernel: Write protecting the kernel read-only data: 38912k
Jan 30 13:40:08.088327 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 30 13:40:08.088336 kernel: Run /init as init process
Jan 30 13:40:08.088346 kernel: with arguments:
Jan 30 13:40:08.088355 kernel: /init
Jan 30 13:40:08.088364 kernel: with environment:
Jan 30 13:40:08.088375 kernel: HOME=/
Jan 30 13:40:08.088384 kernel: TERM=linux
Jan 30 13:40:08.088394 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:40:08.088406 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:40:08.088418 systemd[1]: Detected virtualization kvm.
Jan 30 13:40:08.088429 systemd[1]: Detected architecture x86-64.
Jan 30 13:40:08.088439 systemd[1]: Running in initrd.
Jan 30 13:40:08.088451 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:40:08.088461 systemd[1]: Hostname set to .
Jan 30 13:40:08.088472 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:40:08.088482 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:40:08.088492 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:40:08.088502 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:40:08.088513 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:40:08.088533 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:40:08.088545 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:40:08.088556 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:40:08.088568 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:40:08.088579 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:40:08.088589 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
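"Initializing machine ID from VM UUID" above: on KVM guests, systemd derives the machine ID from the DMI product UUID that the hypervisor exposes. A rough sketch of the idea only; the real logic lives inside systemd and does additional validation:

```python
from pathlib import Path

def machine_id_from_dmi() -> str:
    # QEMU/KVM publishes the VM UUID here; systemd strips the dashes to
    # form the 32-hex-digit machine ID. Illustrative, not systemd's code.
    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    return uuid.replace("-", "").lower()
```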
Jan 30 13:40:08.088602 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:40:08.088612 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:40:08.088623 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:40:08.088633 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:40:08.088643 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:40:08.088654 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:40:08.088664 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:40:08.088674 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:40:08.088684 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:40:08.088697 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:40:08.088707 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:40:08.088718 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:40:08.088728 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:40:08.088738 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:40:08.088749 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:40:08.088759 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:40:08.088769 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:40:08.088782 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:40:08.088792 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:40:08.088802 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:40:08.088813 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:40:08.088823 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:40:08.088834 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:40:08.088847 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:40:08.088875 systemd-journald[185]: Collecting audit messages is disabled.
Jan 30 13:40:08.088919 systemd-journald[185]: Journal started
Jan 30 13:40:08.088952 systemd-journald[185]: Runtime Journal (/run/log/journal/950301d5a510414baa288f53eb565f32) is 8.0M, max 78.3M, 70.3M free.
Jan 30 13:40:08.086955 systemd-modules-load[186]: Inserted module 'overlay'
Jan 30 13:40:08.108531 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:40:08.107250 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:40:08.118634 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:40:08.122568 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jan 30 13:40:08.123130 kernel: Bridge firewalling registered
Jan 30 13:40:08.128082 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:40:08.130034 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:40:08.131567 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:40:08.134833 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:40:08.154134 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:40:08.156244 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:40:08.157084 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:40:08.157783 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:40:08.166550 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:40:08.170100 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:40:08.174160 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:40:08.184784 dracut-cmdline[216]: dracut-dracut-053
Jan 30 13:40:08.176475 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:40:08.187324 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:40:08.221531 systemd-resolved[222]: Positive Trust Anchors:
Jan 30 13:40:08.221550 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:40:08.221593 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:40:08.224772 systemd-resolved[222]: Defaulting to hostname 'linux'.
Jan 30 13:40:08.225776 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:40:08.228180 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:40:08.279965 kernel: SCSI subsystem initialized
Jan 30 13:40:08.290966 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:40:08.303119 kernel: iscsi: registered transport (tcp)
Jan 30 13:40:08.325073 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:40:08.325145 kernel: QLogic iSCSI HBA Driver
Jan 30 13:40:08.386623 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:40:08.395161 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:40:08.431499 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:40:08.431541 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:40:08.432212 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:40:08.506034 kernel: raid6: sse2x4 gen() 6656 MB/s
Jan 30 13:40:08.524018 kernel: raid6: sse2x2 gen() 15128 MB/s
Jan 30 13:40:08.542410 kernel: raid6: sse2x1 gen() 10158 MB/s
Jan 30 13:40:08.542474 kernel: raid6: using algorithm sse2x2 gen() 15128 MB/s
Jan 30 13:40:08.561399 kernel: raid6: .... xor() 9426 MB/s, rmw enabled
Jan 30 13:40:08.561463 kernel: raid6: using ssse3x2 recovery algorithm
Jan 30 13:40:08.584457 kernel: xor: measuring software checksum speed
Jan 30 13:40:08.584524 kernel: prefetch64-sse : 18510 MB/sec
Jan 30 13:40:08.584986 kernel: generic_sse : 16838 MB/sec
Jan 30 13:40:08.586160 kernel: xor: using function: prefetch64-sse (18510 MB/sec)
Jan 30 13:40:08.758011 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:40:08.776270 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:40:08.787183 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:40:08.800790 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Jan 30 13:40:08.805195 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:40:08.816213 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:40:08.831761 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
Jan 30 13:40:08.881611 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:40:08.890188 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:40:08.964889 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:40:08.975888 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:40:09.024538 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:40:09.028249 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:40:09.031140 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:40:09.033305 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:40:09.040051 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:40:09.057256 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 30 13:40:09.082649 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 30 13:40:09.082771 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:40:09.082794 kernel: GPT:17805311 != 20971519
Jan 30 13:40:09.082807 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:40:09.082819 kernel: GPT:17805311 != 20971519
Jan 30 13:40:09.082830 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:40:09.082841 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:40:09.060170 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:40:09.076519 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:40:09.076641 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:40:09.090058 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
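The GPT warning above is the usual sign of a grown disk image: the backup GPT header still sits where the original, smaller image ended instead of at the last LBA. The sector math, using the two LBAs from the log:

```python
SECTOR = 512
backup_lba, last_lba = 17805311, 20971519
print(f"image size when built: {(backup_lba + 1) * SECTOR / 2**30:.2f} GiB")
print(f"disk size now:         {(last_lba + 1) * SECTOR / 2**30:.2f} GiB")
# -> about 8.49 GiB vs 10.00 GiB. disk-uuid.service rewrites the GPT a
#    moment later in this log ("Secondary Header is updated."), which is
#    presumably why the warning does not recur after 13:40:10.
```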
Jan 30 13:40:09.090544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:40:09.090679 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:40:09.091204 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:40:09.104955 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:40:09.114607 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (451)
Jan 30 13:40:09.114634 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (460)
Jan 30 13:40:09.139369 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:40:09.184694 kernel: libata version 3.00 loaded.
Jan 30 13:40:09.184721 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 30 13:40:09.184874 kernel: scsi host0: ata_piix
Jan 30 13:40:09.185025 kernel: scsi host1: ata_piix
Jan 30 13:40:09.185136 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 30 13:40:09.185150 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 30 13:40:09.186182 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:40:09.192597 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:40:09.199211 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:40:09.207965 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:40:09.208554 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:40:09.220079 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:40:09.223317 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:40:09.235271 disk-uuid[504]: Primary Header is updated.
Jan 30 13:40:09.235271 disk-uuid[504]: Secondary Entries is updated.
Jan 30 13:40:09.235271 disk-uuid[504]: Secondary Header is updated.
Jan 30 13:40:09.249214 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:40:09.258680 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:40:10.269036 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:40:10.269662 disk-uuid[505]: The operation has completed successfully.
Jan 30 13:40:10.347784 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:40:10.348020 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:40:10.378036 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:40:10.386859 sh[525]: Success
Jan 30 13:40:10.419046 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 30 13:40:10.498299 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:40:10.518395 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:40:10.520641 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
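verity-setup above binds /dev/mapper/usr to the verity.usrhash value from the kernel command line: dm-verity verifies the read-only /usr partition against a Merkle tree of SHA-256 hashes over fixed-size blocks, with one root hash. The following is an illustrative sketch of that integrity model only; the real on-disk format packs many child hashes per block and adds a superblock and salt (use veritysetup for actual volumes):

```python
import hashlib

BLOCK = 4096  # dm-verity's data block size on this image

def verity_root(data: bytes) -> str:
    # Hash each data block, then hash pairs of nodes until one root remains.
    hashes = [hashlib.sha256(data[i:i + BLOCK]).digest()
              for i in range(0, len(data), BLOCK)]
    while len(hashes) > 1:
        hashes = [hashlib.sha256(a + b).digest()
                  for a, b in zip(hashes[::2], hashes[1::2] + [b""])]
    return hashes[0].hex()
```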
Jan 30 13:40:10.566603 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58
Jan 30 13:40:10.566637 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:40:10.576468 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:40:10.576496 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:40:10.580219 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:40:10.600703 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:40:10.602863 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:40:10.608189 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:40:10.611471 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:40:10.629885 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:40:10.629939 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:40:10.629955 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:40:10.635940 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:40:10.647937 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:40:10.647992 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:40:10.663757 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:40:10.672186 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:40:10.748742 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:40:10.759344 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:40:10.780312 systemd-networkd[708]: lo: Link UP
Jan 30 13:40:10.780320 systemd-networkd[708]: lo: Gained carrier
Jan 30 13:40:10.781465 systemd-networkd[708]: Enumeration completed
Jan 30 13:40:10.781545 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:40:10.783062 systemd[1]: Reached target network.target - Network.
Jan 30 13:40:10.783157 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:40:10.783161 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:40:10.785125 systemd-networkd[708]: eth0: Link UP
Jan 30 13:40:10.785129 systemd-networkd[708]: eth0: Gained carrier
Jan 30 13:40:10.785138 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:40:10.798963 systemd-networkd[708]: eth0: DHCPv4 address 172.24.4.90/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 30 13:40:10.837625 ignition[614]: Ignition 2.20.0
Jan 30 13:40:10.837639 ignition[614]: Stage: fetch-offline
Jan 30 13:40:10.837681 ignition[614]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:40:10.839278 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:40:10.837692 ignition[614]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 13:40:10.837788 ignition[614]: parsed url from cmdline: ""
Jan 30 13:40:10.837792 ignition[614]: no config URL provided
Jan 30 13:40:10.837798 ignition[614]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:40:10.837808 ignition[614]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:40:10.837815 ignition[614]: failed to fetch config: resource requires networking
Jan 30 13:40:10.838043 ignition[614]: Ignition finished successfully
Jan 30 13:40:10.846103 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:40:10.859202 ignition[719]: Ignition 2.20.0
Jan 30 13:40:10.859215 ignition[719]: Stage: fetch
Jan 30 13:40:10.859427 ignition[719]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:40:10.859439 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 13:40:10.859547 ignition[719]: parsed url from cmdline: ""
Jan 30 13:40:10.859551 ignition[719]: no config URL provided
Jan 30 13:40:10.859556 ignition[719]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:40:10.859566 ignition[719]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:40:10.859667 ignition[719]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 30 13:40:10.859720 ignition[719]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 30 13:40:10.859760 ignition[719]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 30 13:40:11.058312 ignition[719]: GET result: OK
Jan 30 13:40:11.059383 ignition[719]: parsing config with SHA512: 5373711a4c6bd31f592ef90920b99e1f4859ed1cb81edfad5e00a23d655b0d3c8d9ff2b21fa93538dbe160f6a5e360be669a415d2696916a976aeec66c65ca81
Jan 30 13:40:11.070077 unknown[719]: fetched base config from "system"
Jan 30 13:40:11.070104 unknown[719]: fetched base config from "system"
Jan 30 13:40:11.070118 unknown[719]: fetched user config from "openstack"
Jan 30 13:40:11.072045 ignition[719]: fetch: fetch complete
Jan 30 13:40:11.072058 ignition[719]: fetch: fetch passed
Jan 30 13:40:11.075760 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:40:11.072158 ignition[719]: Ignition finished successfully
Jan 30 13:40:11.097373 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:40:11.127492 ignition[726]: Ignition 2.20.0
Jan 30 13:40:11.127520 ignition[726]: Stage: kargs
Jan 30 13:40:11.127970 ignition[726]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:40:11.127999 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 13:40:11.132657 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:40:11.130313 ignition[726]: kargs: kargs passed
Jan 30 13:40:11.130420 ignition[726]: Ignition finished successfully
Jan 30 13:40:11.144286 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:40:11.179147 ignition[732]: Ignition 2.20.0
Jan 30 13:40:11.180818 ignition[732]: Stage: disks
Jan 30 13:40:11.181297 ignition[732]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:40:11.181323 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 13:40:11.187517 ignition[732]: disks: disks passed
Jan 30 13:40:11.188537 ignition[732]: Ignition finished successfully
Jan 30 13:40:11.191185 systemd[1]: Finished ignition-disks.service - Ignition (disks).
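The fetch stage above falls back from the config drive to the OpenStack metadata service and logs a SHA512 of the config it parses. The essence of that step, reduced to a sketch (the URL is taken verbatim from the log; error handling and Ignition's retry logic are omitted):

```python
import hashlib
import urllib.request

URL = "http://169.254.169.254/openstack/latest/user_data"

def fetch_user_data() -> tuple[bytes, str]:
    # Pull the user data and fingerprint it, as the "parsing config with
    # SHA512: ..." line above does for the fetched Ignition config.
    with urllib.request.urlopen(URL, timeout=10) as resp:
        data = resp.read()
    return data, hashlib.sha512(data).hexdigest()
```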
Jan 30 13:40:11.192750 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:40:11.193319 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:40:11.193939 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:40:11.196190 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:40:11.198406 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:40:11.206101 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:40:11.232072 systemd-fsck[740]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 30 13:40:11.245083 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:40:11.255166 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:40:11.429574 kernel: EXT4-fs (vda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none.
Jan 30 13:40:11.429929 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:40:11.430985 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:40:11.440204 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:40:11.444848 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:40:11.446463 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:40:11.449065 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 30 13:40:11.450600 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:40:11.450633 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:40:11.460975 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (748)
Jan 30 13:40:11.471968 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:40:11.474371 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:40:11.487116 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:40:11.487139 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:40:11.491063 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:40:11.502685 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:40:11.506281 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:40:11.621946 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:40:11.630142 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:40:11.636797 initrd-setup-root[790]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:40:11.644788 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:40:11.744741 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:40:11.751010 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:40:11.753489 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:40:11.760816 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:40:11.763942 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:40:11.796151 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:40:11.798540 ignition[865]: INFO : Ignition 2.20.0
Jan 30 13:40:11.800031 ignition[865]: INFO : Stage: mount
Jan 30 13:40:11.800031 ignition[865]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:40:11.800031 ignition[865]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 13:40:11.803261 ignition[865]: INFO : mount: mount passed
Jan 30 13:40:11.803261 ignition[865]: INFO : Ignition finished successfully
Jan 30 13:40:11.801817 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:40:12.109384 systemd-networkd[708]: eth0: Gained IPv6LL
Jan 30 13:40:18.722498 coreos-metadata[750]: Jan 30 13:40:18.720 WARN failed to locate config-drive, using the metadata service API instead
Jan 30 13:40:18.761983 coreos-metadata[750]: Jan 30 13:40:18.761 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 30 13:40:18.777609 coreos-metadata[750]: Jan 30 13:40:18.777 INFO Fetch successful
Jan 30 13:40:18.779036 coreos-metadata[750]: Jan 30 13:40:18.777 INFO wrote hostname ci-4186-1-0-f-d1cd2b53be.novalocal to /sysroot/etc/hostname
Jan 30 13:40:18.781476 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 30 13:40:18.781838 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 30 13:40:18.794069 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:40:18.826222 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:40:18.858034 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (882)
Jan 30 13:40:18.866530 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:40:18.866598 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:40:18.871006 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:40:18.876987 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:40:18.882082 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:40:18.917357 ignition[900]: INFO : Ignition 2.20.0
Jan 30 13:40:18.918770 ignition[900]: INFO : Stage: files
Jan 30 13:40:18.918770 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:40:18.918770 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 13:40:18.922191 ignition[900]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:40:18.923487 ignition[900]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:40:18.923487 ignition[900]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:40:18.929381 ignition[900]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:40:18.930752 ignition[900]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:40:18.932152 ignition[900]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:40:18.931119 unknown[900]: wrote ssh authorized keys file for user: core
Jan 30 13:40:18.935778 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:40:18.937701 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 13:40:18.996979 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:40:19.281950 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:40:19.281950 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:40:19.285382 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 30 13:40:19.852056 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 13:40:20.279972 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 30 13:40:20.781837 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 13:40:22.328717 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 13:40:22.328717 ignition[900]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 30 13:40:22.334559 ignition[900]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:40:22.334559 ignition[900]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:40:22.334559 ignition[900]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 30 13:40:22.334559 ignition[900]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:40:22.334559 ignition[900]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:40:22.334559 ignition[900]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:40:22.334559 ignition[900]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:40:22.334559 ignition[900]: INFO : files: files passed
Jan 30 13:40:22.334559 ignition[900]: INFO : Ignition finished successfully
Jan 30 13:40:22.334032 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:40:22.354441 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:40:22.364165 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:40:22.371252 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:40:22.371350 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
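The files-stage operations above (ssh keys for the "core" user, remote files such as the helm and cilium tarballs, the kubernetes.raw sysext link, and the prepare-helm.service unit with an enabled preset) are all driven by the Ignition config fetched earlier in boot. A trimmed, illustrative sketch of a config that would produce similar op(...) entries, assuming Ignition spec 3.4.0 and with placeholder key and unit contents:

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
          }
        ]
      },
      "systemd": {
        "units": [
          {
            "name": "prepare-helm.service",
            "enabled": true,
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n[Service]\n..."
          }
        ]
      }
    }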
Jan 30 13:40:22.382455 initrd-setup-root-after-ignition[928]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:40:22.382455 initrd-setup-root-after-ignition[928]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:40:22.386445 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:40:22.386645 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:40:22.389059 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:40:22.398096 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:40:22.421791 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:40:22.422538 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:40:22.424206 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:40:22.424740 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:40:22.426005 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:40:22.428482 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:40:22.451949 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:40:22.460057 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:40:22.470601 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:40:22.471348 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:40:22.472621 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:40:22.473843 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:40:22.473993 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:40:22.475191 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:40:22.475882 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:40:22.477056 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:40:22.478091 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:40:22.479114 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:40:22.480257 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:40:22.481397 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:40:22.482612 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:40:22.483830 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:40:22.485087 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:40:22.486601 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:40:22.486718 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:40:22.487956 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:40:22.488709 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:40:22.489692 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:40:22.489790 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:40:22.490813 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:40:22.490944 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:40:22.492479 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:40:22.492601 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:40:22.493263 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:40:22.493371 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:40:22.500472 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:40:22.501602 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:40:22.501799 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:40:22.507130 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:40:22.508242 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:40:22.508389 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:40:22.509048 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:40:22.509170 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:40:22.515147 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:40:22.515245 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:40:22.523229 ignition[952]: INFO : Ignition 2.20.0
Jan 30 13:40:22.523229 ignition[952]: INFO : Stage: umount
Jan 30 13:40:22.523229 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:40:22.523229 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 13:40:22.526859 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:40:22.529801 ignition[952]: INFO : umount: umount passed
Jan 30 13:40:22.529801 ignition[952]: INFO : Ignition finished successfully
Jan 30 13:40:22.527005 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:40:22.528276 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:40:22.528354 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:40:22.529152 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:40:22.529192 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:40:22.531253 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 13:40:22.531294 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 13:40:22.533517 systemd[1]: Stopped target network.target - Network.
Jan 30 13:40:22.534561 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:40:22.534620 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:40:22.535693 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:40:22.536681 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:40:22.540085 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:40:22.541000 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:40:22.541578 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:40:22.544016 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:40:22.544065 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:40:22.545283 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:40:22.545318 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:40:22.545810 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:40:22.545859 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:40:22.546891 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:40:22.546954 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:40:22.548622 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:40:22.549821 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:40:22.552995 systemd-networkd[708]: eth0: DHCPv6 lease lost
Jan 30 13:40:22.554470 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:40:22.554763 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:40:22.556177 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:40:22.556246 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:40:22.563313 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:40:22.563828 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:40:22.563884 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:40:22.564543 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:40:22.565378 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:40:22.565480 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:40:22.570632 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:40:22.570709 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:40:22.575775 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:40:22.575838 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:40:22.576420 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:40:22.576462 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:40:22.578839 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:40:22.579166 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:40:22.582463 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:40:22.582621 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:40:22.587579 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:40:22.587640 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:40:22.588228 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:40:22.588263 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:40:22.591044 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:40:22.591088 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:40:22.591659 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:40:22.591702 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:40:22.593430 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:40:22.593472 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:40:22.603071 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:40:22.606347 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:40:22.606404 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:40:22.606972 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:40:22.607012 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:40:22.610817 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:40:22.610942 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:40:22.619038 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:40:22.621320 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:40:22.621426 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:40:22.623063 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:40:22.623647 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:40:22.623696 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:40:22.635110 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:40:22.643831 systemd[1]: Switching root.
Jan 30 13:40:22.684509 systemd-journald[185]: Journal stopped
Jan 30 13:40:24.532180 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:40:24.532251 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:40:24.532271 kernel: SELinux: policy capability open_perms=1
Jan 30 13:40:24.532287 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:40:24.532299 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:40:24.532311 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:40:24.532323 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:40:24.532335 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:40:24.532346 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:40:24.532358 kernel: audit: type=1403 audit(1738244423.267:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:40:24.532371 systemd[1]: Successfully loaded SELinux policy in 86.048ms.
Jan 30 13:40:24.532405 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.919ms.
Jan 30 13:40:24.532421 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:40:24.532435 systemd[1]: Detected virtualization kvm.
Jan 30 13:40:24.532448 systemd[1]: Detected architecture x86-64.
Jan 30 13:40:24.532466 systemd[1]: Detected first boot.
Jan 30 13:40:24.532479 systemd[1]: Hostname set to <ci-4186-1-0-f-d1cd2b53be.novalocal>.
Jan 30 13:40:24.532492 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:40:24.532504 zram_generator::config[994]: No configuration found.
Jan 30 13:40:24.532519 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:40:24.532534 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:40:24.532547 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:40:24.532560 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:40:24.532574 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:40:24.532587 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:40:24.532600 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:40:24.532612 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:40:24.532625 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:40:24.532641 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:40:24.532654 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:40:24.532666 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:40:24.532678 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:40:24.532691 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:40:24.532704 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:40:24.532717 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:40:24.532730 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:40:24.532743 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:40:24.532758 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 13:40:24.532770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:40:24.532783 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:40:24.532796 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:40:24.532812 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:40:24.532825 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:40:24.532840 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:40:24.532853 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:40:24.532866 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:40:24.532893 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:40:24.534020 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:40:24.534039 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:40:24.534053 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:40:24.534066 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:40:24.534078 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:40:24.534091 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:40:24.534108 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:40:24.534121 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:40:24.534133 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:40:24.534146 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:40:24.534159 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:40:24.534172 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:40:24.534184 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:40:24.534198 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:40:24.534212 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:40:24.534225 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:40:24.534238 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:40:24.534250 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:40:24.534264 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:40:24.534277 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:40:24.534289 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:40:24.534301 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:40:24.534316 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:40:24.534328 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:40:24.534341 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:40:24.534354 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:40:24.534366 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:40:24.534378 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:40:24.534391 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:40:24.534403 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:40:24.534416 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:40:24.534431 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:40:24.534443 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:40:24.534456 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:40:24.534470 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:40:24.534482 systemd[1]: Stopped verity-setup.service.
Jan 30 13:40:24.534495 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:40:24.534507 kernel: ACPI: bus type drm_connector registered
Jan 30 13:40:24.534520 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:40:24.534532 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:40:24.534546 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:40:24.534559 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:40:24.534572 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:40:24.534584 kernel: fuse: init (API version 7.39)
Jan 30 13:40:24.534597 kernel: loop: module loaded
Jan 30 13:40:24.534609 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:40:24.534621 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:40:24.534634 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:40:24.534646 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:40:24.534658 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:40:24.534671 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:40:24.534683 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:40:24.534696 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:40:24.534711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:40:24.534723 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:40:24.534752 systemd-journald[1087]: Collecting audit messages is disabled.
Jan 30 13:40:24.534777 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:40:24.534793 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:40:24.534807 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:40:24.534819 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:40:24.534832 systemd-journald[1087]: Journal started
Jan 30 13:40:24.534861 systemd-journald[1087]: Runtime Journal (/run/log/journal/950301d5a510414baa288f53eb565f32) is 8.0M, max 78.3M, 70.3M free.
Jan 30 13:40:24.130983 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:40:24.157062 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 13:40:24.157464 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:40:24.538941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:40:24.538970 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:40:24.540637 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:40:24.541430 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:40:24.542232 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:40:24.553725 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:40:24.562653 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:40:24.570055 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:40:24.570633 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:40:24.570674 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:40:24.574486 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:40:24.581112 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:40:24.582900 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:40:24.583612 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:40:24.585728 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:40:24.590514 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:40:24.591224 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:40:24.594584 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:40:24.595659 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:40:24.598699 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:40:24.606812 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:40:24.608600 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:40:24.612047 systemd-journald[1087]: Time spent on flushing to /var/log/journal/950301d5a510414baa288f53eb565f32 is 54.830ms for 944 entries.
Jan 30 13:40:24.612047 systemd-journald[1087]: System Journal (/var/log/journal/950301d5a510414baa288f53eb565f32) is 8.0M, max 584.8M, 576.8M free.
Jan 30 13:40:24.703487 systemd-journald[1087]: Received client request to flush runtime journal.
Jan 30 13:40:24.703540 kernel: loop0: detected capacity change from 0 to 141000
Jan 30 13:40:24.612573 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:40:24.616172 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:40:24.616746 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:40:24.617649 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:40:24.629112 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:40:24.641262 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:40:24.642175 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:40:24.648165 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:40:24.667598 udevadm[1133]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 13:40:24.693403 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:40:24.709614 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:40:24.801556 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:40:24.804947 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:40:24.850681 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:40:24.865264 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:40:24.882940 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:40:24.908979 kernel: loop1: detected capacity change from 0 to 138184
Jan 30 13:40:24.911652 systemd-tmpfiles[1144]: ACLs are not supported, ignoring.
Jan 30 13:40:24.911672 systemd-tmpfiles[1144]: ACLs are not supported, ignoring.
Jan 30 13:40:24.919034 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:40:24.985940 kernel: loop2: detected capacity change from 0 to 8
Jan 30 13:40:25.006216 kernel: loop3: detected capacity change from 0 to 205544
Jan 30 13:40:25.095820 kernel: loop4: detected capacity change from 0 to 141000
Jan 30 13:40:25.137126 kernel: loop5: detected capacity change from 0 to 138184
Jan 30 13:40:25.218966 kernel: loop6: detected capacity change from 0 to 8
Jan 30 13:40:25.224006 kernel: loop7: detected capacity change from 0 to 205544
Jan 30 13:40:25.294984 (sd-merge)[1152]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 30 13:40:25.295854 (sd-merge)[1152]: Merged extensions into '/usr'.
Jan 30 13:40:25.311456 systemd[1]: Reloading requested from client PID 1127 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:40:25.311645 systemd[1]: Reloading...
Jan 30 13:40:25.413969 zram_generator::config[1174]: No configuration found.
Jan 30 13:40:25.633051 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:40:25.697181 systemd[1]: Reloading finished in 385 ms.
Jan 30 13:40:25.718975 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:40:25.720012 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:40:25.730245 systemd[1]: Starting ensure-sysext.service...
Jan 30 13:40:25.734817 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:40:25.742169 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:40:25.753606 systemd[1]: Reloading requested from client PID 1234 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:40:25.753624 systemd[1]: Reloading...
Jan 30 13:40:25.777895 systemd-udevd[1236]: Using default interface naming scheme 'v255'.
Jan 30 13:40:25.788515 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:40:25.788820 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:40:25.789770 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:40:25.792317 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Jan 30 13:40:25.792396 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Jan 30 13:40:25.800895 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:40:25.800922 systemd-tmpfiles[1235]: Skipping /boot
Jan 30 13:40:25.827693 ldconfig[1122]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:40:25.828563 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:40:25.828582 systemd-tmpfiles[1235]: Skipping /boot
Jan 30 13:40:25.871999 zram_generator::config[1268]: No configuration found.
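The (sd-merge) entries above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-openstack extension images onto /usr, followed by a daemon reload. A sketch of activating such an extension by hand (the image name is illustrative; on this host Ignition already placed the symlink under /etc/extensions):

    # Make the extension image visible to systemd-sysext, then merge it
    # into /usr as a read-only overlay and confirm the overlay is active.
    cp kubernetes-v1.31.0-x86-64.raw /etc/extensions/kubernetes.raw
    systemd-sysext merge
    systemd-sysext status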
Jan 30 13:40:26.017006 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1265)
Jan 30 13:40:26.061087 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 30 13:40:26.070936 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 30 13:40:26.087977 kernel: ACPI: button: Power Button [PWRF]
Jan 30 13:40:26.118035 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 30 13:40:26.118117 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:40:26.147321 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 13:40:26.169939 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 30 13:40:26.171934 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 30 13:40:26.177484 kernel: Console: switching to colour dummy device 80x25
Jan 30 13:40:26.177526 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 30 13:40:26.177547 kernel: [drm] features: -context_init
Jan 30 13:40:26.179402 kernel: [drm] number of scanouts: 1
Jan 30 13:40:26.179440 kernel: [drm] number of cap sets: 0
Jan 30 13:40:26.183944 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 30 13:40:26.187937 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 30 13:40:26.188020 kernel: Console: switching to colour frame buffer device 160x50
Jan 30 13:40:26.196447 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:40:26.198077 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 30 13:40:26.202208 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 30 13:40:26.202450 systemd[1]: Reloading finished in 448 ms.
Jan 30 13:40:26.217695 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:40:26.220046 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:40:26.226320 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:40:26.251173 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:40:26.264285 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:40:26.267338 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:40:26.272045 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:40:26.274641 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:40:26.274998 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:40:26.277067 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:40:26.281080 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:40:26.283160 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:40:26.287083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:40:26.290064 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:40:26.291064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:40:26.293098 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:40:26.296264 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:40:26.310364 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:40:26.313980 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:40:26.326543 lvm[1357]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:40:26.327291 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 13:40:26.333281 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:40:26.340134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:40:26.340273 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:40:26.350787 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:40:26.363473 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:40:26.364668 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:40:26.379155 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:40:26.407603 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:40:26.408531 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:40:26.410044 lvm[1378]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:40:26.414203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:40:26.414371 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:40:26.415376 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:40:26.415503 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:40:26.419021 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:40:26.427512 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:40:26.427703 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:40:26.440631 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 13:40:26.445750 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:40:26.445992 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:40:26.450285 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:40:26.454346 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:40:26.497498 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:40:26.510122 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:40:26.516121 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:40:26.520366 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:40:26.528062 augenrules[1410]: No rules
Jan 30 13:40:26.530411 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:40:26.530621 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:40:26.544675 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:40:26.571104 systemd-networkd[1367]: lo: Link UP
Jan 30 13:40:26.571114 systemd-networkd[1367]: lo: Gained carrier
Jan 30 13:40:26.572574 systemd-networkd[1367]: Enumeration completed
Jan 30 13:40:26.572667 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:40:26.575982 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:40:26.575992 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:40:26.577009 systemd-networkd[1367]: eth0: Link UP
Jan 30 13:40:26.577018 systemd-networkd[1367]: eth0: Gained carrier
Jan 30 13:40:26.577034 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:40:26.580098 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:40:26.591048 systemd-networkd[1367]: eth0: DHCPv4 address 172.24.4.90/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 30 13:40:26.593533 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 13:40:26.593784 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection.
Jan 30 13:40:26.594301 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:40:26.607342 systemd-resolved[1368]: Positive Trust Anchors:
Jan 30 13:40:26.607359 systemd-resolved[1368]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:40:26.607405 systemd-resolved[1368]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:40:26.614041 systemd-resolved[1368]: Using system hostname 'ci-4186-1-0-f-d1cd2b53be.novalocal'.
Jan 30 13:40:26.616004 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:40:26.621948 systemd[1]: Reached target network.target - Network.
Jan 30 13:40:26.622422 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:40:26.628248 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:40:26.632092 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:40:26.632751 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
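eth0 was configured from the shipped /usr/lib/systemd/network/zz-default.network and then acquired 172.24.4.90/24 over DHCPv4. A minimal .network unit with the same match-anything-and-DHCP effect, sketched in simplified form (the real zz-default.network carries additional options):

    [Match]
    Name=*

    [Network]
    DHCP=yes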
Jan 30 13:40:26.633308 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:40:26.633955 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:40:26.634493 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:40:26.635405 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:40:26.637647 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:40:26.637745 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:40:27.352910 systemd-resolved[1368]: Clock change detected. Flushing caches.
Jan 30 13:40:27.353058 systemd-timesyncd[1369]: Contacted time server 217.182.137.208:123 (0.flatcar.pool.ntp.org).
Jan 30 13:40:27.353133 systemd-timesyncd[1369]: Initial clock synchronization to Thu 2025-01-30 13:40:27.352837 UTC.
Jan 30 13:40:27.353387 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:40:27.357583 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:40:27.363773 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:40:27.371541 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:40:27.377146 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:40:27.379243 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:40:27.381291 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:40:27.383412 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:40:27.383512 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:40:27.391057 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 13:40:27.397211 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 13:40:27.406314 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:40:27.418085 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 13:40:27.421889 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 13:40:27.425658 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 13:40:27.429432 jq[1430]: false
Jan 30 13:40:27.432114 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 13:40:27.440068 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 13:40:27.446012 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 13:40:27.454147 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 13:40:27.463144 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 13:40:27.465621 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 13:40:27.467239 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 13:40:27.472767 extend-filesystems[1431]: Found loop4
Jan 30 13:40:27.477865 extend-filesystems[1431]: Found loop5
Jan 30 13:40:27.477865 extend-filesystems[1431]: Found loop6
Jan 30 13:40:27.477865 extend-filesystems[1431]: Found loop7
Jan 30 13:40:27.477865 extend-filesystems[1431]: Found vda
Jan 30 13:40:27.477865 extend-filesystems[1431]: Found vda1
Jan 30 13:40:27.473319 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 13:40:27.512404 extend-filesystems[1431]: Found vda2
Jan 30 13:40:27.512404 extend-filesystems[1431]: Found vda3
Jan 30 13:40:27.512404 extend-filesystems[1431]: Found usr
Jan 30 13:40:27.512404 extend-filesystems[1431]: Found vda4
Jan 30 13:40:27.512404 extend-filesystems[1431]: Found vda6
Jan 30 13:40:27.512404 extend-filesystems[1431]: Found vda7
Jan 30 13:40:27.512404 extend-filesystems[1431]: Found vda9
Jan 30 13:40:27.512404 extend-filesystems[1431]: Checking size of /dev/vda9
Jan 30 13:40:27.603048 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1271)
Jan 30 13:40:27.603130 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jan 30 13:40:27.603148 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jan 30 13:40:27.497472 dbus-daemon[1427]: [system] SELinux support is enabled
Jan 30 13:40:27.494943 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 13:40:27.603547 extend-filesystems[1431]: Resized partition /dev/vda9
Jan 30 13:40:27.607229 update_engine[1441]: I20250130 13:40:27.517742 1441 main.cc:92] Flatcar Update Engine starting
Jan 30 13:40:27.607229 update_engine[1441]: I20250130 13:40:27.538861 1441 update_check_scheduler.cc:74] Next update check in 11m17s
Jan 30 13:40:27.515510 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 13:40:27.609504 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024)
Jan 30 13:40:27.552645 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 13:40:27.615260 jq[1448]: true
Jan 30 13:40:27.615477 extend-filesystems[1453]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 30 13:40:27.615477 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 30 13:40:27.615477 extend-filesystems[1453]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jan 30 13:40:27.552872 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 13:40:27.623094 extend-filesystems[1431]: Resized filesystem in /dev/vda9
Jan 30 13:40:27.553202 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 13:40:27.553348 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 13:40:27.568270 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 13:40:27.627063 jq[1456]: true
Jan 30 13:40:27.568436 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 13:40:27.633519 tar[1455]: linux-amd64/helm
Jan 30 13:40:27.585758 systemd[1]: Started update-engine.service - Update Engine.
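extend-filesystems.service grew the root filesystem online here: the kernel lines show ext4 on /dev/vda9 resizing from 1617920 to 2014203 4k blocks, and resize2fs confirms the on-line resize. The equivalent manual step, sketched for this disk layout:

    # Grow the mounted ext4 filesystem to fill /dev/vda9; with no size
    # argument, resize2fs expands to the partition's full capacity online.
    resize2fs /dev/vda9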
Jan 30 13:40:27.586525 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 13:40:27.594430 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 13:40:27.594460 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 13:40:27.596190 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 13:40:27.596219 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 13:40:27.599414 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 13:40:27.614704 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 13:40:27.614963 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 13:40:27.697787 systemd-logind[1440]: New seat seat0.
Jan 30 13:40:27.701846 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 30 13:40:27.701871 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 30 13:40:27.703310 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 13:40:27.782794 bash[1485]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:40:27.786129 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 13:40:27.798326 systemd[1]: Starting sshkeys.service...
Jan 30 13:40:27.827229 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 30 13:40:27.844256 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 30 13:40:27.954717 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 13:40:27.967006 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 13:40:28.019592 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 13:40:28.031503 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 13:40:28.041969 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 13:40:28.042268 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 13:40:28.058134 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 13:40:28.073827 containerd[1462]: time="2025-01-30T13:40:28.073683484Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 30 13:40:28.088179 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 13:40:28.102397 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 13:40:28.117491 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 30 13:40:28.125226 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 13:40:28.137172 containerd[1462]: time="2025-01-30T13:40:28.137120620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:40:28.139037 containerd[1462]: time="2025-01-30T13:40:28.138981571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:40:28.139468 containerd[1462]: time="2025-01-30T13:40:28.139447424Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 13:40:28.139540 containerd[1462]: time="2025-01-30T13:40:28.139525962Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 13:40:28.140845 containerd[1462]: time="2025-01-30T13:40:28.139789967Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 13:40:28.140845 containerd[1462]: time="2025-01-30T13:40:28.139815344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 13:40:28.140845 containerd[1462]: time="2025-01-30T13:40:28.139885145Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:40:28.140845 containerd[1462]: time="2025-01-30T13:40:28.139900755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:40:28.140845 containerd[1462]: time="2025-01-30T13:40:28.140102162Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:40:28.140845 containerd[1462]: time="2025-01-30T13:40:28.140119805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 13:40:28.140845 containerd[1462]: time="2025-01-30T13:40:28.140137188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:40:28.140845 containerd[1462]: time="2025-01-30T13:40:28.140148700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 13:40:28.140845 containerd[1462]: time="2025-01-30T13:40:28.140231445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:40:28.140845 containerd[1462]: time="2025-01-30T13:40:28.140455625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:40:28.140845 containerd[1462]: time="2025-01-30T13:40:28.140553950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:40:28.141125 containerd[1462]: time="2025-01-30T13:40:28.140569419Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 13:40:28.141125 containerd[1462]: time="2025-01-30T13:40:28.140652484Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 13:40:28.141125 containerd[1462]: time="2025-01-30T13:40:28.140702458Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 13:40:28.158131 containerd[1462]: time="2025-01-30T13:40:28.158096671Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 13:40:28.158902 containerd[1462]: time="2025-01-30T13:40:28.158377698Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 13:40:28.158902 containerd[1462]: time="2025-01-30T13:40:28.158402485Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 13:40:28.158902 containerd[1462]: time="2025-01-30T13:40:28.158436919Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 13:40:28.158902 containerd[1462]: time="2025-01-30T13:40:28.158462778Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 13:40:28.158902 containerd[1462]: time="2025-01-30T13:40:28.158610775Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 13:40:28.158902 containerd[1462]: time="2025-01-30T13:40:28.158876894Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 13:40:28.159144 containerd[1462]: time="2025-01-30T13:40:28.159006968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 13:40:28.159144 containerd[1462]: time="2025-01-30T13:40:28.159028519Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 13:40:28.159144 containerd[1462]: time="2025-01-30T13:40:28.159046653Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 13:40:28.159144 containerd[1462]: time="2025-01-30T13:40:28.159062302Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 13:40:28.159144 containerd[1462]: time="2025-01-30T13:40:28.159078302Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 13:40:28.159144 containerd[1462]: time="2025-01-30T13:40:28.159093470Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 13:40:28.159144 containerd[1462]: time="2025-01-30T13:40:28.159110182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 13:40:28.159144 containerd[1462]: time="2025-01-30T13:40:28.159127244Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 13:40:28.159144 containerd[1462]: time="2025-01-30T13:40:28.159142402Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 13:40:28.159327 containerd[1462]: time="2025-01-30T13:40:28.159158052Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 13:40:28.159327 containerd[1462]: time="2025-01-30T13:40:28.159173521Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 13:40:28.159327 containerd[1462]: time="2025-01-30T13:40:28.159196864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159327 containerd[1462]: time="2025-01-30T13:40:28.159213916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159327 containerd[1462]: time="2025-01-30T13:40:28.159233553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159327 containerd[1462]: time="2025-01-30T13:40:28.159249884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159327 containerd[1462]: time="2025-01-30T13:40:28.159264571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159327 containerd[1462]: time="2025-01-30T13:40:28.159279960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159327 containerd[1462]: time="2025-01-30T13:40:28.159295079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159327 containerd[1462]: time="2025-01-30T13:40:28.159309666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159327 containerd[1462]: time="2025-01-30T13:40:28.159324514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159568 containerd[1462]: time="2025-01-30T13:40:28.159348148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159568 containerd[1462]: time="2025-01-30T13:40:28.159364719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159568 containerd[1462]: time="2025-01-30T13:40:28.159379287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159568 containerd[1462]: time="2025-01-30T13:40:28.159394425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159568 containerd[1462]: time="2025-01-30T13:40:28.159412389Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 13:40:28.159568 containerd[1462]: time="2025-01-30T13:40:28.159436915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159568 containerd[1462]: time="2025-01-30T13:40:28.159453826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 13:40:28.159568 containerd[1462]: time="2025-01-30T13:40:28.159476058Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 13:40:28.159568 containerd[1462]: time="2025-01-30T13:40:28.159526683Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 13:40:28.159568 containerd[1462]: time="2025-01-30T13:40:28.159547572Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:40:28.159568 containerd[1462]: time="2025-01-30T13:40:28.159559054Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:40:28.159787 containerd[1462]: time="2025-01-30T13:40:28.159572379Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:40:28.159787 containerd[1462]: time="2025-01-30T13:40:28.159586094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:40:28.159787 containerd[1462]: time="2025-01-30T13:40:28.159603567Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:40:28.159787 containerd[1462]: time="2025-01-30T13:40:28.159615219Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:40:28.159787 containerd[1462]: time="2025-01-30T13:40:28.159626911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:40:28.160373 containerd[1462]: time="2025-01-30T13:40:28.159961679Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:40:28.160373 containerd[1462]: time="2025-01-30T13:40:28.160025619Z" level=info msg="Connect containerd service" Jan 30 13:40:28.160373 containerd[1462]: time="2025-01-30T13:40:28.160061486Z" level=info msg="using legacy CRI server" Jan 30 13:40:28.160373 containerd[1462]: time="2025-01-30T13:40:28.160070052Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:40:28.160373 containerd[1462]: time="2025-01-30T13:40:28.160205686Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:40:28.160878 containerd[1462]: time="2025-01-30T13:40:28.160791675Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:40:28.162077 containerd[1462]: time="2025-01-30T13:40:28.161404795Z" level=info msg="Start subscribing containerd event" Jan 30 13:40:28.162077 containerd[1462]: time="2025-01-30T13:40:28.161497259Z" level=info msg="Start recovering state" Jan 30 13:40:28.162077 containerd[1462]: time="2025-01-30T13:40:28.161428760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:40:28.162077 containerd[1462]: time="2025-01-30T13:40:28.161692605Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:40:28.163109 containerd[1462]: time="2025-01-30T13:40:28.162257374Z" level=info msg="Start event monitor" Jan 30 13:40:28.163109 containerd[1462]: time="2025-01-30T13:40:28.162802336Z" level=info msg="Start snapshots syncer" Jan 30 13:40:28.163109 containerd[1462]: time="2025-01-30T13:40:28.162820841Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:40:28.163109 containerd[1462]: time="2025-01-30T13:40:28.162838665Z" level=info msg="Start streaming server" Jan 30 13:40:28.163779 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:40:28.166006 containerd[1462]: time="2025-01-30T13:40:28.165985937Z" level=info msg="containerd successfully booted in 0.094103s" Jan 30 13:40:28.313935 tar[1455]: linux-amd64/LICENSE Jan 30 13:40:28.315183 tar[1455]: linux-amd64/README.md Jan 30 13:40:28.326267 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:40:28.630349 systemd-networkd[1367]: eth0: Gained IPv6LL Jan 30 13:40:28.634041 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:40:28.639354 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:40:28.658183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:40:28.670685 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:40:28.713904 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:40:30.001480 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:40:30.016916 systemd[1]: Started sshd@0-172.24.4.90:22-172.24.4.1:48058.service - OpenSSH per-connection server daemon (172.24.4.1:48058). Jan 30 13:40:30.661306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
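The "cni config load failed" error above is expected at this stage: the CRI plugin's dumped config points it at NetworkPluginConfDir:/etc/cni/net.d, and nothing has installed a network config there yet. As a rough illustration of what would satisfy that loader, here is a minimal sketch in Python, assuming a plain bridge network with hypothetical names and subnet; on a real cluster the CNI provider (flannel, calico, etc.) writes this file itself:

    import json
    from pathlib import Path

    # Hypothetical example values; a real CNI provider generates this file.
    conf = {
        "cniVersion": "1.0.0",
        "name": "examplenet",
        "plugins": [
            {
                "type": "bridge",          # needs the 'bridge' binary in /opt/cni/bin
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",  # needs the 'host-local' binary
                    "subnet": "10.88.0.0/16",
                },
            }
        ],
    }

    path = Path("/etc/cni/net.d/10-examplenet.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2))
    print(f"wrote {path}")

Note that the dumped config also shows NetworkPluginBinDir:/opt/cni/bin and NetworkPluginMaxConfNum:1, so only the lexically first file in /etc/cni/net.d would be used; the "Start cni network conf syncer for default" line above is the watcher that picks such a file up without a containerd restart.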
Jan 30 13:40:30.661342 (kubelet)[1544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:40:31.004137 sshd[1536]: Accepted publickey for core from 172.24.4.1 port 48058 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:40:31.006893 sshd-session[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:40:31.017861 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 13:40:31.030064 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 30 13:40:31.043641 systemd-logind[1440]: New session 1 of user core.
Jan 30 13:40:31.053757 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 30 13:40:31.068795 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 13:40:31.080665 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 13:40:31.201111 systemd[1552]: Queued start job for default target default.target.
Jan 30 13:40:31.213263 systemd[1552]: Created slice app.slice - User Application Slice.
Jan 30 13:40:31.213290 systemd[1552]: Reached target paths.target - Paths.
Jan 30 13:40:31.213305 systemd[1552]: Reached target timers.target - Timers.
Jan 30 13:40:31.217093 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 13:40:31.226109 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 13:40:31.226931 systemd[1552]: Reached target sockets.target - Sockets.
Jan 30 13:40:31.227131 systemd[1552]: Reached target basic.target - Basic System.
Jan 30 13:40:31.227248 systemd[1552]: Reached target default.target - Main User Target.
Jan 30 13:40:31.227353 systemd[1552]: Startup finished in 140ms.
Jan 30 13:40:31.227502 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 13:40:31.239287 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 13:40:31.585247 kubelet[1544]: E0130 13:40:31.585142 1544 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:40:31.590823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:40:31.592191 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:40:31.593452 systemd[1]: kubelet.service: Consumed 1.857s CPU time.
Jan 30 13:40:31.734673 systemd[1]: Started sshd@1-172.24.4.90:22-172.24.4.1:48064.service - OpenSSH per-connection server daemon (172.24.4.1:48064).
Jan 30 13:40:33.134143 agetty[1517]: failed to open credentials directory
Jan 30 13:40:33.134334 agetty[1516]: failed to open credentials directory
Jan 30 13:40:33.152422 login[1516]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 30 13:40:33.169279 systemd-logind[1440]: New session 2 of user core.
Jan 30 13:40:33.174719 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 30 13:40:33.177432 login[1517]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 30 13:40:33.189428 systemd-logind[1440]: New session 3 of user core.
Jan 30 13:40:33.208351 systemd[1]: Started session-3.scope - Session 3 of User core.
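The kubelet failure above is the first of several identical ones in this log: the unit starts before /var/lib/kubelet/config.yaml exists, and on a kubeadm-style node that file only appears once kubeadm init/join has run. Purely to illustrate what the loader is looking for, here is a minimal sketch (field names are from the kubelet.config.k8s.io/v1beta1 schema; the values are assumptions, and creating the file by hand is no substitute for kubeadm generating it):

    from pathlib import Path

    # Minimal KubeletConfiguration sketch; kubeadm normally writes this file.
    config_yaml = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd   # matches CgroupDriver:systemd in the dump further below
    failSwapOn: false       # 'Swap is on' is logged later in this boot
    """

    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(config_yaml)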
Jan 30 13:40:33.836282 sshd[1566]: Accepted publickey for core from 172.24.4.1 port 48064 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww Jan 30 13:40:33.839079 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:40:33.850532 systemd-logind[1440]: New session 4 of user core. Jan 30 13:40:33.862497 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:40:34.478461 coreos-metadata[1426]: Jan 30 13:40:34.478 WARN failed to locate config-drive, using the metadata service API instead Jan 30 13:40:34.527443 coreos-metadata[1426]: Jan 30 13:40:34.527 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 30 13:40:34.573607 sshd[1594]: Connection closed by 172.24.4.1 port 48064 Jan 30 13:40:34.575268 sshd-session[1566]: pam_unix(sshd:session): session closed for user core Jan 30 13:40:34.587874 systemd[1]: sshd@1-172.24.4.90:22-172.24.4.1:48064.service: Deactivated successfully. Jan 30 13:40:34.591593 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:40:34.595909 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:40:34.603702 systemd[1]: Started sshd@2-172.24.4.90:22-172.24.4.1:44188.service - OpenSSH per-connection server daemon (172.24.4.1:44188). Jan 30 13:40:34.607661 systemd-logind[1440]: Removed session 4. Jan 30 13:40:34.712431 coreos-metadata[1426]: Jan 30 13:40:34.712 INFO Fetch successful Jan 30 13:40:34.712660 coreos-metadata[1426]: Jan 30 13:40:34.712 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 13:40:34.726936 coreos-metadata[1426]: Jan 30 13:40:34.726 INFO Fetch successful Jan 30 13:40:34.726936 coreos-metadata[1426]: Jan 30 13:40:34.726 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 30 13:40:34.741635 coreos-metadata[1426]: Jan 30 13:40:34.741 INFO Fetch successful Jan 30 13:40:34.741635 coreos-metadata[1426]: Jan 30 13:40:34.741 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 30 13:40:34.758225 coreos-metadata[1426]: Jan 30 13:40:34.758 INFO Fetch successful Jan 30 13:40:34.758225 coreos-metadata[1426]: Jan 30 13:40:34.758 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 30 13:40:34.772118 coreos-metadata[1426]: Jan 30 13:40:34.772 INFO Fetch successful Jan 30 13:40:34.772118 coreos-metadata[1426]: Jan 30 13:40:34.772 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 30 13:40:34.787550 coreos-metadata[1426]: Jan 30 13:40:34.787 INFO Fetch successful Jan 30 13:40:34.835293 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:40:34.837779 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
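The coreos-metadata records above show the agent's fallback path: no config-drive, so it walks the link-local metadata service at 169.254.169.254 one attribute per request, logging "Attempt #N" for each. A minimal sketch of the same probing pattern in Python (endpoint paths copied from the log; the retry count and backoff are assumptions, not the agent's actual policy):

    import time
    import urllib.request

    BASE = "http://169.254.169.254"
    PATHS = [
        "/openstack/2012-08-10/meta_data.json",
        "/latest/meta-data/hostname",
        "/latest/meta-data/instance-id",
        "/latest/meta-data/instance-type",
        "/latest/meta-data/local-ipv4",
        "/latest/meta-data/public-ipv4",
    ]

    def fetch(path: str, attempts: int = 3, timeout: float = 2.0) -> str:
        """Fetch one metadata attribute, retrying like the agent's 'Attempt #N'."""
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(BASE + path, timeout=timeout) as resp:
                    return resp.read().decode()
            except OSError as exc:
                print(f"Fetching {BASE}{path}: Attempt #{attempt} failed: {exc}")
                time.sleep(attempt)  # assumed backoff
        raise RuntimeError(f"giving up on {path}")

    for p in PATHS:
        print(p, "->", fetch(p)[:60])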
Jan 30 13:40:34.977355 coreos-metadata[1491]: Jan 30 13:40:34.977 WARN failed to locate config-drive, using the metadata service API instead
Jan 30 13:40:35.021358 coreos-metadata[1491]: Jan 30 13:40:35.021 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 30 13:40:35.037404 coreos-metadata[1491]: Jan 30 13:40:35.037 INFO Fetch successful
Jan 30 13:40:35.037404 coreos-metadata[1491]: Jan 30 13:40:35.037 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 30 13:40:35.051162 coreos-metadata[1491]: Jan 30 13:40:35.050 INFO Fetch successful
Jan 30 13:40:35.056243 unknown[1491]: wrote ssh authorized keys file for user: core
Jan 30 13:40:35.102003 update-ssh-keys[1610]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:40:35.103164 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 30 13:40:35.106215 systemd[1]: Finished sshkeys.service.
Jan 30 13:40:35.112655 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 13:40:35.113228 systemd[1]: Startup finished in 1.241s (kernel) + 15.434s (initrd) + 11.217s (userspace) = 27.893s.
Jan 30 13:40:35.825835 sshd[1601]: Accepted publickey for core from 172.24.4.1 port 44188 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:40:35.828587 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:40:35.838115 systemd-logind[1440]: New session 5 of user core.
Jan 30 13:40:35.847242 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 13:40:36.417533 sshd[1614]: Connection closed by 172.24.4.1 port 44188
Jan 30 13:40:36.417348 sshd-session[1601]: pam_unix(sshd:session): session closed for user core
Jan 30 13:40:36.424408 systemd[1]: sshd@2-172.24.4.90:22-172.24.4.1:44188.service: Deactivated successfully.
Jan 30 13:40:36.428190 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 13:40:36.432324 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit.
Jan 30 13:40:36.434634 systemd-logind[1440]: Removed session 5.
Jan 30 13:40:41.776302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:40:41.785351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:40:42.131244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:40:42.142847 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:40:42.253926 kubelet[1626]: E0130 13:40:42.253745 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:40:42.257325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:40:42.257646 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:40:46.438503 systemd[1]: Started sshd@3-172.24.4.90:22-172.24.4.1:60702.service - OpenSSH per-connection server daemon (172.24.4.1:60702).
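The second metadata-agent instance ([1491]) above fetches only the public-keys attributes and rewrites the core user's authorized_keys file. A sketch of that final step, assuming Flatcar's conventional file layout for the core user (endpoint copied from the log):

    import os
    import urllib.request
    from pathlib import Path

    url = "http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key"
    with urllib.request.urlopen(url, timeout=2) as resp:
        key = resp.read().decode().strip()

    ssh_dir = Path("/home/core/.ssh")
    ssh_dir.mkdir(mode=0o700, exist_ok=True)
    auth = ssh_dir / "authorized_keys"
    auth.write_text(key + "\n")
    os.chmod(auth, 0o600)  # sshd rejects group/world-writable key files
    print(f'Updated "{auth}"')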
Jan 30 13:40:47.755099 sshd[1635]: Accepted publickey for core from 172.24.4.1 port 60702 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:40:47.757793 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:40:47.770422 systemd-logind[1440]: New session 6 of user core.
Jan 30 13:40:47.779676 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 13:40:48.550003 sshd[1637]: Connection closed by 172.24.4.1 port 60702
Jan 30 13:40:48.550395 sshd-session[1635]: pam_unix(sshd:session): session closed for user core
Jan 30 13:40:48.561723 systemd[1]: sshd@3-172.24.4.90:22-172.24.4.1:60702.service: Deactivated successfully.
Jan 30 13:40:48.565214 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 13:40:48.567235 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit.
Jan 30 13:40:48.588908 systemd[1]: Started sshd@4-172.24.4.90:22-172.24.4.1:60708.service - OpenSSH per-connection server daemon (172.24.4.1:60708).
Jan 30 13:40:48.591360 systemd-logind[1440]: Removed session 6.
Jan 30 13:40:50.092391 sshd[1642]: Accepted publickey for core from 172.24.4.1 port 60708 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:40:50.095592 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:40:50.109377 systemd-logind[1440]: New session 7 of user core.
Jan 30 13:40:50.118426 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 13:40:50.832533 sshd[1644]: Connection closed by 172.24.4.1 port 60708
Jan 30 13:40:50.833704 sshd-session[1642]: pam_unix(sshd:session): session closed for user core
Jan 30 13:40:50.846786 systemd[1]: sshd@4-172.24.4.90:22-172.24.4.1:60708.service: Deactivated successfully.
Jan 30 13:40:50.850422 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 13:40:50.854711 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit.
Jan 30 13:40:50.864631 systemd[1]: Started sshd@5-172.24.4.90:22-172.24.4.1:60724.service - OpenSSH per-connection server daemon (172.24.4.1:60724).
Jan 30 13:40:50.867709 systemd-logind[1440]: Removed session 7.
Jan 30 13:40:52.276218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 30 13:40:52.284341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:40:52.331101 sshd[1649]: Accepted publickey for core from 172.24.4.1 port 60724 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:40:52.332915 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:40:52.350180 systemd-logind[1440]: New session 8 of user core.
Jan 30 13:40:52.355246 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 13:40:52.635259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
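kubelet.service is now in a systemd-managed crash loop: each failed start is followed by a "Scheduled restart job" roughly ten seconds later ("restart counter is at 1", then 2, and 3 and 4 further below), which matches a Restart=/RestartSec=10-style unit (an inference from the timing, not read from the unit file). A quick way to inspect such a loop, sketched in Python; the property names passed to systemctl show are real systemd ones, the expected values are illustrative:

    import subprocess

    props = "NRestarts,Result,ExecMainStatus,Restart,RestartUSec"
    out = subprocess.run(
        ["systemctl", "show", "kubelet.service", "-p", props],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)
    # Expected while the config file is still missing (values illustrative):
    #   NRestarts=2
    #   Result=exit-code
    #   ExecMainStatus=1
    #   Restart=always        (or on-failure, depending on the shipped unit)
    #   RestartUSec=10s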
Jan 30 13:40:52.646545 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:40:52.752186 kubelet[1660]: E0130 13:40:52.752031 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:40:52.756704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:40:52.756865 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:40:52.979451 sshd[1654]: Connection closed by 172.24.4.1 port 60724
Jan 30 13:40:52.981265 sshd-session[1649]: pam_unix(sshd:session): session closed for user core
Jan 30 13:40:52.991847 systemd[1]: sshd@5-172.24.4.90:22-172.24.4.1:60724.service: Deactivated successfully.
Jan 30 13:40:52.995112 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 13:40:52.996903 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit.
Jan 30 13:40:53.004528 systemd[1]: Started sshd@6-172.24.4.90:22-172.24.4.1:60740.service - OpenSSH per-connection server daemon (172.24.4.1:60740).
Jan 30 13:40:53.007169 systemd-logind[1440]: Removed session 8.
Jan 30 13:40:54.000991 sshd[1672]: Accepted publickey for core from 172.24.4.1 port 60740 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:40:54.003931 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:40:54.020092 systemd-logind[1440]: New session 9 of user core.
Jan 30 13:40:54.039402 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 13:40:54.453560 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 13:40:54.454295 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:40:54.477703 sudo[1675]: pam_unix(sudo:session): session closed for user root
Jan 30 13:40:54.651014 sshd[1674]: Connection closed by 172.24.4.1 port 60740
Jan 30 13:40:54.651469 sshd-session[1672]: pam_unix(sshd:session): session closed for user core
Jan 30 13:40:54.672772 systemd[1]: sshd@6-172.24.4.90:22-172.24.4.1:60740.service: Deactivated successfully.
Jan 30 13:40:54.677254 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 13:40:54.680286 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit.
Jan 30 13:40:54.687511 systemd[1]: Started sshd@7-172.24.4.90:22-172.24.4.1:50948.service - OpenSSH per-connection server daemon (172.24.4.1:50948).
Jan 30 13:40:54.690563 systemd-logind[1440]: Removed session 9.
Jan 30 13:40:55.706284 sshd[1680]: Accepted publickey for core from 172.24.4.1 port 50948 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:40:55.709248 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:40:55.721480 systemd-logind[1440]: New session 10 of user core.
Jan 30 13:40:55.729241 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 13:40:56.190796 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 13:40:56.191537 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:40:56.201263 sudo[1684]: pam_unix(sudo:session): session closed for user root
Jan 30 13:40:56.215182 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 30 13:40:56.215927 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:40:56.249705 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:40:56.309126 augenrules[1706]: No rules
Jan 30 13:40:56.310420 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:40:56.310773 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:40:56.314010 sudo[1683]: pam_unix(sudo:session): session closed for user root
Jan 30 13:40:56.457314 sshd[1682]: Connection closed by 172.24.4.1 port 50948
Jan 30 13:40:56.459350 sshd-session[1680]: pam_unix(sshd:session): session closed for user core
Jan 30 13:40:56.471147 systemd[1]: sshd@7-172.24.4.90:22-172.24.4.1:50948.service: Deactivated successfully.
Jan 30 13:40:56.474782 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 13:40:56.479266 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit.
Jan 30 13:40:56.487503 systemd[1]: Started sshd@8-172.24.4.90:22-172.24.4.1:50952.service - OpenSSH per-connection server daemon (172.24.4.1:50952).
Jan 30 13:40:56.490729 systemd-logind[1440]: Removed session 10.
Jan 30 13:40:57.647473 sshd[1714]: Accepted publickey for core from 172.24.4.1 port 50952 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:40:57.649620 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:40:57.656615 systemd-logind[1440]: New session 11 of user core.
Jan 30 13:40:57.667310 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 13:40:58.125736 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 13:40:58.126523 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:40:58.861387 (dockerd)[1736]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 30 13:40:58.861479 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 30 13:40:59.377293 dockerd[1736]: time="2025-01-30T13:40:59.377204269Z" level=info msg="Starting up"
Jan 30 13:40:59.536540 systemd[1]: var-lib-docker-metacopy\x2dcheck998049717-merged.mount: Deactivated successfully.
Jan 30 13:40:59.586036 dockerd[1736]: time="2025-01-30T13:40:59.585869561Z" level=info msg="Loading containers: start."
Jan 30 13:40:59.738997 kernel: Initializing XFRM netlink socket
Jan 30 13:40:59.831080 systemd-networkd[1367]: docker0: Link UP
Jan 30 13:40:59.875306 dockerd[1736]: time="2025-01-30T13:40:59.875131862Z" level=info msg="Loading containers: done."
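The audit-rules sequence a few records up is worth unpacking: the sudo session first deleted the shipped rule files (80-selinux.rules, 99-default.rules) and then restarted audit-rules.service, so augenrules compiled an empty set ("No rules"). augenrules works by concatenating everything under /etc/audit/rules.d/ and loading the result. A sketch of putting a rule back, with a hypothetical file name and watch target (requires root and the audit userspace tools):

    import subprocess
    from pathlib import Path

    # Hypothetical rule: watch writes/attribute changes to /etc/ssh/sshd_config.
    rule = "-w /etc/ssh/sshd_config -p wa -k sshd_config\n"
    Path("/etc/audit/rules.d/90-sshd.rules").write_text(rule)

    # augenrules merges /etc/audit/rules.d/*.rules and loads the result;
    # this is what audit-rules.service drives under the hood.
    subprocess.run(["augenrules", "--load"], check=True)
    subprocess.run(["auditctl", "-l"], check=True)  # list the loaded rules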
Jan 30 13:40:59.902273 dockerd[1736]: time="2025-01-30T13:40:59.901807172Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 30 13:40:59.902273 dockerd[1736]: time="2025-01-30T13:40:59.901966942Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 30 13:40:59.902273 dockerd[1736]: time="2025-01-30T13:40:59.902077880Z" level=info msg="Daemon has completed initialization"
Jan 30 13:40:59.901888 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck447517079-merged.mount: Deactivated successfully.
Jan 30 13:40:59.955054 dockerd[1736]: time="2025-01-30T13:40:59.954737593Z" level=info msg="API listen on /run/docker.sock"
Jan 30 13:40:59.955596 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 30 13:41:01.609068 containerd[1462]: time="2025-01-30T13:41:01.607571870Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\""
Jan 30 13:41:02.365353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3763388574.mount: Deactivated successfully.
Jan 30 13:41:02.776042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 30 13:41:02.786128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:41:02.924219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:41:02.924844 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:41:03.032882 kubelet[1975]: E0130 13:41:03.032355 1975 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:41:03.036101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:41:03.036346 systemd[1]: kubelet.service: Failed with result 'exit-code'.
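The dockerd warning above means the daemon chose the overlay2 storage driver but disabled "native diff": with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, naive directory diffs can be incorrect, so docker falls back to a slower comparison when building images; running containers are unaffected. A quick way to confirm which driver ended up in use, sketched (docker CLI must be installed and the daemon running, as the "API listen on /run/docker.sock" record indicates):

    import subprocess

    driver = subprocess.run(
        ["docker", "info", "--format", "{{.Driver}}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("storage driver:", driver)  # expected here: overlay2

    # The warning only affects 'docker build' diff performance; silencing it
    # would take a kernel without CONFIG_OVERLAY_FS_REDIRECT_DIR (an
    # observation about the message, not a recommendation).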
Jan 30 13:41:04.220077 containerd[1462]: time="2025-01-30T13:41:04.219425022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:04.224368 containerd[1462]: time="2025-01-30T13:41:04.222824730Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976729"
Jan 30 13:41:04.225287 containerd[1462]: time="2025-01-30T13:41:04.225222048Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:04.232686 containerd[1462]: time="2025-01-30T13:41:04.232631380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:04.234803 containerd[1462]: time="2025-01-30T13:41:04.234497591Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.626853526s"
Jan 30 13:41:04.234803 containerd[1462]: time="2025-01-30T13:41:04.234537546Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\""
Jan 30 13:41:04.244258 containerd[1462]: time="2025-01-30T13:41:04.244166961Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\""
Jan 30 13:41:06.317362 containerd[1462]: time="2025-01-30T13:41:06.317293481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:06.318722 containerd[1462]: time="2025-01-30T13:41:06.318674542Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701151"
Jan 30 13:41:06.319889 containerd[1462]: time="2025-01-30T13:41:06.319830931Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:06.324658 containerd[1462]: time="2025-01-30T13:41:06.324574130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:06.326584 containerd[1462]: time="2025-01-30T13:41:06.325932389Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 2.081711296s"
Jan 30 13:41:06.326584 containerd[1462]: time="2025-01-30T13:41:06.325988514Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\""
Jan 30 13:41:06.330801 containerd[1462]: time="2025-01-30T13:41:06.330692919Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\""
Jan 30 13:41:08.168005 containerd[1462]: time="2025-01-30T13:41:08.167690297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:08.169310 containerd[1462]: time="2025-01-30T13:41:08.169276814Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652061"
Jan 30 13:41:08.170446 containerd[1462]: time="2025-01-30T13:41:08.170422173Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:08.174497 containerd[1462]: time="2025-01-30T13:41:08.174442755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:08.176201 containerd[1462]: time="2025-01-30T13:41:08.176049719Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.845254818s"
Jan 30 13:41:08.176201 containerd[1462]: time="2025-01-30T13:41:08.176081369Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\""
Jan 30 13:41:08.176791 containerd[1462]: time="2025-01-30T13:41:08.176716801Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 30 13:41:09.554612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1556275336.mount: Deactivated successfully.
Jan 30 13:41:10.075986 containerd[1462]: time="2025-01-30T13:41:10.075879154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:10.077248 containerd[1462]: time="2025-01-30T13:41:10.077186286Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231136"
Jan 30 13:41:10.078577 containerd[1462]: time="2025-01-30T13:41:10.078529376Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:10.081184 containerd[1462]: time="2025-01-30T13:41:10.081158828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:10.081913 containerd[1462]: time="2025-01-30T13:41:10.081836780Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.905090773s"
Jan 30 13:41:10.083968 containerd[1462]: time="2025-01-30T13:41:10.081881744Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\""
Jan 30 13:41:10.086115 containerd[1462]: time="2025-01-30T13:41:10.086078627Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 30 13:41:10.835820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734596793.mount: Deactivated successfully.
Jan 30 13:41:12.228980 containerd[1462]: time="2025-01-30T13:41:12.228751211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:12.232579 containerd[1462]: time="2025-01-30T13:41:12.232453838Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jan 30 13:41:12.234288 containerd[1462]: time="2025-01-30T13:41:12.234144239Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:12.239554 containerd[1462]: time="2025-01-30T13:41:12.239296354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:12.240826 containerd[1462]: time="2025-01-30T13:41:12.240620958Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.154400865s"
Jan 30 13:41:12.240826 containerd[1462]: time="2025-01-30T13:41:12.240666834Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 30 13:41:12.241412 containerd[1462]: time="2025-01-30T13:41:12.241286546Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 30 13:41:12.825677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2468051043.mount: Deactivated successfully.
Jan 30 13:41:12.837875 containerd[1462]: time="2025-01-30T13:41:12.837699673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:12.840010 containerd[1462]: time="2025-01-30T13:41:12.839848854Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jan 30 13:41:12.841436 containerd[1462]: time="2025-01-30T13:41:12.841334661Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:12.848755 containerd[1462]: time="2025-01-30T13:41:12.848605370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:12.851810 containerd[1462]: time="2025-01-30T13:41:12.850352137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 609.010487ms"
Jan 30 13:41:12.851810 containerd[1462]: time="2025-01-30T13:41:12.850425415Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 30 13:41:12.853279 containerd[1462]: time="2025-01-30T13:41:12.853146780Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 30 13:41:13.234929 update_engine[1441]: I20250130 13:41:13.234653 1441 update_attempter.cc:509] Updating boot flags...
Jan 30 13:41:13.253466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 30 13:41:13.263761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:41:13.327465 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2068)
Jan 30 13:41:13.564555 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2072)
Jan 30 13:41:13.773825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:41:13.781540 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:41:13.839900 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2072)
Jan 30 13:41:13.901417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount544011405.mount: Deactivated successfully.
Jan 30 13:41:13.920473 kubelet[2080]: E0130 13:41:13.920411 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:41:13.922946 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:41:13.923129 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:41:16.609260 containerd[1462]: time="2025-01-30T13:41:16.608544257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:16.616019 containerd[1462]: time="2025-01-30T13:41:16.615891960Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981"
Jan 30 13:41:16.730489 containerd[1462]: time="2025-01-30T13:41:16.730157411Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:16.757239 containerd[1462]: time="2025-01-30T13:41:16.757132071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:16.761751 containerd[1462]: time="2025-01-30T13:41:16.761463025Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.908011644s"
Jan 30 13:41:16.761751 containerd[1462]: time="2025-01-30T13:41:16.761536754Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jan 30 13:41:21.130847 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:41:21.153592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:41:21.224585 systemd[1]: Reloading requested from client PID 2169 ('systemctl') (unit session-11.scope)...
Jan 30 13:41:21.224602 systemd[1]: Reloading...
Jan 30 13:41:21.318997 zram_generator::config[2208]: No configuration found.
Jan 30 13:41:21.613469 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:41:21.696579 systemd[1]: Reloading finished in 471 ms.
Jan 30 13:41:21.740111 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 30 13:41:21.740187 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 30 13:41:21.740453 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:41:21.746018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:41:21.834883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:41:21.845370 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:41:21.887933 kubelet[2273]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:41:21.887933 kubelet[2273]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
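The daemon-reload above (triggered by session-11's install.sh) also surfaced the legacy-path warning for docker.socket: its ListenStream= still points below /var/run/, which systemd transparently rewrites to /run/docker.sock. A sketch of silencing the warning with a standard drop-in override, under the assumption that the shipped unit is otherwise left alone (the drop-in file name is hypothetical):

    from pathlib import Path

    # systemd already rewrites the path at runtime; this override only makes
    # the unit say /run/docker.sock explicitly. The empty ListenStream= line
    # clears the value inherited from the shipped unit before re-setting it.
    dropin = Path("/etc/systemd/system/docker.socket.d/10-runpath.conf")
    dropin.parent.mkdir(parents=True, exist_ok=True)
    dropin.write_text(
        "[Socket]\n"
        "ListenStream=\n"
        "ListenStream=/run/docker.sock\n"
    )
    # Followed by: systemctl daemon-reload && systemctl restart docker.socket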
Jan 30 13:41:21.887933 kubelet[2273]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:41:22.086731 kubelet[2273]: I0130 13:41:22.086569 2273 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:41:22.423492 kubelet[2273]: I0130 13:41:22.423425 2273 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 30 13:41:22.423492 kubelet[2273]: I0130 13:41:22.423459 2273 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:41:22.423762 kubelet[2273]: I0130 13:41:22.423696 2273 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 30 13:41:22.460008 kubelet[2273]: I0130 13:41:22.458903 2273 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:41:22.460601 kubelet[2273]: E0130 13:41:22.460549 2273 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.90:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:41:22.475212 kubelet[2273]: E0130 13:41:22.475158 2273 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 13:41:22.475562 kubelet[2273]: I0130 13:41:22.475534 2273 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 13:41:22.486641 kubelet[2273]: I0130 13:41:22.486605 2273 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:41:22.492697 kubelet[2273]: I0130 13:41:22.492663 2273 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 13:41:22.493288 kubelet[2273]: I0130 13:41:22.493225 2273 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:41:22.493881 kubelet[2273]: I0130 13:41:22.493430 2273 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-f-d1cd2b53be.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 13:41:22.494239 kubelet[2273]: I0130 13:41:22.494199 2273 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:41:22.494372 kubelet[2273]: I0130 13:41:22.494355 2273 container_manager_linux.go:300] "Creating device plugin manager"
Jan 30 13:41:22.494928 kubelet[2273]: I0130 13:41:22.494646 2273 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:41:22.500330 kubelet[2273]: I0130 13:41:22.499902 2273 kubelet.go:408] "Attempting to sync node with API server"
Jan 30 13:41:22.500330 kubelet[2273]: I0130 13:41:22.499980 2273 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:41:22.500330 kubelet[2273]: I0130 13:41:22.500046 2273 kubelet.go:314] "Adding apiserver pod source"
Jan 30 13:41:22.500330 kubelet[2273]: I0130 13:41:22.500082 2273 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:41:22.523988 kubelet[2273]: W0130 13:41:22.523327 2273 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-f-d1cd2b53be.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.90:6443: connect: connection refused
Jan 30 13:41:22.523988 kubelet[2273]: E0130 13:41:22.523513 2273 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-f-d1cd2b53be.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.90:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:41:22.526076 kubelet[2273]: I0130 13:41:22.525133 2273 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 30 13:41:22.541506 kubelet[2273]: I0130 13:41:22.540636 2273 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:41:22.542124 kubelet[2273]: W0130 13:41:22.541834 2273 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.90:6443: connect: connection refused
Jan 30 13:41:22.542124 kubelet[2273]: E0130 13:41:22.542004 2273 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.90:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:41:22.551420 kubelet[2273]: W0130 13:41:22.551332 2273 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 13:41:22.553223 kubelet[2273]: I0130 13:41:22.552675 2273 server.go:1269] "Started kubelet"
Jan 30 13:41:22.555417 kubelet[2273]: I0130 13:41:22.555360 2273 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:41:22.567871 kubelet[2273]: I0130 13:41:22.567131 2273 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:41:22.569325 kubelet[2273]: I0130 13:41:22.569285 2273 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 13:41:22.571289 kubelet[2273]: I0130 13:41:22.571173 2273 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:41:22.571614 kubelet[2273]: I0130 13:41:22.571578 2273 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:41:22.572097 kubelet[2273]: I0130 13:41:22.572058 2273 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 13:41:22.575296 kubelet[2273]: I0130 13:41:22.575246 2273 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 13:41:22.575671 kubelet[2273]: E0130 13:41:22.575611 2273 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" not found"
Jan 30 13:41:22.591593 kubelet[2273]: I0130 13:41:22.591345 2273 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 13:41:22.591593 kubelet[2273]: I0130 13:41:22.591470 2273 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:41:22.616221 kubelet[2273]: W0130 13:41:22.616064 2273 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.90:6443: connect: connection refused
Jan 30 13:41:22.617025 kubelet[2273]: E0130 13:41:22.616528 2273 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.90:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:41:22.617025 kubelet[2273]: E0130 13:41:22.616690 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-f-d1cd2b53be.novalocal?timeout=10s\": dial tcp 172.24.4.90:6443: connect: connection refused" interval="200ms"
Jan 30 13:41:22.618738 kubelet[2273]: I0130 13:41:22.618678 2273 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:41:22.619179 kubelet[2273]: I0130 13:41:22.619074 2273 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:41:22.622685 kubelet[2273]: I0130 13:41:22.622551 2273 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:41:22.679157 kubelet[2273]: E0130 13:41:22.678868 2273 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" not found"
Jan 30 13:41:22.697135 kubelet[2273]: I0130 13:41:22.697066 2273 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:41:22.702218 kubelet[2273]: I0130 13:41:22.702091 2273 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:41:22.702411 kubelet[2273]: I0130 13:41:22.702254 2273 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:41:22.702411 kubelet[2273]: I0130 13:41:22.702292 2273 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 30 13:41:22.702539 kubelet[2273]: E0130 13:41:22.702398 2273 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:41:22.714415 kubelet[2273]: W0130 13:41:22.714147 2273 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.90:6443: connect: connection refused
Jan 30 13:41:22.714415 kubelet[2273]: E0130 13:41:22.714267 2273 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.90:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:41:22.744579 kubelet[2273]: I0130 13:41:22.744512 2273 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:41:22.744579 kubelet[2273]: I0130 13:41:22.744534 2273 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:41:22.744579 kubelet[2273]: I0130 13:41:22.744548 2273 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:41:22.779345 kubelet[2273]: E0130 13:41:22.779214 2273 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" not found"
Jan 30 13:41:22.807919 kubelet[2273]: E0130 13:41:22.802898 2273 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have
completed yet" Jan 30 13:41:22.818007 kubelet[2273]: E0130 13:41:22.817853 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-f-d1cd2b53be.novalocal?timeout=10s\": dial tcp 172.24.4.90:6443: connect: connection refused" interval="400ms" Jan 30 13:41:22.844999 kubelet[2273]: E0130 13:41:22.761143 2273 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.90:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.90:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-f-d1cd2b53be.novalocal.181f7c25b354f484 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-f-d1cd2b53be.novalocal,UID:ci-4186-1-0-f-d1cd2b53be.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-f-d1cd2b53be.novalocal,},FirstTimestamp:2025-01-30 13:41:22.55262426 +0000 UTC m=+0.704073837,LastTimestamp:2025-01-30 13:41:22.55262426 +0000 UTC m=+0.704073837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-f-d1cd2b53be.novalocal,}" Jan 30 13:41:22.880322 kubelet[2273]: E0130 13:41:22.880260 2273 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" not found" Jan 30 13:41:22.981396 kubelet[2273]: E0130 13:41:22.981250 2273 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" not found" Jan 30 13:41:23.003736 kubelet[2273]: E0130 13:41:23.003651 2273 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:41:23.042580 kubelet[2273]: I0130 13:41:23.042318 2273 policy_none.go:49] "None policy: Start" Jan 30 13:41:23.044238 kubelet[2273]: I0130 13:41:23.044091 2273 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:41:23.044238 kubelet[2273]: I0130 13:41:23.044149 2273 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:41:23.081525 kubelet[2273]: E0130 13:41:23.081447 2273 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" not found" Jan 30 13:41:23.182453 kubelet[2273]: E0130 13:41:23.182358 2273 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" not found" Jan 30 13:41:23.218944 kubelet[2273]: E0130 13:41:23.218825 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-f-d1cd2b53be.novalocal?timeout=10s\": dial tcp 172.24.4.90:6443: connect: connection refused" interval="800ms" Jan 30 13:41:23.282915 kubelet[2273]: E0130 13:41:23.282832 2273 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" not found" Jan 30 13:41:23.384086 kubelet[2273]: E0130 13:41:23.384019 2273 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" not found" Jan 30 13:41:23.404882 kubelet[2273]: E0130 13:41:23.404834 2273 kubelet.go:2345] "Skipping pod synchronization" 
err="container runtime status check may not have completed yet" Jan 30 13:41:23.423250 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:41:23.440677 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:41:23.448217 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:41:23.463055 kubelet[2273]: I0130 13:41:23.463002 2273 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:41:23.463515 kubelet[2273]: I0130 13:41:23.463326 2273 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:41:23.463515 kubelet[2273]: I0130 13:41:23.463363 2273 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:41:23.465037 kubelet[2273]: I0130 13:41:23.464297 2273 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:41:23.468684 kubelet[2273]: E0130 13:41:23.468469 2273 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" not found" Jan 30 13:41:23.564881 kubelet[2273]: W0130 13:41:23.564618 2273 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.90:6443: connect: connection refused Jan 30 13:41:23.564881 kubelet[2273]: E0130 13:41:23.564733 2273 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.90:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:41:23.569195 kubelet[2273]: I0130 13:41:23.568600 2273 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:23.569301 kubelet[2273]: E0130 13:41:23.569250 2273 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.90:6443/api/v1/nodes\": dial tcp 172.24.4.90:6443: connect: connection refused" node="ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:23.773326 kubelet[2273]: I0130 13:41:23.773289 2273 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:23.774203 kubelet[2273]: E0130 13:41:23.774118 2273 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.90:6443/api/v1/nodes\": dial tcp 172.24.4.90:6443: connect: connection refused" node="ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:23.790223 kubelet[2273]: W0130 13:41:23.790098 2273 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-f-d1cd2b53be.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.90:6443: connect: connection refused Jan 30 13:41:23.790371 kubelet[2273]: E0130 13:41:23.790226 2273 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.24.4.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-f-d1cd2b53be.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.90:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:41:23.878435 kubelet[2273]: W0130 13:41:23.878274 2273 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.90:6443: connect: connection refused Jan 30 13:41:23.878435 kubelet[2273]: E0130 13:41:23.878357 2273 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.90:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:41:23.979597 kubelet[2273]: W0130 13:41:23.979418 2273 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.90:6443: connect: connection refused Jan 30 13:41:23.979597 kubelet[2273]: E0130 13:41:23.979536 2273 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.90:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:41:24.020320 kubelet[2273]: E0130 13:41:24.020245 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-f-d1cd2b53be.novalocal?timeout=10s\": dial tcp 172.24.4.90:6443: connect: connection refused" interval="1.6s" Jan 30 13:41:24.178158 kubelet[2273]: I0130 13:41:24.177396 2273 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:24.178158 kubelet[2273]: E0130 13:41:24.178001 2273 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.90:6443/api/v1/nodes\": dial tcp 172.24.4.90:6443: connect: connection refused" node="ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:24.227442 systemd[1]: Created slice kubepods-burstable-podc7ea6b1c59fd62cfed53c023e042a91d.slice - libcontainer container kubepods-burstable-podc7ea6b1c59fd62cfed53c023e042a91d.slice. Jan 30 13:41:24.256452 systemd[1]: Created slice kubepods-burstable-pod7118de20152d5b9b77a7c166b2ed7fd1.slice - libcontainer container kubepods-burstable-pod7118de20152d5b9b77a7c166b2ed7fd1.slice. Jan 30 13:41:24.279232 systemd[1]: Created slice kubepods-burstable-pod1d4b30f7dcb9ba5ee362124520d8e16e.slice - libcontainer container kubepods-burstable-pod1d4b30f7dcb9ba5ee362124520d8e16e.slice. 
Jan 30 13:41:24.302677 kubelet[2273]: I0130 13:41:24.302572 2273 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7118de20152d5b9b77a7c166b2ed7fd1-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"7118de20152d5b9b77a7c166b2ed7fd1\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:24.303065 kubelet[2273]: I0130 13:41:24.302729 2273 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7118de20152d5b9b77a7c166b2ed7fd1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"7118de20152d5b9b77a7c166b2ed7fd1\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:24.303065 kubelet[2273]: I0130 13:41:24.302860 2273 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7ea6b1c59fd62cfed53c023e042a91d-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"c7ea6b1c59fd62cfed53c023e042a91d\") " pod="kube-system/kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:24.303065 kubelet[2273]: I0130 13:41:24.302910 2273 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7ea6b1c59fd62cfed53c023e042a91d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"c7ea6b1c59fd62cfed53c023e042a91d\") " pod="kube-system/kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:24.303065 kubelet[2273]: I0130 13:41:24.302994 2273 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7ea6b1c59fd62cfed53c023e042a91d-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"c7ea6b1c59fd62cfed53c023e042a91d\") " pod="kube-system/kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:24.303345 kubelet[2273]: I0130 13:41:24.303040 2273 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7118de20152d5b9b77a7c166b2ed7fd1-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"7118de20152d5b9b77a7c166b2ed7fd1\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:24.303345 kubelet[2273]: I0130 13:41:24.303085 2273 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7118de20152d5b9b77a7c166b2ed7fd1-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"7118de20152d5b9b77a7c166b2ed7fd1\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:24.303345 kubelet[2273]: I0130 13:41:24.303133 2273 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7118de20152d5b9b77a7c166b2ed7fd1-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"7118de20152d5b9b77a7c166b2ed7fd1\") " 
pod="kube-system/kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:24.303345 kubelet[2273]: I0130 13:41:24.303176 2273 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d4b30f7dcb9ba5ee362124520d8e16e-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"1d4b30f7dcb9ba5ee362124520d8e16e\") " pod="kube-system/kube-scheduler-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:24.527467 kubelet[2273]: E0130 13:41:24.527379 2273 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.90:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:41:24.549932 containerd[1462]: time="2025-01-30T13:41:24.549819192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal,Uid:c7ea6b1c59fd62cfed53c023e042a91d,Namespace:kube-system,Attempt:0,}" Jan 30 13:41:24.576304 containerd[1462]: time="2025-01-30T13:41:24.575864805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal,Uid:7118de20152d5b9b77a7c166b2ed7fd1,Namespace:kube-system,Attempt:0,}" Jan 30 13:41:24.584586 containerd[1462]: time="2025-01-30T13:41:24.584516714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-f-d1cd2b53be.novalocal,Uid:1d4b30f7dcb9ba5ee362124520d8e16e,Namespace:kube-system,Attempt:0,}" Jan 30 13:41:24.981165 kubelet[2273]: I0130 13:41:24.980945 2273 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:24.982404 kubelet[2273]: E0130 13:41:24.982340 2273 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.90:6443/api/v1/nodes\": dial tcp 172.24.4.90:6443: connect: connection refused" node="ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:25.162357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3133754725.mount: Deactivated successfully. 
Jan 30 13:41:25.173598 containerd[1462]: time="2025-01-30T13:41:25.173413163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:41:25.179022 containerd[1462]: time="2025-01-30T13:41:25.178826116Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 13:41:25.182714 containerd[1462]: time="2025-01-30T13:41:25.182485941Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:41:25.185841 containerd[1462]: time="2025-01-30T13:41:25.185750604Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:41:25.187649 containerd[1462]: time="2025-01-30T13:41:25.187518740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:41:25.189426 containerd[1462]: time="2025-01-30T13:41:25.189269615Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:41:25.191695 containerd[1462]: time="2025-01-30T13:41:25.191591620Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:41:25.193198 containerd[1462]: time="2025-01-30T13:41:25.193008838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:41:25.198923 containerd[1462]: time="2025-01-30T13:41:25.198078136Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 621.978951ms" Jan 30 13:41:25.199882 containerd[1462]: time="2025-01-30T13:41:25.199828850Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 649.783263ms" Jan 30 13:41:25.217414 containerd[1462]: time="2025-01-30T13:41:25.217349843Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 632.669953ms" Jan 30 13:41:25.387811 containerd[1462]: time="2025-01-30T13:41:25.387482716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:25.387811 containerd[1462]: time="2025-01-30T13:41:25.387536066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:25.387811 containerd[1462]: time="2025-01-30T13:41:25.387549682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:25.387811 containerd[1462]: time="2025-01-30T13:41:25.387619313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:25.389376 containerd[1462]: time="2025-01-30T13:41:25.389247868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:25.389554 containerd[1462]: time="2025-01-30T13:41:25.389361231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:25.390781 containerd[1462]: time="2025-01-30T13:41:25.390228687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:41:25.390781 containerd[1462]: time="2025-01-30T13:41:25.390268632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:41:25.390781 containerd[1462]: time="2025-01-30T13:41:25.390287388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:25.390781 containerd[1462]: time="2025-01-30T13:41:25.390367007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:25.391165 containerd[1462]: time="2025-01-30T13:41:25.390717584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:25.391165 containerd[1462]: time="2025-01-30T13:41:25.391039638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:41:25.417105 systemd[1]: Started cri-containerd-7913c721da150e515ae7399be3dec006d56d28b3f46d591c22c2b4134c518fce.scope - libcontainer container 7913c721da150e515ae7399be3dec006d56d28b3f46d591c22c2b4134c518fce. Jan 30 13:41:25.418237 systemd[1]: Started cri-containerd-cc83a53022c49bc204d2f847f6a36d81b179b2fb120ee0b8bdada1666adf69fa.scope - libcontainer container cc83a53022c49bc204d2f847f6a36d81b179b2fb120ee0b8bdada1666adf69fa. Jan 30 13:41:25.423342 systemd[1]: Started cri-containerd-f1aae79f9a5d8d42930d1112844fc2a8d5d58d696415334bee411cf8363ba873.scope - libcontainer container f1aae79f9a5d8d42930d1112844fc2a8d5d58d696415334bee411cf8363ba873. 
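Each sandbox containerd creates above gets its own runc shim (hence the repeated io.containerd.runc.v2 plugin loads) and a systemd cri-containerd-<id>.scope unit. A sketch that lists those sandboxes over the CRI API, assuming the default containerd socket path and the k8s.io/cri-api and google.golang.org/grpc modules:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // List pod sandboxes the same way crictl does under the hood.
        resp, err := runtimeapi.NewRuntimeServiceClient(conn).
            ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            panic(err)
        }
        for _, s := range resp.Items {
            fmt.Printf("%s %s/%s %v\n", s.Id, s.Metadata.Namespace, s.Metadata.Name, s.State)
        }
    }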
Jan 30 13:41:25.488484 containerd[1462]: time="2025-01-30T13:41:25.488445028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-f-d1cd2b53be.novalocal,Uid:1d4b30f7dcb9ba5ee362124520d8e16e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7913c721da150e515ae7399be3dec006d56d28b3f46d591c22c2b4134c518fce\"" Jan 30 13:41:25.493381 containerd[1462]: time="2025-01-30T13:41:25.493352333Z" level=info msg="CreateContainer within sandbox \"7913c721da150e515ae7399be3dec006d56d28b3f46d591c22c2b4134c518fce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:41:25.497569 containerd[1462]: time="2025-01-30T13:41:25.497539126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal,Uid:7118de20152d5b9b77a7c166b2ed7fd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc83a53022c49bc204d2f847f6a36d81b179b2fb120ee0b8bdada1666adf69fa\"" Jan 30 13:41:25.498784 containerd[1462]: time="2025-01-30T13:41:25.498748785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal,Uid:c7ea6b1c59fd62cfed53c023e042a91d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1aae79f9a5d8d42930d1112844fc2a8d5d58d696415334bee411cf8363ba873\"" Jan 30 13:41:25.502652 containerd[1462]: time="2025-01-30T13:41:25.502620066Z" level=info msg="CreateContainer within sandbox \"f1aae79f9a5d8d42930d1112844fc2a8d5d58d696415334bee411cf8363ba873\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:41:25.502974 containerd[1462]: time="2025-01-30T13:41:25.502905522Z" level=info msg="CreateContainer within sandbox \"cc83a53022c49bc204d2f847f6a36d81b179b2fb120ee0b8bdada1666adf69fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:41:25.515471 kubelet[2273]: W0130 13:41:25.515424 2273 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.90:6443: connect: connection refused Jan 30 13:41:25.515893 kubelet[2273]: E0130 13:41:25.515831 2273 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.90:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:41:25.524609 containerd[1462]: time="2025-01-30T13:41:25.524565418Z" level=info msg="CreateContainer within sandbox \"7913c721da150e515ae7399be3dec006d56d28b3f46d591c22c2b4134c518fce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"69dbe78040ea982810522fbc5c23d365043e766330e7f9f7c6ebf7255f1686e8\"" Jan 30 13:41:25.525383 containerd[1462]: time="2025-01-30T13:41:25.525282444Z" level=info msg="StartContainer for \"69dbe78040ea982810522fbc5c23d365043e766330e7f9f7c6ebf7255f1686e8\"" Jan 30 13:41:25.541598 containerd[1462]: time="2025-01-30T13:41:25.541480365Z" level=info msg="CreateContainer within sandbox \"f1aae79f9a5d8d42930d1112844fc2a8d5d58d696415334bee411cf8363ba873\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8a187cda0fbf81ac29aee2eeadfa07073dafb6ea9fa68be8b9b15df04c0d0cd4\"" Jan 30 13:41:25.542387 containerd[1462]: time="2025-01-30T13:41:25.542261430Z" level=info msg="StartContainer for 
\"8a187cda0fbf81ac29aee2eeadfa07073dafb6ea9fa68be8b9b15df04c0d0cd4\"" Jan 30 13:41:25.550145 containerd[1462]: time="2025-01-30T13:41:25.549143930Z" level=info msg="CreateContainer within sandbox \"cc83a53022c49bc204d2f847f6a36d81b179b2fb120ee0b8bdada1666adf69fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4feb543c6a0c2e7b06202ecfc14efb26c7f48894387c44813f5dde6389b31675\"" Jan 30 13:41:25.554370 containerd[1462]: time="2025-01-30T13:41:25.551182794Z" level=info msg="StartContainer for \"4feb543c6a0c2e7b06202ecfc14efb26c7f48894387c44813f5dde6389b31675\"" Jan 30 13:41:25.554451 systemd[1]: Started cri-containerd-69dbe78040ea982810522fbc5c23d365043e766330e7f9f7c6ebf7255f1686e8.scope - libcontainer container 69dbe78040ea982810522fbc5c23d365043e766330e7f9f7c6ebf7255f1686e8. Jan 30 13:41:25.589146 systemd[1]: Started cri-containerd-8a187cda0fbf81ac29aee2eeadfa07073dafb6ea9fa68be8b9b15df04c0d0cd4.scope - libcontainer container 8a187cda0fbf81ac29aee2eeadfa07073dafb6ea9fa68be8b9b15df04c0d0cd4. Jan 30 13:41:25.602538 systemd[1]: Started cri-containerd-4feb543c6a0c2e7b06202ecfc14efb26c7f48894387c44813f5dde6389b31675.scope - libcontainer container 4feb543c6a0c2e7b06202ecfc14efb26c7f48894387c44813f5dde6389b31675. Jan 30 13:41:25.619253 containerd[1462]: time="2025-01-30T13:41:25.619127847Z" level=info msg="StartContainer for \"69dbe78040ea982810522fbc5c23d365043e766330e7f9f7c6ebf7255f1686e8\" returns successfully" Jan 30 13:41:25.621122 kubelet[2273]: E0130 13:41:25.620996 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-f-d1cd2b53be.novalocal?timeout=10s\": dial tcp 172.24.4.90:6443: connect: connection refused" interval="3.2s" Jan 30 13:41:25.675933 containerd[1462]: time="2025-01-30T13:41:25.675833908Z" level=info msg="StartContainer for \"8a187cda0fbf81ac29aee2eeadfa07073dafb6ea9fa68be8b9b15df04c0d0cd4\" returns successfully" Jan 30 13:41:25.677435 containerd[1462]: time="2025-01-30T13:41:25.675995301Z" level=info msg="StartContainer for \"4feb543c6a0c2e7b06202ecfc14efb26c7f48894387c44813f5dde6389b31675\" returns successfully" Jan 30 13:41:26.586034 kubelet[2273]: I0130 13:41:26.585324 2273 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:27.246108 kubelet[2273]: I0130 13:41:27.245975 2273 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:27.246108 kubelet[2273]: E0130 13:41:27.246008 2273 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4186-1-0-f-d1cd2b53be.novalocal\": node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" not found" Jan 30 13:41:27.544889 kubelet[2273]: I0130 13:41:27.543995 2273 apiserver.go:52] "Watching apiserver" Jan 30 13:41:27.591857 kubelet[2273]: I0130 13:41:27.591837 2273 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:41:27.763348 kubelet[2273]: E0130 13:41:27.762857 2273 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:29.672085 systemd[1]: Reloading requested from client PID 2543 ('systemctl') (unit session-11.scope)... 
Jan 30 13:41:29.672118 systemd[1]: Reloading... Jan 30 13:41:29.818014 zram_generator::config[2585]: No configuration found. Jan 30 13:41:29.978190 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:41:30.078677 systemd[1]: Reloading finished in 405 ms. Jan 30 13:41:30.117178 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:41:30.129283 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:41:30.129475 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:41:30.129529 systemd[1]: kubelet.service: Consumed 1.038s CPU time, 117.0M memory peak, 0B memory swap peak. Jan 30 13:41:30.135277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:41:30.355219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:41:30.366268 (kubelet)[2645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:41:30.427973 kubelet[2645]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:41:30.427973 kubelet[2645]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:41:30.427973 kubelet[2645]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:41:30.427973 kubelet[2645]: I0130 13:41:30.427504 2645 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:41:30.434489 kubelet[2645]: I0130 13:41:30.434455 2645 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:41:30.434489 kubelet[2645]: I0130 13:41:30.434483 2645 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:41:30.434777 kubelet[2645]: I0130 13:41:30.434751 2645 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:41:30.437023 kubelet[2645]: I0130 13:41:30.436997 2645 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:41:30.439132 kubelet[2645]: I0130 13:41:30.439002 2645 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:41:30.442753 kubelet[2645]: E0130 13:41:30.442683 2645 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:41:30.442753 kubelet[2645]: I0130 13:41:30.442713 2645 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:41:30.446993 kubelet[2645]: I0130 13:41:30.446086 2645 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:41:30.446993 kubelet[2645]: I0130 13:41:30.446199 2645 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:41:30.446993 kubelet[2645]: I0130 13:41:30.446328 2645 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:41:30.447118 kubelet[2645]: I0130 13:41:30.446355 2645 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-f-d1cd2b53be.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:41:30.447118 kubelet[2645]: I0130 13:41:30.446528 2645 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:41:30.447118 kubelet[2645]: I0130 13:41:30.446542 2645 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:41:30.447118 kubelet[2645]: I0130 13:41:30.446569 2645 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:41:30.447118 kubelet[2645]: I0130 13:41:30.446655 2645 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:41:30.447118 kubelet[2645]: I0130 13:41:30.446667 2645 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:41:30.447118 kubelet[2645]: I0130 13:41:30.446762 2645 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:41:30.447118 kubelet[2645]: I0130 13:41:30.446794 2645 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:41:30.459184 kubelet[2645]: I0130 13:41:30.459145 2645 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:41:30.462592 kubelet[2645]: I0130 13:41:30.460696 2645 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:41:30.462592 kubelet[2645]: I0130 13:41:30.461268 2645 server.go:1269] "Started kubelet" Jan 30 13:41:30.463698 kubelet[2645]: I0130 13:41:30.463653 2645 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 
13:41:30.470052 kubelet[2645]: I0130 13:41:30.469241 2645 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:41:30.472282 kubelet[2645]: I0130 13:41:30.472214 2645 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:41:30.472667 kubelet[2645]: I0130 13:41:30.472654 2645 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:41:30.474769 kubelet[2645]: E0130 13:41:30.474740 2645 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:41:30.474866 kubelet[2645]: I0130 13:41:30.474855 2645 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:41:30.480636 kubelet[2645]: I0130 13:41:30.475000 2645 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:41:30.480889 kubelet[2645]: I0130 13:41:30.480877 2645 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:41:30.483166 kubelet[2645]: I0130 13:41:30.483152 2645 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:41:30.483339 kubelet[2645]: I0130 13:41:30.483328 2645 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:41:30.484939 kubelet[2645]: I0130 13:41:30.484891 2645 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:41:30.486678 kubelet[2645]: I0130 13:41:30.486661 2645 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:41:30.486763 kubelet[2645]: I0130 13:41:30.486754 2645 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:41:30.486928 kubelet[2645]: I0130 13:41:30.486887 2645 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:41:30.487075 kubelet[2645]: E0130 13:41:30.487057 2645 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:41:30.489581 kubelet[2645]: I0130 13:41:30.489553 2645 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:41:30.489823 kubelet[2645]: I0130 13:41:30.489655 2645 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:41:30.491891 kubelet[2645]: I0130 13:41:30.491872 2645 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:41:30.562634 kubelet[2645]: I0130 13:41:30.562608 2645 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:41:30.562634 kubelet[2645]: I0130 13:41:30.562624 2645 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:41:30.562634 kubelet[2645]: I0130 13:41:30.562643 2645 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:41:30.562810 kubelet[2645]: I0130 13:41:30.562787 2645 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:41:30.562810 kubelet[2645]: I0130 13:41:30.562798 2645 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:41:30.562870 kubelet[2645]: I0130 13:41:30.562815 2645 policy_none.go:49] "None policy: Start" Jan 30 13:41:30.564134 kubelet[2645]: I0130 13:41:30.563413 2645 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 
13:41:30.564134 kubelet[2645]: I0130 13:41:30.563437 2645 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:41:30.564134 kubelet[2645]: I0130 13:41:30.563626 2645 state_mem.go:75] "Updated machine memory state" Jan 30 13:41:30.569313 kubelet[2645]: I0130 13:41:30.569289 2645 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:41:30.569457 kubelet[2645]: I0130 13:41:30.569437 2645 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:41:30.569509 kubelet[2645]: I0130 13:41:30.569453 2645 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:41:30.572591 kubelet[2645]: I0130 13:41:30.570885 2645 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:41:30.596465 kubelet[2645]: W0130 13:41:30.596424 2645 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:41:30.600303 kubelet[2645]: W0130 13:41:30.600281 2645 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:41:30.600562 kubelet[2645]: W0130 13:41:30.600530 2645 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:41:30.629419 sudo[2678]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 13:41:30.629711 sudo[2678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:41:30.677854 kubelet[2645]: I0130 13:41:30.677444 2645 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:30.684384 kubelet[2645]: I0130 13:41:30.684088 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7ea6b1c59fd62cfed53c023e042a91d-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"c7ea6b1c59fd62cfed53c023e042a91d\") " pod="kube-system/kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:30.684384 kubelet[2645]: I0130 13:41:30.684330 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7118de20152d5b9b77a7c166b2ed7fd1-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"7118de20152d5b9b77a7c166b2ed7fd1\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:30.684718 kubelet[2645]: I0130 13:41:30.684360 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7118de20152d5b9b77a7c166b2ed7fd1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"7118de20152d5b9b77a7c166b2ed7fd1\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:30.684718 kubelet[2645]: I0130 13:41:30.684583 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d4b30f7dcb9ba5ee362124520d8e16e-kubeconfig\") pod 
\"kube-scheduler-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"1d4b30f7dcb9ba5ee362124520d8e16e\") " pod="kube-system/kube-scheduler-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:30.684718 kubelet[2645]: I0130 13:41:30.684712 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7ea6b1c59fd62cfed53c023e042a91d-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"c7ea6b1c59fd62cfed53c023e042a91d\") " pod="kube-system/kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:30.684816 kubelet[2645]: I0130 13:41:30.684739 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7ea6b1c59fd62cfed53c023e042a91d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"c7ea6b1c59fd62cfed53c023e042a91d\") " pod="kube-system/kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:30.685031 kubelet[2645]: I0130 13:41:30.684894 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7118de20152d5b9b77a7c166b2ed7fd1-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"7118de20152d5b9b77a7c166b2ed7fd1\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:30.685632 kubelet[2645]: I0130 13:41:30.685065 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7118de20152d5b9b77a7c166b2ed7fd1-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"7118de20152d5b9b77a7c166b2ed7fd1\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:30.685632 kubelet[2645]: I0130 13:41:30.685212 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7118de20152d5b9b77a7c166b2ed7fd1-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal\" (UID: \"7118de20152d5b9b77a7c166b2ed7fd1\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:30.688801 kubelet[2645]: I0130 13:41:30.688476 2645 kubelet_node_status.go:111] "Node was previously registered" node="ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:30.688801 kubelet[2645]: I0130 13:41:30.688547 2645 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:31.189556 sudo[2678]: pam_unix(sudo:session): session closed for user root Jan 30 13:41:31.449890 kubelet[2645]: I0130 13:41:31.449047 2645 apiserver.go:52] "Watching apiserver" Jan 30 13:41:31.483789 kubelet[2645]: I0130 13:41:31.483752 2645 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:41:31.536071 kubelet[2645]: W0130 13:41:31.536002 2645 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:41:31.536303 kubelet[2645]: E0130 13:41:31.536189 2645 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal\" already exists" 
pod="kube-system/kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal" Jan 30 13:41:31.568896 kubelet[2645]: I0130 13:41:31.568743 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-f-d1cd2b53be.novalocal" podStartSLOduration=1.568726515 podStartE2EDuration="1.568726515s" podCreationTimestamp="2025-01-30 13:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:41:31.557824757 +0000 UTC m=+1.188155327" watchObservedRunningTime="2025-01-30 13:41:31.568726515 +0000 UTC m=+1.199057095" Jan 30 13:41:31.580594 kubelet[2645]: I0130 13:41:31.580128 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-f-d1cd2b53be.novalocal" podStartSLOduration=1.580114444 podStartE2EDuration="1.580114444s" podCreationTimestamp="2025-01-30 13:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:41:31.569473867 +0000 UTC m=+1.199804447" watchObservedRunningTime="2025-01-30 13:41:31.580114444 +0000 UTC m=+1.210445014" Jan 30 13:41:31.593217 kubelet[2645]: I0130 13:41:31.593152 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-f-d1cd2b53be.novalocal" podStartSLOduration=1.593134154 podStartE2EDuration="1.593134154s" podCreationTimestamp="2025-01-30 13:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:41:31.580903664 +0000 UTC m=+1.211234234" watchObservedRunningTime="2025-01-30 13:41:31.593134154 +0000 UTC m=+1.223464734" Jan 30 13:41:33.314340 sudo[1717]: pam_unix(sudo:session): session closed for user root Jan 30 13:41:33.534661 sshd[1716]: Connection closed by 172.24.4.1 port 50952 Jan 30 13:41:33.535809 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Jan 30 13:41:33.544183 systemd[1]: sshd@8-172.24.4.90:22-172.24.4.1:50952.service: Deactivated successfully. Jan 30 13:41:33.549719 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:41:33.550211 systemd[1]: session-11.scope: Consumed 7.409s CPU time, 154.6M memory peak, 0B memory swap peak. Jan 30 13:41:33.552153 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:41:33.556994 systemd-logind[1440]: Removed session 11. Jan 30 13:41:34.296777 kubelet[2645]: I0130 13:41:34.296690 2645 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:41:34.297479 containerd[1462]: time="2025-01-30T13:41:34.297428991Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:41:34.298908 kubelet[2645]: I0130 13:41:34.298006 2645 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:41:35.261594 systemd[1]: Created slice kubepods-besteffort-pod67dbac0b_71f0_4947_9ddb_22defb570384.slice - libcontainer container kubepods-besteffort-pod67dbac0b_71f0_4947_9ddb_22defb570384.slice. Jan 30 13:41:35.299812 systemd[1]: Created slice kubepods-burstable-podd5d745c7_2a29_4bdb_9abd_13b391268950.slice - libcontainer container kubepods-burstable-podd5d745c7_2a29_4bdb_9abd_13b391268950.slice. 
Jan 30 13:41:35.315247 kubelet[2645]: I0130 13:41:35.315204 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-hostproc\") pod \"cilium-d5mzg\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") " pod="kube-system/cilium-d5mzg"
Jan 30 13:41:35.315247 kubelet[2645]: I0130 13:41:35.315244 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-etc-cni-netd\") pod \"cilium-d5mzg\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") " pod="kube-system/cilium-d5mzg"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315270 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-lib-modules\") pod \"cilium-d5mzg\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") " pod="kube-system/cilium-d5mzg"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315291 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5d745c7-2a29-4bdb-9abd-13b391268950-clustermesh-secrets\") pod \"cilium-d5mzg\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") " pod="kube-system/cilium-d5mzg"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315311 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-host-proc-sys-kernel\") pod \"cilium-d5mzg\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") " pod="kube-system/cilium-d5mzg"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315338 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5d745c7-2a29-4bdb-9abd-13b391268950-hubble-tls\") pod \"cilium-d5mzg\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") " pod="kube-system/cilium-d5mzg"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315358 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5d745c7-2a29-4bdb-9abd-13b391268950-cilium-config-path\") pod \"cilium-d5mzg\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") " pod="kube-system/cilium-d5mzg"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315383 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-host-proc-sys-net\") pod \"cilium-d5mzg\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") " pod="kube-system/cilium-d5mzg"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315407 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j25jz\" (UniqueName: \"kubernetes.io/projected/67dbac0b-71f0-4947-9ddb-22defb570384-kube-api-access-j25jz\") pod \"kube-proxy-94dv4\" (UID: \"67dbac0b-71f0-4947-9ddb-22defb570384\") " pod="kube-system/kube-proxy-94dv4"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315428 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/67dbac0b-71f0-4947-9ddb-22defb570384-kube-proxy\") pod \"kube-proxy-94dv4\" (UID: \"67dbac0b-71f0-4947-9ddb-22defb570384\") " pod="kube-system/kube-proxy-94dv4"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315453 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-cilium-run\") pod \"cilium-d5mzg\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") " pod="kube-system/cilium-d5mzg"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315474 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67dbac0b-71f0-4947-9ddb-22defb570384-xtables-lock\") pod \"kube-proxy-94dv4\" (UID: \"67dbac0b-71f0-4947-9ddb-22defb570384\") " pod="kube-system/kube-proxy-94dv4"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315494 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-bpf-maps\") pod \"cilium-d5mzg\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") " pod="kube-system/cilium-d5mzg"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315522 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-cilium-cgroup\") pod \"cilium-d5mzg\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") " pod="kube-system/cilium-d5mzg"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315540 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-cni-path\") pod \"cilium-d5mzg\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") " pod="kube-system/cilium-d5mzg"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315560 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-xtables-lock\") pod \"cilium-d5mzg\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") " pod="kube-system/cilium-d5mzg"
Jan 30 13:41:35.315622 kubelet[2645]: I0130 13:41:35.315578 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87pp6\" (UniqueName: \"kubernetes.io/projected/d5d745c7-2a29-4bdb-9abd-13b391268950-kube-api-access-87pp6\") pod \"cilium-d5mzg\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") " pod="kube-system/cilium-d5mzg"
Jan 30 13:41:35.316042 kubelet[2645]: I0130 13:41:35.315605 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67dbac0b-71f0-4947-9ddb-22defb570384-lib-modules\") pod \"kube-proxy-94dv4\" (UID: \"67dbac0b-71f0-4947-9ddb-22defb570384\") " pod="kube-system/kube-proxy-94dv4"
Jan 30 13:41:35.403446 systemd[1]: Created slice kubepods-besteffort-podeca617aa_a431_437c_adc3_e28355e2413c.slice - libcontainer container kubepods-besteffort-podeca617aa_a431_437c_adc3_e28355e2413c.slice.
Jan 30 13:41:35.417652 kubelet[2645]: I0130 13:41:35.417199 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eca617aa-a431-437c-adc3-e28355e2413c-cilium-config-path\") pod \"cilium-operator-5d85765b45-76d9z\" (UID: \"eca617aa-a431-437c-adc3-e28355e2413c\") " pod="kube-system/cilium-operator-5d85765b45-76d9z"
Jan 30 13:41:35.417652 kubelet[2645]: I0130 13:41:35.417375 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7222\" (UniqueName: \"kubernetes.io/projected/eca617aa-a431-437c-adc3-e28355e2413c-kube-api-access-w7222\") pod \"cilium-operator-5d85765b45-76d9z\" (UID: \"eca617aa-a431-437c-adc3-e28355e2413c\") " pod="kube-system/cilium-operator-5d85765b45-76d9z"
Jan 30 13:41:35.572375 containerd[1462]: time="2025-01-30T13:41:35.572146771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-94dv4,Uid:67dbac0b-71f0-4947-9ddb-22defb570384,Namespace:kube-system,Attempt:0,}"
Jan 30 13:41:35.606541 containerd[1462]: time="2025-01-30T13:41:35.606343009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d5mzg,Uid:d5d745c7-2a29-4bdb-9abd-13b391268950,Namespace:kube-system,Attempt:0,}"
Jan 30 13:41:35.641725 containerd[1462]: time="2025-01-30T13:41:35.641547618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:41:35.641725 containerd[1462]: time="2025-01-30T13:41:35.641647816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:41:35.642341 containerd[1462]: time="2025-01-30T13:41:35.641805462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:41:35.642341 containerd[1462]: time="2025-01-30T13:41:35.642132575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:41:35.686691 containerd[1462]: time="2025-01-30T13:41:35.686542901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:41:35.688159 containerd[1462]: time="2025-01-30T13:41:35.686630555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:41:35.688159 containerd[1462]: time="2025-01-30T13:41:35.687275765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:41:35.688159 containerd[1462]: time="2025-01-30T13:41:35.687385671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:41:35.686839 systemd[1]: Started cri-containerd-3833fb67b6d26e0a955e2aabbc1a12cd7f4c365d6212f02ea6743e4fa07c1a5a.scope - libcontainer container 3833fb67b6d26e0a955e2aabbc1a12cd7f4c365d6212f02ea6743e4fa07c1a5a.
Jan 30 13:41:35.709487 containerd[1462]: time="2025-01-30T13:41:35.708135319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-76d9z,Uid:eca617aa-a431-437c-adc3-e28355e2413c,Namespace:kube-system,Attempt:0,}"
Jan 30 13:41:35.717219 systemd[1]: Started cri-containerd-89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0.scope - libcontainer container 89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0.
Jan 30 13:41:35.724286 containerd[1462]: time="2025-01-30T13:41:35.724251928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-94dv4,Uid:67dbac0b-71f0-4947-9ddb-22defb570384,Namespace:kube-system,Attempt:0,} returns sandbox id \"3833fb67b6d26e0a955e2aabbc1a12cd7f4c365d6212f02ea6743e4fa07c1a5a\""
Jan 30 13:41:35.729025 containerd[1462]: time="2025-01-30T13:41:35.728989593Z" level=info msg="CreateContainer within sandbox \"3833fb67b6d26e0a955e2aabbc1a12cd7f4c365d6212f02ea6743e4fa07c1a5a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 13:41:35.748165 containerd[1462]: time="2025-01-30T13:41:35.748127378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d5mzg,Uid:d5d745c7-2a29-4bdb-9abd-13b391268950,Namespace:kube-system,Attempt:0,} returns sandbox id \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\""
Jan 30 13:41:35.750781 containerd[1462]: time="2025-01-30T13:41:35.750692569Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 30 13:41:35.789210 containerd[1462]: time="2025-01-30T13:41:35.789158375Z" level=info msg="CreateContainer within sandbox \"3833fb67b6d26e0a955e2aabbc1a12cd7f4c365d6212f02ea6743e4fa07c1a5a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b4bbd738784911de37e974874018757ee2db7abb1b8e2db455dc4bb4433a7d1\""
Jan 30 13:41:35.790064 containerd[1462]: time="2025-01-30T13:41:35.790037394Z" level=info msg="StartContainer for \"8b4bbd738784911de37e974874018757ee2db7abb1b8e2db455dc4bb4433a7d1\""
Jan 30 13:41:35.796344 containerd[1462]: time="2025-01-30T13:41:35.796171118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:41:35.796774 containerd[1462]: time="2025-01-30T13:41:35.796683339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:41:35.796774 containerd[1462]: time="2025-01-30T13:41:35.796703617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:41:35.797027 containerd[1462]: time="2025-01-30T13:41:35.796935993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:41:35.816146 systemd[1]: Started cri-containerd-1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239.scope - libcontainer container 1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239.
Jan 30 13:41:35.831126 systemd[1]: Started cri-containerd-8b4bbd738784911de37e974874018757ee2db7abb1b8e2db455dc4bb4433a7d1.scope - libcontainer container 8b4bbd738784911de37e974874018757ee2db7abb1b8e2db455dc4bb4433a7d1.
Jan 30 13:41:35.872079 containerd[1462]: time="2025-01-30T13:41:35.872030279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-76d9z,Uid:eca617aa-a431-437c-adc3-e28355e2413c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239\""
Jan 30 13:41:35.880124 containerd[1462]: time="2025-01-30T13:41:35.879451127Z" level=info msg="StartContainer for \"8b4bbd738784911de37e974874018757ee2db7abb1b8e2db455dc4bb4433a7d1\" returns successfully"
Jan 30 13:41:36.591215 kubelet[2645]: I0130 13:41:36.590555 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-94dv4" podStartSLOduration=1.590521659 podStartE2EDuration="1.590521659s" podCreationTimestamp="2025-01-30 13:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:41:36.58940771 +0000 UTC m=+6.219738381" watchObservedRunningTime="2025-01-30 13:41:36.590521659 +0000 UTC m=+6.220852280"
Jan 30 13:41:41.120471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2960948498.mount: Deactivated successfully.
Jan 30 13:41:43.706662 containerd[1462]: time="2025-01-30T13:41:43.706622396Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:43.709254 containerd[1462]: time="2025-01-30T13:41:43.709168481Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 30 13:41:43.710784 containerd[1462]: time="2025-01-30T13:41:43.710649969Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:43.713982 containerd[1462]: time="2025-01-30T13:41:43.713862154Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.962805532s"
Jan 30 13:41:43.713982 containerd[1462]: time="2025-01-30T13:41:43.713891058Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 30 13:41:43.717789 containerd[1462]: time="2025-01-30T13:41:43.717615002Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 30 13:41:43.718770 containerd[1462]: time="2025-01-30T13:41:43.718625938Z" level=info msg="CreateContainer within sandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 13:41:43.748119 containerd[1462]: time="2025-01-30T13:41:43.748089940Z" level=info msg="CreateContainer within sandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"88cb1dea2f5558d44adc57f235ef143f23e8b66181cfd542836bb3c7e4663886\""
Jan 30 13:41:43.750051 containerd[1462]: time="2025-01-30T13:41:43.749353790Z" level=info msg="StartContainer for \"88cb1dea2f5558d44adc57f235ef143f23e8b66181cfd542836bb3c7e4663886\""
Jan 30 13:41:43.806106 systemd[1]: Started cri-containerd-88cb1dea2f5558d44adc57f235ef143f23e8b66181cfd542836bb3c7e4663886.scope - libcontainer container 88cb1dea2f5558d44adc57f235ef143f23e8b66181cfd542836bb3c7e4663886.
Jan 30 13:41:43.836210 containerd[1462]: time="2025-01-30T13:41:43.836167344Z" level=info msg="StartContainer for \"88cb1dea2f5558d44adc57f235ef143f23e8b66181cfd542836bb3c7e4663886\" returns successfully"
Jan 30 13:41:43.843772 systemd[1]: cri-containerd-88cb1dea2f5558d44adc57f235ef143f23e8b66181cfd542836bb3c7e4663886.scope: Deactivated successfully.
Jan 30 13:41:44.739171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88cb1dea2f5558d44adc57f235ef143f23e8b66181cfd542836bb3c7e4663886-rootfs.mount: Deactivated successfully.
Jan 30 13:41:44.879163 containerd[1462]: time="2025-01-30T13:41:44.879002652Z" level=info msg="shim disconnected" id=88cb1dea2f5558d44adc57f235ef143f23e8b66181cfd542836bb3c7e4663886 namespace=k8s.io
Jan 30 13:41:44.879163 containerd[1462]: time="2025-01-30T13:41:44.879124130Z" level=warning msg="cleaning up after shim disconnected" id=88cb1dea2f5558d44adc57f235ef143f23e8b66181cfd542836bb3c7e4663886 namespace=k8s.io
Jan 30 13:41:44.879163 containerd[1462]: time="2025-01-30T13:41:44.879150730Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:41:45.600707 containerd[1462]: time="2025-01-30T13:41:45.600631402Z" level=info msg="CreateContainer within sandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 13:41:45.650160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165805507.mount: Deactivated successfully.
Jan 30 13:41:45.657845 containerd[1462]: time="2025-01-30T13:41:45.657624184Z" level=info msg="CreateContainer within sandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"03ae6ca8a56acf826755b89bfb9ab73ff76c940b448f54ab1c5b8f30ff1f6634\""
Jan 30 13:41:45.659687 containerd[1462]: time="2025-01-30T13:41:45.658900958Z" level=info msg="StartContainer for \"03ae6ca8a56acf826755b89bfb9ab73ff76c940b448f54ab1c5b8f30ff1f6634\""
Jan 30 13:41:45.704108 systemd[1]: Started cri-containerd-03ae6ca8a56acf826755b89bfb9ab73ff76c940b448f54ab1c5b8f30ff1f6634.scope - libcontainer container 03ae6ca8a56acf826755b89bfb9ab73ff76c940b448f54ab1c5b8f30ff1f6634.
Jan 30 13:41:45.737105 containerd[1462]: time="2025-01-30T13:41:45.737064638Z" level=info msg="StartContainer for \"03ae6ca8a56acf826755b89bfb9ab73ff76c940b448f54ab1c5b8f30ff1f6634\" returns successfully"
Jan 30 13:41:45.745651 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:41:45.745928 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:41:45.746188 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:41:45.751265 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:41:45.751481 systemd[1]: cri-containerd-03ae6ca8a56acf826755b89bfb9ab73ff76c940b448f54ab1c5b8f30ff1f6634.scope: Deactivated successfully.
Jan 30 13:41:45.773085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03ae6ca8a56acf826755b89bfb9ab73ff76c940b448f54ab1c5b8f30ff1f6634-rootfs.mount: Deactivated successfully.
Jan 30 13:41:45.775807 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:41:45.786751 containerd[1462]: time="2025-01-30T13:41:45.786695573Z" level=info msg="shim disconnected" id=03ae6ca8a56acf826755b89bfb9ab73ff76c940b448f54ab1c5b8f30ff1f6634 namespace=k8s.io
Jan 30 13:41:45.786751 containerd[1462]: time="2025-01-30T13:41:45.786747090Z" level=warning msg="cleaning up after shim disconnected" id=03ae6ca8a56acf826755b89bfb9ab73ff76c940b448f54ab1c5b8f30ff1f6634 namespace=k8s.io
Jan 30 13:41:45.786993 containerd[1462]: time="2025-01-30T13:41:45.786757449Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:41:46.367266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1606432404.mount: Deactivated successfully.
Jan 30 13:41:46.606568 containerd[1462]: time="2025-01-30T13:41:46.606421495Z" level=info msg="CreateContainer within sandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 13:41:46.684032 containerd[1462]: time="2025-01-30T13:41:46.683591360Z" level=info msg="CreateContainer within sandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5a4bdbacb21bf96469d687d507292c1413b7961e42054e85c55ee8ba74f070ee\""
Jan 30 13:41:46.686544 containerd[1462]: time="2025-01-30T13:41:46.684491859Z" level=info msg="StartContainer for \"5a4bdbacb21bf96469d687d507292c1413b7961e42054e85c55ee8ba74f070ee\""
Jan 30 13:41:46.724090 systemd[1]: Started cri-containerd-5a4bdbacb21bf96469d687d507292c1413b7961e42054e85c55ee8ba74f070ee.scope - libcontainer container 5a4bdbacb21bf96469d687d507292c1413b7961e42054e85c55ee8ba74f070ee.
Jan 30 13:41:46.769532 systemd[1]: cri-containerd-5a4bdbacb21bf96469d687d507292c1413b7961e42054e85c55ee8ba74f070ee.scope: Deactivated successfully.
Jan 30 13:41:46.771292 containerd[1462]: time="2025-01-30T13:41:46.770864515Z" level=info msg="StartContainer for \"5a4bdbacb21bf96469d687d507292c1413b7961e42054e85c55ee8ba74f070ee\" returns successfully"
Jan 30 13:41:46.800212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a4bdbacb21bf96469d687d507292c1413b7961e42054e85c55ee8ba74f070ee-rootfs.mount: Deactivated successfully.
Jan 30 13:41:46.812434 containerd[1462]: time="2025-01-30T13:41:46.812385850Z" level=info msg="shim disconnected" id=5a4bdbacb21bf96469d687d507292c1413b7961e42054e85c55ee8ba74f070ee namespace=k8s.io
Jan 30 13:41:46.812749 containerd[1462]: time="2025-01-30T13:41:46.812599771Z" level=warning msg="cleaning up after shim disconnected" id=5a4bdbacb21bf96469d687d507292c1413b7961e42054e85c55ee8ba74f070ee namespace=k8s.io
Jan 30 13:41:46.812749 containerd[1462]: time="2025-01-30T13:41:46.812617134Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:41:46.825303 containerd[1462]: time="2025-01-30T13:41:46.825269584Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:41:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 13:41:47.584370 containerd[1462]: time="2025-01-30T13:41:47.584305818Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:47.586179 containerd[1462]: time="2025-01-30T13:41:47.586144696Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 30 13:41:47.587937 containerd[1462]: time="2025-01-30T13:41:47.587763863Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:41:47.589368 containerd[1462]: time="2025-01-30T13:41:47.589327726Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.871524211s"
Jan 30 13:41:47.589422 containerd[1462]: time="2025-01-30T13:41:47.589369384Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 30 13:41:47.592532 containerd[1462]: time="2025-01-30T13:41:47.592502551Z" level=info msg="CreateContainer within sandbox \"1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 30 13:41:47.613888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2255941996.mount: Deactivated successfully.
Jan 30 13:41:47.625128 containerd[1462]: time="2025-01-30T13:41:47.624821066Z" level=info msg="CreateContainer within sandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 13:41:47.631702 containerd[1462]: time="2025-01-30T13:41:47.631595331Z" level=info msg="CreateContainer within sandbox \"1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869\""
Jan 30 13:41:47.635375 containerd[1462]: time="2025-01-30T13:41:47.635333061Z" level=info msg="StartContainer for \"e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869\""
Jan 30 13:41:47.655665 containerd[1462]: time="2025-01-30T13:41:47.655520374Z" level=info msg="CreateContainer within sandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cd5aa89de36956d677aaeb17f70a45752d3639e5e56590b3d41ff0bb65000525\""
Jan 30 13:41:47.657782 containerd[1462]: time="2025-01-30T13:41:47.657745346Z" level=info msg="StartContainer for \"cd5aa89de36956d677aaeb17f70a45752d3639e5e56590b3d41ff0bb65000525\""
Jan 30 13:41:47.666129 systemd[1]: Started cri-containerd-e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869.scope - libcontainer container e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869.
Jan 30 13:41:47.688178 systemd[1]: Started cri-containerd-cd5aa89de36956d677aaeb17f70a45752d3639e5e56590b3d41ff0bb65000525.scope - libcontainer container cd5aa89de36956d677aaeb17f70a45752d3639e5e56590b3d41ff0bb65000525.
Jan 30 13:41:47.710411 containerd[1462]: time="2025-01-30T13:41:47.710348546Z" level=info msg="StartContainer for \"e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869\" returns successfully"
Jan 30 13:41:47.726001 systemd[1]: cri-containerd-cd5aa89de36956d677aaeb17f70a45752d3639e5e56590b3d41ff0bb65000525.scope: Deactivated successfully.
Jan 30 13:41:47.729902 containerd[1462]: time="2025-01-30T13:41:47.729858618Z" level=info msg="StartContainer for \"cd5aa89de36956d677aaeb17f70a45752d3639e5e56590b3d41ff0bb65000525\" returns successfully"
Jan 30 13:41:48.032304 containerd[1462]: time="2025-01-30T13:41:48.032165957Z" level=info msg="shim disconnected" id=cd5aa89de36956d677aaeb17f70a45752d3639e5e56590b3d41ff0bb65000525 namespace=k8s.io
Jan 30 13:41:48.032304 containerd[1462]: time="2025-01-30T13:41:48.032266005Z" level=warning msg="cleaning up after shim disconnected" id=cd5aa89de36956d677aaeb17f70a45752d3639e5e56590b3d41ff0bb65000525 namespace=k8s.io
Jan 30 13:41:48.032304 containerd[1462]: time="2025-01-30T13:41:48.032288427Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:41:48.611647 containerd[1462]: time="2025-01-30T13:41:48.611600278Z" level=info msg="CreateContainer within sandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 13:41:48.638161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3552579887.mount: Deactivated successfully.
Jan 30 13:41:48.643385 containerd[1462]: time="2025-01-30T13:41:48.642892599Z" level=info msg="CreateContainer within sandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442\""
Jan 30 13:41:48.644986 containerd[1462]: time="2025-01-30T13:41:48.643798057Z" level=info msg="StartContainer for \"39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442\""
Jan 30 13:41:48.686126 systemd[1]: Started cri-containerd-39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442.scope - libcontainer container 39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442.
Jan 30 13:41:48.709200 kubelet[2645]: I0130 13:41:48.709149 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-76d9z" podStartSLOduration=1.9926147589999998 podStartE2EDuration="13.709104281s" podCreationTimestamp="2025-01-30 13:41:35 +0000 UTC" firstStartedPulling="2025-01-30 13:41:35.874132582 +0000 UTC m=+5.504463212" lastFinishedPulling="2025-01-30 13:41:47.590622164 +0000 UTC m=+17.220952734" observedRunningTime="2025-01-30 13:41:48.668649878 +0000 UTC m=+18.298980488" watchObservedRunningTime="2025-01-30 13:41:48.709104281 +0000 UTC m=+18.339434881"
Jan 30 13:41:48.729872 containerd[1462]: time="2025-01-30T13:41:48.729827488Z" level=info msg="StartContainer for \"39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442\" returns successfully"
Jan 30 13:41:48.808037 kubelet[2645]: I0130 13:41:48.807178 2645 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 30 13:41:48.844817 systemd[1]: Created slice kubepods-burstable-pod2ec35682_398e_4ac5_913f_34d580fb5a28.slice - libcontainer container kubepods-burstable-pod2ec35682_398e_4ac5_913f_34d580fb5a28.slice.
Jan 30 13:41:48.852965 systemd[1]: Created slice kubepods-burstable-pod283381c4_0cc7_4b6f_a37e_ef6b1ff309f7.slice - libcontainer container kubepods-burstable-pod283381c4_0cc7_4b6f_a37e_ef6b1ff309f7.slice.
Jan 30 13:41:49.020323 kubelet[2645]: I0130 13:41:49.020221 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ec35682-398e-4ac5-913f-34d580fb5a28-config-volume\") pod \"coredns-6f6b679f8f-hnj9j\" (UID: \"2ec35682-398e-4ac5-913f-34d580fb5a28\") " pod="kube-system/coredns-6f6b679f8f-hnj9j"
Jan 30 13:41:49.020555 kubelet[2645]: I0130 13:41:49.020481 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/283381c4-0cc7-4b6f-a37e-ef6b1ff309f7-config-volume\") pod \"coredns-6f6b679f8f-b9cmp\" (UID: \"283381c4-0cc7-4b6f-a37e-ef6b1ff309f7\") " pod="kube-system/coredns-6f6b679f8f-b9cmp"
Jan 30 13:41:49.020734 kubelet[2645]: I0130 13:41:49.020647 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7rss\" (UniqueName: \"kubernetes.io/projected/2ec35682-398e-4ac5-913f-34d580fb5a28-kube-api-access-d7rss\") pod \"coredns-6f6b679f8f-hnj9j\" (UID: \"2ec35682-398e-4ac5-913f-34d580fb5a28\") " pod="kube-system/coredns-6f6b679f8f-hnj9j"
Jan 30 13:41:49.020734 kubelet[2645]: I0130 13:41:49.020700 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qfdc\" (UniqueName: \"kubernetes.io/projected/283381c4-0cc7-4b6f-a37e-ef6b1ff309f7-kube-api-access-5qfdc\") pod \"coredns-6f6b679f8f-b9cmp\" (UID: \"283381c4-0cc7-4b6f-a37e-ef6b1ff309f7\") " pod="kube-system/coredns-6f6b679f8f-b9cmp"
Jan 30 13:41:49.150231 containerd[1462]: time="2025-01-30T13:41:49.150169445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hnj9j,Uid:2ec35682-398e-4ac5-913f-34d580fb5a28,Namespace:kube-system,Attempt:0,}"
Jan 30 13:41:49.158017 containerd[1462]: time="2025-01-30T13:41:49.157905955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b9cmp,Uid:283381c4-0cc7-4b6f-a37e-ef6b1ff309f7,Namespace:kube-system,Attempt:0,}"
Jan 30 13:41:49.663683 kubelet[2645]: I0130 13:41:49.663135 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d5mzg" podStartSLOduration=6.696911598 podStartE2EDuration="14.663103651s" podCreationTimestamp="2025-01-30 13:41:35 +0000 UTC" firstStartedPulling="2025-01-30 13:41:35.750026681 +0000 UTC m=+5.380357251" lastFinishedPulling="2025-01-30 13:41:43.716218734 +0000 UTC m=+13.346549304" observedRunningTime="2025-01-30 13:41:49.662692821 +0000 UTC m=+19.293023531" watchObservedRunningTime="2025-01-30 13:41:49.663103651 +0000 UTC m=+19.293434332"
Jan 30 13:41:51.701082 systemd-networkd[1367]: cilium_host: Link UP
Jan 30 13:41:51.701443 systemd-networkd[1367]: cilium_net: Link UP
Jan 30 13:41:51.702063 systemd-networkd[1367]: cilium_net: Gained carrier
Jan 30 13:41:51.702407 systemd-networkd[1367]: cilium_host: Gained carrier
Jan 30 13:41:51.763088 systemd-networkd[1367]: cilium_net: Gained IPv6LL
Jan 30 13:41:51.805445 systemd-networkd[1367]: cilium_vxlan: Link UP
Jan 30 13:41:51.805452 systemd-networkd[1367]: cilium_vxlan: Gained carrier
Jan 30 13:41:51.822121 systemd-networkd[1367]: cilium_host: Gained IPv6LL
Jan 30 13:41:52.070430 kernel: NET: Registered PF_ALG protocol family
Jan 30 13:41:52.835398 systemd-networkd[1367]: lxc_health: Link UP
Jan 30 13:41:52.846214 systemd-networkd[1367]: lxc_health: Gained carrier
Jan 30 13:41:53.223287 systemd-networkd[1367]: lxc836bca8b6dc4: Link UP
Jan 30 13:41:53.227057 kernel: eth0: renamed from tmp05d14
Jan 30 13:41:53.232474 systemd-networkd[1367]: lxc836bca8b6dc4: Gained carrier
Jan 30 13:41:53.245107 systemd-networkd[1367]: lxcc29148212d12: Link UP
Jan 30 13:41:53.251065 kernel: eth0: renamed from tmpc7eeb
Jan 30 13:41:53.257991 systemd-networkd[1367]: lxcc29148212d12: Gained carrier
Jan 30 13:41:53.558186 systemd-networkd[1367]: cilium_vxlan: Gained IPv6LL
Jan 30 13:41:54.262292 systemd-networkd[1367]: lxc836bca8b6dc4: Gained IPv6LL
Jan 30 13:41:54.582194 systemd-networkd[1367]: lxc_health: Gained IPv6LL
Jan 30 13:41:54.616645 kubelet[2645]: I0130 13:41:54.616457 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 13:41:54.902216 systemd-networkd[1367]: lxcc29148212d12: Gained IPv6LL
Jan 30 13:41:57.497876 containerd[1462]: time="2025-01-30T13:41:57.497581941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:41:57.497876 containerd[1462]: time="2025-01-30T13:41:57.497638838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:41:57.497876 containerd[1462]: time="2025-01-30T13:41:57.497657543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:41:57.497876 containerd[1462]: time="2025-01-30T13:41:57.497765425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:41:57.526984 systemd[1]: run-containerd-runc-k8s.io-05d141ea0f4ed180c024bce533ced7eb29bf76693b3067eb844e0dd7ef26b30c-runc.WfATfh.mount: Deactivated successfully.
Jan 30 13:41:57.539151 systemd[1]: Started cri-containerd-05d141ea0f4ed180c024bce533ced7eb29bf76693b3067eb844e0dd7ef26b30c.scope - libcontainer container 05d141ea0f4ed180c024bce533ced7eb29bf76693b3067eb844e0dd7ef26b30c.
Jan 30 13:41:57.599488 containerd[1462]: time="2025-01-30T13:41:57.599388171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:41:57.599488 containerd[1462]: time="2025-01-30T13:41:57.599449697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:41:57.599488 containerd[1462]: time="2025-01-30T13:41:57.599466288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:41:57.600122 containerd[1462]: time="2025-01-30T13:41:57.599544374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:41:57.621574 containerd[1462]: time="2025-01-30T13:41:57.621524889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hnj9j,Uid:2ec35682-398e-4ac5-913f-34d580fb5a28,Namespace:kube-system,Attempt:0,} returns sandbox id \"05d141ea0f4ed180c024bce533ced7eb29bf76693b3067eb844e0dd7ef26b30c\""
Jan 30 13:41:57.627802 containerd[1462]: time="2025-01-30T13:41:57.627664073Z" level=info msg="CreateContainer within sandbox \"05d141ea0f4ed180c024bce533ced7eb29bf76693b3067eb844e0dd7ef26b30c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 13:41:57.641340 systemd[1]: Started cri-containerd-c7eeb0d87af400ebcf63ab101f7789985348351b0af30e0171458f4abf379833.scope - libcontainer container c7eeb0d87af400ebcf63ab101f7789985348351b0af30e0171458f4abf379833.
Jan 30 13:41:57.656428 containerd[1462]: time="2025-01-30T13:41:57.656254946Z" level=info msg="CreateContainer within sandbox \"05d141ea0f4ed180c024bce533ced7eb29bf76693b3067eb844e0dd7ef26b30c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b50bddafbffbacb4cf3f557c517e929e411fe336c6ef5c550ab5af4d2566e3d\""
Jan 30 13:41:57.657997 containerd[1462]: time="2025-01-30T13:41:57.657387901Z" level=info msg="StartContainer for \"2b50bddafbffbacb4cf3f557c517e929e411fe336c6ef5c550ab5af4d2566e3d\""
Jan 30 13:41:57.695146 systemd[1]: Started cri-containerd-2b50bddafbffbacb4cf3f557c517e929e411fe336c6ef5c550ab5af4d2566e3d.scope - libcontainer container 2b50bddafbffbacb4cf3f557c517e929e411fe336c6ef5c550ab5af4d2566e3d.
Jan 30 13:41:57.745657 containerd[1462]: time="2025-01-30T13:41:57.745516909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b9cmp,Uid:283381c4-0cc7-4b6f-a37e-ef6b1ff309f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7eeb0d87af400ebcf63ab101f7789985348351b0af30e0171458f4abf379833\""
Jan 30 13:41:57.757544 containerd[1462]: time="2025-01-30T13:41:57.757436484Z" level=info msg="CreateContainer within sandbox \"c7eeb0d87af400ebcf63ab101f7789985348351b0af30e0171458f4abf379833\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 13:41:57.759210 containerd[1462]: time="2025-01-30T13:41:57.759188791Z" level=info msg="StartContainer for \"2b50bddafbffbacb4cf3f557c517e929e411fe336c6ef5c550ab5af4d2566e3d\" returns successfully"
Jan 30 13:41:57.787979 containerd[1462]: time="2025-01-30T13:41:57.787916831Z" level=info msg="CreateContainer within sandbox \"c7eeb0d87af400ebcf63ab101f7789985348351b0af30e0171458f4abf379833\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"713a6bebc32c81782749a961023940cc63a06b4c9b85afc8e0717856bec58908\""
Jan 30 13:41:57.789374 containerd[1462]: time="2025-01-30T13:41:57.788919582Z" level=info msg="StartContainer for \"713a6bebc32c81782749a961023940cc63a06b4c9b85afc8e0717856bec58908\""
Jan 30 13:41:57.825102 systemd[1]: Started cri-containerd-713a6bebc32c81782749a961023940cc63a06b4c9b85afc8e0717856bec58908.scope - libcontainer container 713a6bebc32c81782749a961023940cc63a06b4c9b85afc8e0717856bec58908.
Jan 30 13:41:57.859105 containerd[1462]: time="2025-01-30T13:41:57.859048079Z" level=info msg="StartContainer for \"713a6bebc32c81782749a961023940cc63a06b4c9b85afc8e0717856bec58908\" returns successfully"
Jan 30 13:41:58.686011 kubelet[2645]: I0130 13:41:58.685378 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hnj9j" podStartSLOduration=23.685344895 podStartE2EDuration="23.685344895s" podCreationTimestamp="2025-01-30 13:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:41:58.683465771 +0000 UTC m=+28.313796471" watchObservedRunningTime="2025-01-30 13:41:58.685344895 +0000 UTC m=+28.315675515"
Jan 30 13:41:58.742872 kubelet[2645]: I0130 13:41:58.742745 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-b9cmp" podStartSLOduration=23.742726585 podStartE2EDuration="23.742726585s" podCreationTimestamp="2025-01-30 13:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:41:58.742491224 +0000 UTC m=+28.372821804" watchObservedRunningTime="2025-01-30 13:41:58.742726585 +0000 UTC m=+28.373057155"
Jan 30 13:42:59.713494 systemd[1]: Started sshd@9-172.24.4.90:22-172.24.4.1:41764.service - OpenSSH per-connection server daemon (172.24.4.1:41764).
Jan 30 13:43:01.129518 sshd[4029]: Accepted publickey for core from 172.24.4.1 port 41764 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:43:01.132362 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:01.143414 systemd-logind[1440]: New session 12 of user core.
Jan 30 13:43:01.153276 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 13:43:01.933778 sshd[4031]: Connection closed by 172.24.4.1 port 41764
Jan 30 13:43:01.933585 sshd-session[4029]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:01.940279 systemd[1]: sshd@9-172.24.4.90:22-172.24.4.1:41764.service: Deactivated successfully.
Jan 30 13:43:01.942526 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 13:43:01.943412 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit.
Jan 30 13:43:01.945287 systemd-logind[1440]: Removed session 12.
Jan 30 13:43:06.959704 systemd[1]: Started sshd@10-172.24.4.90:22-172.24.4.1:50890.service - OpenSSH per-connection server daemon (172.24.4.1:50890).
Jan 30 13:43:08.254137 sshd[4046]: Accepted publickey for core from 172.24.4.1 port 50890 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:43:08.257559 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:08.268227 systemd-logind[1440]: New session 13 of user core.
Jan 30 13:43:08.275313 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 13:43:09.030494 sshd[4048]: Connection closed by 172.24.4.1 port 50890
Jan 30 13:43:09.030319 sshd-session[4046]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:09.036136 systemd[1]: sshd@10-172.24.4.90:22-172.24.4.1:50890.service: Deactivated successfully.
Jan 30 13:43:09.040034 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 13:43:09.042205 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit.
Jan 30 13:43:09.045063 systemd-logind[1440]: Removed session 13.
Jan 30 13:43:14.055616 systemd[1]: Started sshd@11-172.24.4.90:22-172.24.4.1:58256.service - OpenSSH per-connection server daemon (172.24.4.1:58256).
Jan 30 13:43:15.550088 sshd[4060]: Accepted publickey for core from 172.24.4.1 port 58256 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:43:15.555119 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:15.572532 systemd-logind[1440]: New session 14 of user core.
Jan 30 13:43:15.581285 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 13:43:16.373026 sshd[4062]: Connection closed by 172.24.4.1 port 58256
Jan 30 13:43:16.372858 sshd-session[4060]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:16.379182 systemd[1]: sshd@11-172.24.4.90:22-172.24.4.1:58256.service: Deactivated successfully.
Jan 30 13:43:16.383665 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 13:43:16.387934 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit.
Jan 30 13:43:16.390240 systemd-logind[1440]: Removed session 14.
Jan 30 13:43:21.392488 systemd[1]: Started sshd@12-172.24.4.90:22-172.24.4.1:58270.service - OpenSSH per-connection server daemon (172.24.4.1:58270).
Jan 30 13:43:22.935109 sshd[4074]: Accepted publickey for core from 172.24.4.1 port 58270 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:43:22.938365 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:22.948296 systemd-logind[1440]: New session 15 of user core.
Jan 30 13:43:22.956285 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 13:43:23.697564 sshd[4076]: Connection closed by 172.24.4.1 port 58270
Jan 30 13:43:23.698820 sshd-session[4074]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:23.713903 systemd[1]: sshd@12-172.24.4.90:22-172.24.4.1:58270.service: Deactivated successfully.
Jan 30 13:43:23.718710 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 13:43:23.723621 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit.
Jan 30 13:43:23.731557 systemd[1]: Started sshd@13-172.24.4.90:22-172.24.4.1:33624.service - OpenSSH per-connection server daemon (172.24.4.1:33624).
Jan 30 13:43:23.735785 systemd-logind[1440]: Removed session 15.
Jan 30 13:43:25.235449 sshd[4088]: Accepted publickey for core from 172.24.4.1 port 33624 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:43:25.238125 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:25.247671 systemd-logind[1440]: New session 16 of user core.
Jan 30 13:43:25.258291 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 13:43:26.111847 sshd[4090]: Connection closed by 172.24.4.1 port 33624
Jan 30 13:43:26.111583 sshd-session[4088]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:26.128338 systemd[1]: sshd@13-172.24.4.90:22-172.24.4.1:33624.service: Deactivated successfully.
Jan 30 13:43:26.132738 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 13:43:26.137485 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit.
Jan 30 13:43:26.146522 systemd[1]: Started sshd@14-172.24.4.90:22-172.24.4.1:33626.service - OpenSSH per-connection server daemon (172.24.4.1:33626).
Jan 30 13:43:26.150907 systemd-logind[1440]: Removed session 16.
Jan 30 13:43:27.407028 sshd[4099]: Accepted publickey for core from 172.24.4.1 port 33626 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:43:27.409666 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:27.421528 systemd-logind[1440]: New session 17 of user core.
Jan 30 13:43:27.429314 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 13:43:28.296038 sshd[4101]: Connection closed by 172.24.4.1 port 33626
Jan 30 13:43:28.297072 sshd-session[4099]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:28.302058 systemd[1]: sshd@14-172.24.4.90:22-172.24.4.1:33626.service: Deactivated successfully.
Jan 30 13:43:28.306276 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:43:28.309848 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:43:28.312569 systemd-logind[1440]: Removed session 17.
Jan 30 13:43:33.325733 systemd[1]: Started sshd@15-172.24.4.90:22-172.24.4.1:33628.service - OpenSSH per-connection server daemon (172.24.4.1:33628).
Jan 30 13:43:34.558274 sshd[4114]: Accepted publickey for core from 172.24.4.1 port 33628 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:43:34.561049 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:34.577753 systemd-logind[1440]: New session 18 of user core.
Jan 30 13:43:34.584265 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:43:35.345767 sshd[4116]: Connection closed by 172.24.4.1 port 33628
Jan 30 13:43:35.346354 sshd-session[4114]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:35.358623 systemd[1]: sshd@15-172.24.4.90:22-172.24.4.1:33628.service: Deactivated successfully.
Jan 30 13:43:35.362550 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:43:35.366316 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:43:35.373633 systemd[1]: Started sshd@16-172.24.4.90:22-172.24.4.1:34198.service - OpenSSH per-connection server daemon (172.24.4.1:34198).
Jan 30 13:43:35.377469 systemd-logind[1440]: Removed session 18.
Jan 30 13:43:36.666147 sshd[4127]: Accepted publickey for core from 172.24.4.1 port 34198 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:43:36.668946 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:36.684699 systemd-logind[1440]: New session 19 of user core.
Jan 30 13:43:36.692287 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:43:37.580136 sshd[4131]: Connection closed by 172.24.4.1 port 34198
Jan 30 13:43:37.580750 sshd-session[4127]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:37.596828 systemd[1]: sshd@16-172.24.4.90:22-172.24.4.1:34198.service: Deactivated successfully.
Jan 30 13:43:37.601355 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:43:37.604502 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:43:37.614622 systemd[1]: Started sshd@17-172.24.4.90:22-172.24.4.1:34206.service - OpenSSH per-connection server daemon (172.24.4.1:34206).
Jan 30 13:43:37.621471 systemd-logind[1440]: Removed session 19.
Jan 30 13:43:38.903195 sshd[4140]: Accepted publickey for core from 172.24.4.1 port 34206 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:43:38.905941 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:38.917610 systemd-logind[1440]: New session 20 of user core.
Jan 30 13:43:38.922284 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:43:41.664289 sshd[4142]: Connection closed by 172.24.4.1 port 34206
Jan 30 13:43:41.663944 sshd-session[4140]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:41.678512 systemd[1]: sshd@17-172.24.4.90:22-172.24.4.1:34206.service: Deactivated successfully.
Jan 30 13:43:41.683835 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:43:41.688085 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:43:41.695663 systemd[1]: Started sshd@18-172.24.4.90:22-172.24.4.1:34222.service - OpenSSH per-connection server daemon (172.24.4.1:34222).
Jan 30 13:43:41.698501 systemd-logind[1440]: Removed session 20.
Jan 30 13:43:42.885087 sshd[4158]: Accepted publickey for core from 172.24.4.1 port 34222 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:43:42.889856 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:42.901926 systemd-logind[1440]: New session 21 of user core.
Jan 30 13:43:42.910286 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:43:43.848996 sshd[4160]: Connection closed by 172.24.4.1 port 34222
Jan 30 13:43:43.849353 sshd-session[4158]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:43.866894 systemd[1]: sshd@18-172.24.4.90:22-172.24.4.1:34222.service: Deactivated successfully.
Jan 30 13:43:43.870652 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:43:43.875719 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:43:43.883611 systemd[1]: Started sshd@19-172.24.4.90:22-172.24.4.1:47554.service - OpenSSH per-connection server daemon (172.24.4.1:47554).
Jan 30 13:43:43.886764 systemd-logind[1440]: Removed session 21.
Jan 30 13:43:45.376263 sshd[4169]: Accepted publickey for core from 172.24.4.1 port 47554 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:43:45.379552 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:45.391172 systemd-logind[1440]: New session 22 of user core.
Jan 30 13:43:45.400345 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:43:46.209057 sshd[4171]: Connection closed by 172.24.4.1 port 47554
Jan 30 13:43:46.210090 sshd-session[4169]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:46.216574 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:43:46.218159 systemd[1]: sshd@19-172.24.4.90:22-172.24.4.1:47554.service: Deactivated successfully.
Jan 30 13:43:46.222743 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:43:46.226521 systemd-logind[1440]: Removed session 22.
Jan 30 13:43:51.233663 systemd[1]: Started sshd@20-172.24.4.90:22-172.24.4.1:47566.service - OpenSSH per-connection server daemon (172.24.4.1:47566).
Jan 30 13:43:52.638530 sshd[4185]: Accepted publickey for core from 172.24.4.1 port 47566 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:43:52.641165 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:52.652540 systemd-logind[1440]: New session 23 of user core.
Jan 30 13:43:52.658282 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 13:43:53.303220 sshd[4187]: Connection closed by 172.24.4.1 port 47566
Jan 30 13:43:53.304331 sshd-session[4185]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:53.311857 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:43:53.312025 systemd[1]: sshd@20-172.24.4.90:22-172.24.4.1:47566.service: Deactivated successfully.
Jan 30 13:43:53.318873 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:43:53.324154 systemd-logind[1440]: Removed session 23.
Jan 30 13:43:58.328517 systemd[1]: Started sshd@21-172.24.4.90:22-172.24.4.1:46348.service - OpenSSH per-connection server daemon (172.24.4.1:46348).
Jan 30 13:43:59.527160 sshd[4199]: Accepted publickey for core from 172.24.4.1 port 46348 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:43:59.530678 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:59.541568 systemd-logind[1440]: New session 24 of user core.
Jan 30 13:43:59.551246 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 13:44:00.231103 sshd[4201]: Connection closed by 172.24.4.1 port 46348
Jan 30 13:44:00.232190 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
Jan 30 13:44:00.239667 systemd[1]: sshd@21-172.24.4.90:22-172.24.4.1:46348.service: Deactivated successfully.
Jan 30 13:44:00.244533 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 13:44:00.246284 systemd-logind[1440]: Session 24 logged out. Waiting for processes to exit.
Jan 30 13:44:00.248519 systemd-logind[1440]: Removed session 24.
Jan 30 13:44:05.254525 systemd[1]: Started sshd@22-172.24.4.90:22-172.24.4.1:42292.service - OpenSSH per-connection server daemon (172.24.4.1:42292).
Jan 30 13:44:06.428621 sshd[4212]: Accepted publickey for core from 172.24.4.1 port 42292 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:44:06.431267 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:44:06.441065 systemd-logind[1440]: New session 25 of user core.
Jan 30 13:44:06.451274 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 13:44:07.175662 sshd[4216]: Connection closed by 172.24.4.1 port 42292
Jan 30 13:44:07.176061 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
Jan 30 13:44:07.185641 systemd[1]: sshd@22-172.24.4.90:22-172.24.4.1:42292.service: Deactivated successfully.
Jan 30 13:44:07.188872 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 13:44:07.191248 systemd-logind[1440]: Session 25 logged out. Waiting for processes to exit.
Jan 30 13:44:07.201584 systemd[1]: Started sshd@23-172.24.4.90:22-172.24.4.1:42306.service - OpenSSH per-connection server daemon (172.24.4.1:42306).
Jan 30 13:44:07.203853 systemd-logind[1440]: Removed session 25.
Jan 30 13:44:08.245729 sshd[4226]: Accepted publickey for core from 172.24.4.1 port 42306 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:44:08.248546 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:44:08.259089 systemd-logind[1440]: New session 26 of user core.
Jan 30 13:44:08.266219 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 13:44:10.912977 containerd[1462]: time="2025-01-30T13:44:10.912717413Z" level=info msg="StopContainer for \"e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869\" with timeout 30 (s)"
Jan 30 13:44:10.913993 containerd[1462]: time="2025-01-30T13:44:10.913863063Z" level=info msg="Stop container \"e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869\" with signal terminated"
Jan 30 13:44:10.920565 containerd[1462]: time="2025-01-30T13:44:10.920464006Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:44:10.929734 systemd[1]: cri-containerd-e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869.scope: Deactivated successfully.
Jan 30 13:44:10.930507 containerd[1462]: time="2025-01-30T13:44:10.930193259Z" level=info msg="StopContainer for \"39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442\" with timeout 2 (s)"
Jan 30 13:44:10.931131 containerd[1462]: time="2025-01-30T13:44:10.930944919Z" level=info msg="Stop container \"39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442\" with signal terminated"
Jan 30 13:44:10.941573 systemd-networkd[1367]: lxc_health: Link DOWN
Jan 30 13:44:10.941580 systemd-networkd[1367]: lxc_health: Lost carrier
Jan 30 13:44:10.952691 systemd[1]: cri-containerd-39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442.scope: Deactivated successfully.
Jan 30 13:44:10.953598 systemd[1]: cri-containerd-39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442.scope: Consumed 8.246s CPU time.
Jan 30 13:44:10.962511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869-rootfs.mount: Deactivated successfully.
Jan 30 13:44:10.982263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442-rootfs.mount: Deactivated successfully.
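The level=error entry above is containerd's CNI watcher reacting to the removal of /etc/cni/net.d/05-cilium.conf: with no remaining config files it declares the CNI plugin uninitialized until the replacement Cilium pod writes a fresh one. A stdlib-only Go sketch (an illustration, not containerd's actual loader) of that presence check:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d"
	var found []string
	// Containerd accepts several config extensions; glob each in turn.
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		found = append(found, m...)
	}
	if len(found) == 0 {
		// The condition behind the logged error above.
		fmt.Printf("cni config load failed: no network config found in %s\n", dir)
		return
	}
	fmt.Println("cni configs:", found)
}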
Jan 30 13:44:10.988542 containerd[1462]: time="2025-01-30T13:44:10.988464270Z" level=info msg="shim disconnected" id=39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442 namespace=k8s.io
Jan 30 13:44:10.988542 containerd[1462]: time="2025-01-30T13:44:10.988516738Z" level=warning msg="cleaning up after shim disconnected" id=39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442 namespace=k8s.io
Jan 30 13:44:10.988542 containerd[1462]: time="2025-01-30T13:44:10.988526647Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:44:10.989573 containerd[1462]: time="2025-01-30T13:44:10.989473263Z" level=info msg="shim disconnected" id=e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869 namespace=k8s.io
Jan 30 13:44:10.989783 containerd[1462]: time="2025-01-30T13:44:10.989707242Z" level=warning msg="cleaning up after shim disconnected" id=e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869 namespace=k8s.io
Jan 30 13:44:10.989783 containerd[1462]: time="2025-01-30T13:44:10.989723783Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:44:11.017763 containerd[1462]: time="2025-01-30T13:44:11.017662808Z" level=info msg="StopContainer for \"e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869\" returns successfully"
Jan 30 13:44:11.019983 containerd[1462]: time="2025-01-30T13:44:11.018647165Z" level=info msg="StopPodSandbox for \"1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239\""
Jan 30 13:44:11.019983 containerd[1462]: time="2025-01-30T13:44:11.018697529Z" level=info msg="Container to stop \"e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:44:11.020249 containerd[1462]: time="2025-01-30T13:44:11.020227409Z" level=info msg="StopContainer for \"39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442\" returns successfully"
Jan 30 13:44:11.020794 containerd[1462]: time="2025-01-30T13:44:11.020736043Z" level=info msg="StopPodSandbox for \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\""
Jan 30 13:44:11.020794 containerd[1462]: time="2025-01-30T13:44:11.020775157Z" level=info msg="Container to stop \"39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:44:11.022185 containerd[1462]: time="2025-01-30T13:44:11.020796046Z" level=info msg="Container to stop \"88cb1dea2f5558d44adc57f235ef143f23e8b66181cfd542836bb3c7e4663886\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:44:11.022185 containerd[1462]: time="2025-01-30T13:44:11.020807878Z" level=info msg="Container to stop \"5a4bdbacb21bf96469d687d507292c1413b7961e42054e85c55ee8ba74f070ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:44:11.022185 containerd[1462]: time="2025-01-30T13:44:11.020818498Z" level=info msg="Container to stop \"03ae6ca8a56acf826755b89bfb9ab73ff76c940b448f54ab1c5b8f30ff1f6634\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:44:11.022185 containerd[1462]: time="2025-01-30T13:44:11.020828627Z" level=info msg="Container to stop \"cd5aa89de36956d677aaeb17f70a45752d3639e5e56590b3d41ff0bb65000525\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:44:11.021083 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239-shm.mount: Deactivated successfully.
Jan 30 13:44:11.027268 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0-shm.mount: Deactivated successfully.
Jan 30 13:44:11.032454 systemd[1]: cri-containerd-89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0.scope: Deactivated successfully.
Jan 30 13:44:11.034559 systemd[1]: cri-containerd-1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239.scope: Deactivated successfully.
Jan 30 13:44:11.099567 containerd[1462]: time="2025-01-30T13:44:11.099481786Z" level=info msg="shim disconnected" id=1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239 namespace=k8s.io
Jan 30 13:44:11.099567 containerd[1462]: time="2025-01-30T13:44:11.099560764Z" level=warning msg="cleaning up after shim disconnected" id=1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239 namespace=k8s.io
Jan 30 13:44:11.099567 containerd[1462]: time="2025-01-30T13:44:11.099571474Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:44:11.102849 containerd[1462]: time="2025-01-30T13:44:11.102568697Z" level=info msg="shim disconnected" id=89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0 namespace=k8s.io
Jan 30 13:44:11.102849 containerd[1462]: time="2025-01-30T13:44:11.102614874Z" level=warning msg="cleaning up after shim disconnected" id=89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0 namespace=k8s.io
Jan 30 13:44:11.102849 containerd[1462]: time="2025-01-30T13:44:11.102626055Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:44:11.119393 containerd[1462]: time="2025-01-30T13:44:11.119340642Z" level=info msg="TearDown network for sandbox \"1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239\" successfully"
Jan 30 13:44:11.119393 containerd[1462]: time="2025-01-30T13:44:11.119377481Z" level=info msg="StopPodSandbox for \"1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239\" returns successfully"
Jan 30 13:44:11.120486 containerd[1462]: time="2025-01-30T13:44:11.120449793Z" level=info msg="TearDown network for sandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" successfully"
Jan 30 13:44:11.120486 containerd[1462]: time="2025-01-30T13:44:11.120472866Z" level=info msg="StopPodSandbox for \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" returns successfully"
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233627 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-lib-modules\") pod \"d5d745c7-2a29-4bdb-9abd-13b391268950\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233679 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5d745c7-2a29-4bdb-9abd-13b391268950-clustermesh-secrets\") pod \"d5d745c7-2a29-4bdb-9abd-13b391268950\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233700 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-host-proc-sys-kernel\") pod \"d5d745c7-2a29-4bdb-9abd-13b391268950\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233723 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5d745c7-2a29-4bdb-9abd-13b391268950-cilium-config-path\") pod \"d5d745c7-2a29-4bdb-9abd-13b391268950\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233744 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-host-proc-sys-net\") pod \"d5d745c7-2a29-4bdb-9abd-13b391268950\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233764 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-hostproc\") pod \"d5d745c7-2a29-4bdb-9abd-13b391268950\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233780 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-bpf-maps\") pod \"d5d745c7-2a29-4bdb-9abd-13b391268950\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233803 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87pp6\" (UniqueName: \"kubernetes.io/projected/d5d745c7-2a29-4bdb-9abd-13b391268950-kube-api-access-87pp6\") pod \"d5d745c7-2a29-4bdb-9abd-13b391268950\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233822 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-etc-cni-netd\") pod \"d5d745c7-2a29-4bdb-9abd-13b391268950\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233842 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7222\" (UniqueName: \"kubernetes.io/projected/eca617aa-a431-437c-adc3-e28355e2413c-kube-api-access-w7222\") pod \"eca617aa-a431-437c-adc3-e28355e2413c\" (UID: \"eca617aa-a431-437c-adc3-e28355e2413c\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233862 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5d745c7-2a29-4bdb-9abd-13b391268950-hubble-tls\") pod \"d5d745c7-2a29-4bdb-9abd-13b391268950\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233879 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-cilium-cgroup\") pod \"d5d745c7-2a29-4bdb-9abd-13b391268950\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233898 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eca617aa-a431-437c-adc3-e28355e2413c-cilium-config-path\") pod \"eca617aa-a431-437c-adc3-e28355e2413c\" (UID: \"eca617aa-a431-437c-adc3-e28355e2413c\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233915 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-xtables-lock\") pod \"d5d745c7-2a29-4bdb-9abd-13b391268950\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233932 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-cilium-run\") pod \"d5d745c7-2a29-4bdb-9abd-13b391268950\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") "
Jan 30 13:44:11.234483 kubelet[2645]: I0130 13:44:11.233976 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-cni-path\") pod \"d5d745c7-2a29-4bdb-9abd-13b391268950\" (UID: \"d5d745c7-2a29-4bdb-9abd-13b391268950\") "
Jan 30 13:44:11.235629 kubelet[2645]: I0130 13:44:11.234045 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-cni-path" (OuterVolumeSpecName: "cni-path") pod "d5d745c7-2a29-4bdb-9abd-13b391268950" (UID: "d5d745c7-2a29-4bdb-9abd-13b391268950"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:44:11.235629 kubelet[2645]: I0130 13:44:11.234083 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d5d745c7-2a29-4bdb-9abd-13b391268950" (UID: "d5d745c7-2a29-4bdb-9abd-13b391268950"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:44:11.235629 kubelet[2645]: I0130 13:44:11.234099 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d5d745c7-2a29-4bdb-9abd-13b391268950" (UID: "d5d745c7-2a29-4bdb-9abd-13b391268950"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:44:11.242034 kubelet[2645]: I0130 13:44:11.240870 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d5d745c7-2a29-4bdb-9abd-13b391268950" (UID: "d5d745c7-2a29-4bdb-9abd-13b391268950"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:44:11.242938 kubelet[2645]: I0130 13:44:11.242438 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d5d745c7-2a29-4bdb-9abd-13b391268950" (UID: "d5d745c7-2a29-4bdb-9abd-13b391268950"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:44:11.243136 kubelet[2645]: I0130 13:44:11.242462 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d5d745c7-2a29-4bdb-9abd-13b391268950" (UID: "d5d745c7-2a29-4bdb-9abd-13b391268950"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:44:11.243208 kubelet[2645]: I0130 13:44:11.243145 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d5d745c7-2a29-4bdb-9abd-13b391268950" (UID: "d5d745c7-2a29-4bdb-9abd-13b391268950"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:44:11.246030 kubelet[2645]: I0130 13:44:11.242804 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d5d745c7-2a29-4bdb-9abd-13b391268950" (UID: "d5d745c7-2a29-4bdb-9abd-13b391268950"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:44:11.246102 kubelet[2645]: I0130 13:44:11.246076 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-hostproc" (OuterVolumeSpecName: "hostproc") pod "d5d745c7-2a29-4bdb-9abd-13b391268950" (UID: "d5d745c7-2a29-4bdb-9abd-13b391268950"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:44:11.246161 kubelet[2645]: I0130 13:44:11.246128 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d5d745c7-2a29-4bdb-9abd-13b391268950" (UID: "d5d745c7-2a29-4bdb-9abd-13b391268950"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:44:11.246328 kubelet[2645]: I0130 13:44:11.246284 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5d745c7-2a29-4bdb-9abd-13b391268950-kube-api-access-87pp6" (OuterVolumeSpecName: "kube-api-access-87pp6") pod "d5d745c7-2a29-4bdb-9abd-13b391268950" (UID: "d5d745c7-2a29-4bdb-9abd-13b391268950"). InnerVolumeSpecName "kube-api-access-87pp6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:44:11.246395 kubelet[2645]: I0130 13:44:11.246301 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5d745c7-2a29-4bdb-9abd-13b391268950-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d5d745c7-2a29-4bdb-9abd-13b391268950" (UID: "d5d745c7-2a29-4bdb-9abd-13b391268950"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:44:11.246479 kubelet[2645]: I0130 13:44:11.246435 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5d745c7-2a29-4bdb-9abd-13b391268950-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d5d745c7-2a29-4bdb-9abd-13b391268950" (UID: "d5d745c7-2a29-4bdb-9abd-13b391268950"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:44:11.246649 kubelet[2645]: I0130 13:44:11.246594 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca617aa-a431-437c-adc3-e28355e2413c-kube-api-access-w7222" (OuterVolumeSpecName: "kube-api-access-w7222") pod "eca617aa-a431-437c-adc3-e28355e2413c" (UID: "eca617aa-a431-437c-adc3-e28355e2413c"). InnerVolumeSpecName "kube-api-access-w7222". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:44:11.248511 kubelet[2645]: I0130 13:44:11.248467 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca617aa-a431-437c-adc3-e28355e2413c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eca617aa-a431-437c-adc3-e28355e2413c" (UID: "eca617aa-a431-437c-adc3-e28355e2413c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:44:11.248778 kubelet[2645]: I0130 13:44:11.248678 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d745c7-2a29-4bdb-9abd-13b391268950-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d5d745c7-2a29-4bdb-9abd-13b391268950" (UID: "d5d745c7-2a29-4bdb-9abd-13b391268950"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:44:11.335240 kubelet[2645]: I0130 13:44:11.335141 2645 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-xtables-lock\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335240 kubelet[2645]: I0130 13:44:11.335228 2645 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-cilium-run\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335240 kubelet[2645]: I0130 13:44:11.335258 2645 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-cni-path\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335622 kubelet[2645]: I0130 13:44:11.335284 2645 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5d745c7-2a29-4bdb-9abd-13b391268950-cilium-config-path\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335622 kubelet[2645]: I0130 13:44:11.335312 2645 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-host-proc-sys-net\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335622 kubelet[2645]: I0130 13:44:11.335336 2645 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-hostproc\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335622 kubelet[2645]: I0130 13:44:11.335360 2645 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-lib-modules\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335622 kubelet[2645]: I0130 13:44:11.335383 2645 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5d745c7-2a29-4bdb-9abd-13b391268950-clustermesh-secrets\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335622 kubelet[2645]: I0130 13:44:11.335406 2645 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-host-proc-sys-kernel\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335622 kubelet[2645]: I0130 13:44:11.335430 2645 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-bpf-maps\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335622 kubelet[2645]: I0130 13:44:11.335455 2645 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-87pp6\" (UniqueName: \"kubernetes.io/projected/d5d745c7-2a29-4bdb-9abd-13b391268950-kube-api-access-87pp6\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335622 kubelet[2645]: I0130 13:44:11.335479 2645 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-etc-cni-netd\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335622 kubelet[2645]: I0130 13:44:11.335503 2645 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w7222\" (UniqueName: \"kubernetes.io/projected/eca617aa-a431-437c-adc3-e28355e2413c-kube-api-access-w7222\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335622 kubelet[2645]: I0130 13:44:11.335528 2645 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5d745c7-2a29-4bdb-9abd-13b391268950-hubble-tls\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335622 kubelet[2645]: I0130 13:44:11.335554 2645 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5d745c7-2a29-4bdb-9abd-13b391268950-cilium-cgroup\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.335622 kubelet[2645]: I0130 13:44:11.335576 2645 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eca617aa-a431-437c-adc3-e28355e2413c-cilium-config-path\") on node \"ci-4186-1-0-f-d1cd2b53be.novalocal\" DevicePath \"\""
Jan 30 13:44:11.899478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239-rootfs.mount: Deactivated successfully.
Jan 30 13:44:11.899708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0-rootfs.mount: Deactivated successfully.
Jan 30 13:44:11.899877 systemd[1]: var-lib-kubelet-pods-eca617aa\x2da431\x2d437c\x2dadc3\x2de28355e2413c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw7222.mount: Deactivated successfully.
Jan 30 13:44:11.900138 systemd[1]: var-lib-kubelet-pods-d5d745c7\x2d2a29\x2d4bdb\x2d9abd\x2d13b391268950-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d87pp6.mount: Deactivated successfully.
Jan 30 13:44:11.900309 systemd[1]: var-lib-kubelet-pods-d5d745c7\x2d2a29\x2d4bdb\x2d9abd\x2d13b391268950-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 30 13:44:11.900475 systemd[1]: var-lib-kubelet-pods-d5d745c7\x2d2a29\x2d4bdb\x2d9abd\x2d13b391268950-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 30 13:44:12.079126 kubelet[2645]: I0130 13:44:12.078863 2645 scope.go:117] "RemoveContainer" containerID="e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869"
Jan 30 13:44:12.085809 containerd[1462]: time="2025-01-30T13:44:12.085725788Z" level=info msg="RemoveContainer for \"e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869\""
Jan 30 13:44:12.092594 systemd[1]: Removed slice kubepods-besteffort-podeca617aa_a431_437c_adc3_e28355e2413c.slice - libcontainer container kubepods-besteffort-podeca617aa_a431_437c_adc3_e28355e2413c.slice.
Jan 30 13:44:12.116557 containerd[1462]: time="2025-01-30T13:44:12.115767338Z" level=info msg="RemoveContainer for \"e4494c196c436d7206f9fa57c00bbd4694f2cdbe80ed0a019ff4de1883326869\" returns successfully"
Jan 30 13:44:12.117841 kubelet[2645]: I0130 13:44:12.117065 2645 scope.go:117] "RemoveContainer" containerID="39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442"
Jan 30 13:44:12.118625 systemd[1]: Removed slice kubepods-burstable-podd5d745c7_2a29_4bdb_9abd_13b391268950.slice - libcontainer container kubepods-burstable-podd5d745c7_2a29_4bdb_9abd_13b391268950.slice.
Jan 30 13:44:12.118816 systemd[1]: kubepods-burstable-podd5d745c7_2a29_4bdb_9abd_13b391268950.slice: Consumed 8.339s CPU time.
Jan 30 13:44:12.121404 containerd[1462]: time="2025-01-30T13:44:12.120656383Z" level=info msg="RemoveContainer for \"39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442\""
Jan 30 13:44:12.128478 containerd[1462]: time="2025-01-30T13:44:12.128370371Z" level=info msg="RemoveContainer for \"39ff5c3cd9232709895b1e2610ef579c208a8cf12d44bc0f7c9724c860997442\" returns successfully"
Jan 30 13:44:12.129784 kubelet[2645]: I0130 13:44:12.129593 2645 scope.go:117] "RemoveContainer" containerID="cd5aa89de36956d677aaeb17f70a45752d3639e5e56590b3d41ff0bb65000525"
Jan 30 13:44:12.133295 containerd[1462]: time="2025-01-30T13:44:12.132847042Z" level=info msg="RemoveContainer for \"cd5aa89de36956d677aaeb17f70a45752d3639e5e56590b3d41ff0bb65000525\""
Jan 30 13:44:12.139509 containerd[1462]: time="2025-01-30T13:44:12.139425698Z" level=info msg="RemoveContainer for \"cd5aa89de36956d677aaeb17f70a45752d3639e5e56590b3d41ff0bb65000525\" returns successfully"
Jan 30 13:44:12.140441 kubelet[2645]: I0130 13:44:12.139844 2645 scope.go:117] "RemoveContainer" containerID="5a4bdbacb21bf96469d687d507292c1413b7961e42054e85c55ee8ba74f070ee"
Jan 30 13:44:12.149679 containerd[1462]: time="2025-01-30T13:44:12.149051994Z" level=info msg="RemoveContainer for \"5a4bdbacb21bf96469d687d507292c1413b7961e42054e85c55ee8ba74f070ee\""
Jan 30 13:44:12.158560 containerd[1462]: time="2025-01-30T13:44:12.158369521Z" level=info msg="RemoveContainer for \"5a4bdbacb21bf96469d687d507292c1413b7961e42054e85c55ee8ba74f070ee\" returns successfully"
Jan 30 13:44:12.160377 kubelet[2645]: I0130 13:44:12.160331 2645 scope.go:117] "RemoveContainer" containerID="03ae6ca8a56acf826755b89bfb9ab73ff76c940b448f54ab1c5b8f30ff1f6634"
Jan 30 13:44:12.162372 containerd[1462]: time="2025-01-30T13:44:12.162111854Z" level=info msg="RemoveContainer for \"03ae6ca8a56acf826755b89bfb9ab73ff76c940b448f54ab1c5b8f30ff1f6634\""
Jan 30 13:44:12.166690 containerd[1462]: time="2025-01-30T13:44:12.166585991Z" level=info msg="RemoveContainer for \"03ae6ca8a56acf826755b89bfb9ab73ff76c940b448f54ab1c5b8f30ff1f6634\" returns successfully"
Jan 30 13:44:12.167201 kubelet[2645]: I0130 13:44:12.167061 2645 scope.go:117] "RemoveContainer" containerID="88cb1dea2f5558d44adc57f235ef143f23e8b66181cfd542836bb3c7e4663886"
Jan 30 13:44:12.169406 containerd[1462]: time="2025-01-30T13:44:12.168732609Z" level=info msg="RemoveContainer for \"88cb1dea2f5558d44adc57f235ef143f23e8b66181cfd542836bb3c7e4663886\""
Jan 30 13:44:12.174559 containerd[1462]: time="2025-01-30T13:44:12.174525080Z" level=info msg="RemoveContainer for \"88cb1dea2f5558d44adc57f235ef143f23e8b66181cfd542836bb3c7e4663886\" returns successfully"
Jan 30 13:44:12.497139 kubelet[2645]: I0130 13:44:12.495204 2645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5d745c7-2a29-4bdb-9abd-13b391268950" path="/var/lib/kubelet/pods/d5d745c7-2a29-4bdb-9abd-13b391268950/volumes"
Jan 30 13:44:12.497139 kubelet[2645]: I0130 13:44:12.496679 2645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca617aa-a431-437c-adc3-e28355e2413c" path="/var/lib/kubelet/pods/eca617aa-a431-437c-adc3-e28355e2413c/volumes"
Jan 30 13:44:13.024022 sshd[4228]: Connection closed by 172.24.4.1 port 42306
Jan 30 13:44:13.024371 sshd-session[4226]: pam_unix(sshd:session): session closed for user core
Jan 30 13:44:13.035456 systemd[1]: sshd@23-172.24.4.90:22-172.24.4.1:42306.service: Deactivated successfully.
Jan 30 13:44:13.038421 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 13:44:13.038676 systemd[1]: session-26.scope: Consumed 1.623s CPU time.
Jan 30 13:44:13.040835 systemd-logind[1440]: Session 26 logged out. Waiting for processes to exit.
Jan 30 13:44:13.047735 systemd[1]: Started sshd@24-172.24.4.90:22-172.24.4.1:42322.service - OpenSSH per-connection server daemon (172.24.4.1:42322).
Jan 30 13:44:13.053045 systemd-logind[1440]: Removed session 26.
Jan 30 13:44:14.587220 sshd[4385]: Accepted publickey for core from 172.24.4.1 port 42322 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:44:14.590028 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:44:14.603125 systemd-logind[1440]: New session 27 of user core.
Jan 30 13:44:14.611353 systemd[1]: Started session-27.scope - Session 27 of User core.
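Figures like "Consumed 8.339s CPU time" and "Consumed 1.623s CPU time" above come from per-unit cgroup CPU accounting, which systemd reports as it deactivates a slice or scope. A Go sketch of reading the same counter on a cgroup v2 host (the unit path below is an example, not taken from this log):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Example cgroup v2 location of a unit's CPU statistics; adjust per host.
	path := "/sys/fs/cgroup/system.slice/sshd.service/cpu.stat"
	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		// usage_usec is total CPU time consumed by the cgroup, in microseconds.
		if len(fields) == 2 && fields[0] == "usage_usec" {
			usec, _ := strconv.ParseInt(fields[1], 10, 64)
			fmt.Println("consumed:", time.Duration(usec)*time.Microsecond)
		}
	}
}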
Jan 30 13:44:15.627035 kubelet[2645]: E0130 13:44:15.626900 2645 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 13:44:15.708538 kubelet[2645]: E0130 13:44:15.707580 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5d745c7-2a29-4bdb-9abd-13b391268950" containerName="mount-cgroup"
Jan 30 13:44:15.708538 kubelet[2645]: E0130 13:44:15.707606 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5d745c7-2a29-4bdb-9abd-13b391268950" containerName="clean-cilium-state"
Jan 30 13:44:15.708538 kubelet[2645]: E0130 13:44:15.707614 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5d745c7-2a29-4bdb-9abd-13b391268950" containerName="cilium-agent"
Jan 30 13:44:15.708538 kubelet[2645]: E0130 13:44:15.707622 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5d745c7-2a29-4bdb-9abd-13b391268950" containerName="apply-sysctl-overwrites"
Jan 30 13:44:15.708538 kubelet[2645]: E0130 13:44:15.707628 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5d745c7-2a29-4bdb-9abd-13b391268950" containerName="mount-bpf-fs"
Jan 30 13:44:15.708538 kubelet[2645]: E0130 13:44:15.707635 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eca617aa-a431-437c-adc3-e28355e2413c" containerName="cilium-operator"
Jan 30 13:44:15.708538 kubelet[2645]: I0130 13:44:15.707660 2645 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5d745c7-2a29-4bdb-9abd-13b391268950" containerName="cilium-agent"
Jan 30 13:44:15.708538 kubelet[2645]: I0130 13:44:15.707667 2645 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca617aa-a431-437c-adc3-e28355e2413c" containerName="cilium-operator"
Jan 30 13:44:15.719752 systemd[1]: Created slice kubepods-burstable-pod5450e817_1f22_4d49_9b55_c18ceaaa945a.slice - libcontainer container kubepods-burstable-pod5450e817_1f22_4d49_9b55_c18ceaaa945a.slice.
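The RemoveStaleState entries above are kubelet's CPU and memory managers dropping per-container resource state keyed by pod UID and container name for the two deleted pods, just before the replacement cilium-9ddnn pod is admitted. An illustration of the pattern (a sketch of the idea, not kubelet source; the cpuset values are invented):

package main

import "fmt"

// Resource-manager state is keyed by (pod UID, container name).
type key struct{ podUID, container string }

func main() {
	assignments := map[key]string{
		{"d5d745c7-2a29-4bdb-9abd-13b391268950", "cilium-agent"}:    "cpuset 0-1",
		{"eca617aa-a431-437c-adc3-e28355e2413c", "cilium-operator"}: "cpuset 2",
	}
	active := map[string]bool{} // both pods were deleted above, so none active

	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q name=%q\n",
				k.podUID, k.container)
			delete(assignments, k) // safe to delete while ranging in Go
		}
	}
}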
Jan 30 13:44:15.871055 kubelet[2645]: I0130 13:44:15.870888 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5450e817-1f22-4d49-9b55-c18ceaaa945a-host-proc-sys-net\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.871055 kubelet[2645]: I0130 13:44:15.871018 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5450e817-1f22-4d49-9b55-c18ceaaa945a-hubble-tls\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.871369 kubelet[2645]: I0130 13:44:15.871102 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5450e817-1f22-4d49-9b55-c18ceaaa945a-cilium-run\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.871369 kubelet[2645]: I0130 13:44:15.871153 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5450e817-1f22-4d49-9b55-c18ceaaa945a-etc-cni-netd\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.871369 kubelet[2645]: I0130 13:44:15.871196 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5450e817-1f22-4d49-9b55-c18ceaaa945a-lib-modules\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.871369 kubelet[2645]: I0130 13:44:15.871279 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5450e817-1f22-4d49-9b55-c18ceaaa945a-hostproc\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.871369 kubelet[2645]: I0130 13:44:15.871321 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5450e817-1f22-4d49-9b55-c18ceaaa945a-clustermesh-secrets\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.871675 kubelet[2645]: I0130 13:44:15.871366 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd5zc\" (UniqueName: \"kubernetes.io/projected/5450e817-1f22-4d49-9b55-c18ceaaa945a-kube-api-access-cd5zc\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.871675 kubelet[2645]: I0130 13:44:15.871411 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5450e817-1f22-4d49-9b55-c18ceaaa945a-cilium-ipsec-secrets\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.871675 kubelet[2645]: I0130 13:44:15.871456 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5450e817-1f22-4d49-9b55-c18ceaaa945a-xtables-lock\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.871675 kubelet[2645]: I0130 13:44:15.871500 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5450e817-1f22-4d49-9b55-c18ceaaa945a-cilium-config-path\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.871675 kubelet[2645]: I0130 13:44:15.871544 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5450e817-1f22-4d49-9b55-c18ceaaa945a-host-proc-sys-kernel\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.871675 kubelet[2645]: I0130 13:44:15.871586 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5450e817-1f22-4d49-9b55-c18ceaaa945a-cni-path\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.871675 kubelet[2645]: I0130 13:44:15.871629 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5450e817-1f22-4d49-9b55-c18ceaaa945a-bpf-maps\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.871675 kubelet[2645]: I0130 13:44:15.871669 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5450e817-1f22-4d49-9b55-c18ceaaa945a-cilium-cgroup\") pod \"cilium-9ddnn\" (UID: \"5450e817-1f22-4d49-9b55-c18ceaaa945a\") " pod="kube-system/cilium-9ddnn"
Jan 30 13:44:15.949076 sshd[4387]: Connection closed by 172.24.4.1 port 42322
Jan 30 13:44:15.950406 sshd-session[4385]: pam_unix(sshd:session): session closed for user core
Jan 30 13:44:15.967157 systemd[1]: sshd@24-172.24.4.90:22-172.24.4.1:42322.service: Deactivated successfully.
Jan 30 13:44:15.972914 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 13:44:15.979080 systemd-logind[1440]: Session 27 logged out. Waiting for processes to exit.
Jan 30 13:44:15.991947 systemd[1]: Started sshd@25-172.24.4.90:22-172.24.4.1:57710.service - OpenSSH per-connection server daemon (172.24.4.1:57710).
Jan 30 13:44:16.028717 systemd-logind[1440]: Removed session 27.
Jan 30 13:44:16.326108 containerd[1462]: time="2025-01-30T13:44:16.325925318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ddnn,Uid:5450e817-1f22-4d49-9b55-c18ceaaa945a,Namespace:kube-system,Attempt:0,}"
Jan 30 13:44:16.437096 containerd[1462]: time="2025-01-30T13:44:16.436442462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:44:16.437096 containerd[1462]: time="2025-01-30T13:44:16.436808871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:44:16.437096 containerd[1462]: time="2025-01-30T13:44:16.436916925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:44:16.439111 containerd[1462]: time="2025-01-30T13:44:16.438912529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:44:16.483110 systemd[1]: Started cri-containerd-4f0aac86bcd97037927d1355dadcee7fca28c4da9ef271f18c245c8c188cd8c4.scope - libcontainer container 4f0aac86bcd97037927d1355dadcee7fca28c4da9ef271f18c245c8c188cd8c4.
Jan 30 13:44:16.511608 containerd[1462]: time="2025-01-30T13:44:16.511569861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ddnn,Uid:5450e817-1f22-4d49-9b55-c18ceaaa945a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f0aac86bcd97037927d1355dadcee7fca28c4da9ef271f18c245c8c188cd8c4\""
Jan 30 13:44:16.516146 containerd[1462]: time="2025-01-30T13:44:16.516080379Z" level=info msg="CreateContainer within sandbox \"4f0aac86bcd97037927d1355dadcee7fca28c4da9ef271f18c245c8c188cd8c4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 13:44:16.535179 containerd[1462]: time="2025-01-30T13:44:16.534849908Z" level=info msg="CreateContainer within sandbox \"4f0aac86bcd97037927d1355dadcee7fca28c4da9ef271f18c245c8c188cd8c4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9ebe479052c4d0cc344b2ca275802d7fb1eaf0d052a9b064b06be88af7221c1c\""
Jan 30 13:44:16.536178 containerd[1462]: time="2025-01-30T13:44:16.536146518Z" level=info msg="StartContainer for \"9ebe479052c4d0cc344b2ca275802d7fb1eaf0d052a9b064b06be88af7221c1c\""
Jan 30 13:44:16.567123 systemd[1]: Started cri-containerd-9ebe479052c4d0cc344b2ca275802d7fb1eaf0d052a9b064b06be88af7221c1c.scope - libcontainer container 9ebe479052c4d0cc344b2ca275802d7fb1eaf0d052a9b064b06be88af7221c1c.
Jan 30 13:44:16.604934 systemd[1]: cri-containerd-9ebe479052c4d0cc344b2ca275802d7fb1eaf0d052a9b064b06be88af7221c1c.scope: Deactivated successfully.
Jan 30 13:44:16.610281 containerd[1462]: time="2025-01-30T13:44:16.610138901Z" level=info msg="StartContainer for \"9ebe479052c4d0cc344b2ca275802d7fb1eaf0d052a9b064b06be88af7221c1c\" returns successfully"
Jan 30 13:44:16.656097 containerd[1462]: time="2025-01-30T13:44:16.656044754Z" level=info msg="shim disconnected" id=9ebe479052c4d0cc344b2ca275802d7fb1eaf0d052a9b064b06be88af7221c1c namespace=k8s.io
Jan 30 13:44:16.656097 containerd[1462]: time="2025-01-30T13:44:16.656132409Z" level=warning msg="cleaning up after shim disconnected" id=9ebe479052c4d0cc344b2ca275802d7fb1eaf0d052a9b064b06be88af7221c1c namespace=k8s.io
Jan 30 13:44:16.656097 containerd[1462]: time="2025-01-30T13:44:16.656144231Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:44:17.131285 containerd[1462]: time="2025-01-30T13:44:17.131198916Z" level=info msg="CreateContainer within sandbox \"4f0aac86bcd97037927d1355dadcee7fca28c4da9ef271f18c245c8c188cd8c4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 13:44:17.168678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1268942511.mount: Deactivated successfully.
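The sequence above is the start of Cilium's init-container chain inside the new sandbox: each step (mount-cgroup first) is created, started, runs to completion, its scope deactivates, and its shim disconnects before the next CreateContainer. Step durations can be recovered from the journal timestamps; a small Go sketch (timestamp layout assumed, year supplied since the journal prefix omits it):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Journal prefix layout; fractional seconds are six digits in this log.
	const layout = "2006 Jan 2 15:04:05.000000"
	start, _ := time.Parse(layout, "2025 Jan 30 13:44:16.567123") // scope started
	done, _ := time.Parse(layout, "2025 Jan 30 13:44:16.610281")  // StartContainer returned
	fmt.Println("mount-cgroup step took", done.Sub(start))        // ~43ms
}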
Jan 30 13:44:17.176326 containerd[1462]: time="2025-01-30T13:44:17.175667110Z" level=info msg="CreateContainer within sandbox \"4f0aac86bcd97037927d1355dadcee7fca28c4da9ef271f18c245c8c188cd8c4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ff6a627fc976b66213d37bfa90743e083e988e11db48e9850ad25ba65546f2e8\""
Jan 30 13:44:17.181462 containerd[1462]: time="2025-01-30T13:44:17.181282306Z" level=info msg="StartContainer for \"ff6a627fc976b66213d37bfa90743e083e988e11db48e9850ad25ba65546f2e8\""
Jan 30 13:44:17.228102 systemd[1]: Started cri-containerd-ff6a627fc976b66213d37bfa90743e083e988e11db48e9850ad25ba65546f2e8.scope - libcontainer container ff6a627fc976b66213d37bfa90743e083e988e11db48e9850ad25ba65546f2e8.
Jan 30 13:44:17.366690 systemd[1]: cri-containerd-ff6a627fc976b66213d37bfa90743e083e988e11db48e9850ad25ba65546f2e8.scope: Deactivated successfully.
Jan 30 13:44:17.368220 containerd[1462]: time="2025-01-30T13:44:17.366920713Z" level=info msg="StartContainer for \"ff6a627fc976b66213d37bfa90743e083e988e11db48e9850ad25ba65546f2e8\" returns successfully"
Jan 30 13:44:17.392154 sshd[4400]: Accepted publickey for core from 172.24.4.1 port 57710 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:44:17.396478 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:44:17.411144 systemd-logind[1440]: New session 28 of user core.
Jan 30 13:44:17.427350 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 13:44:17.578131 containerd[1462]: time="2025-01-30T13:44:17.577817768Z" level=info msg="shim disconnected" id=ff6a627fc976b66213d37bfa90743e083e988e11db48e9850ad25ba65546f2e8 namespace=k8s.io
Jan 30 13:44:17.578131 containerd[1462]: time="2025-01-30T13:44:17.577912636Z" level=warning msg="cleaning up after shim disconnected" id=ff6a627fc976b66213d37bfa90743e083e988e11db48e9850ad25ba65546f2e8 namespace=k8s.io
Jan 30 13:44:17.578131 containerd[1462]: time="2025-01-30T13:44:17.577934487Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:44:17.609132 containerd[1462]: time="2025-01-30T13:44:17.608849659Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:44:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 13:44:17.918895 sshd[4552]: Connection closed by 172.24.4.1 port 57710
Jan 30 13:44:17.918691 sshd-session[4400]: pam_unix(sshd:session): session closed for user core
Jan 30 13:44:17.931747 systemd[1]: sshd@25-172.24.4.90:22-172.24.4.1:57710.service: Deactivated successfully.
Jan 30 13:44:17.935837 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 13:44:17.938553 systemd-logind[1440]: Session 28 logged out. Waiting for processes to exit.
Jan 30 13:44:17.942851 systemd-logind[1440]: Removed session 28.
Jan 30 13:44:17.950220 systemd[1]: Started sshd@26-172.24.4.90:22-172.24.4.1:57724.service - OpenSSH per-connection server daemon (172.24.4.1:57724).
Jan 30 13:44:18.001259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff6a627fc976b66213d37bfa90743e083e988e11db48e9850ad25ba65546f2e8-rootfs.mount: Deactivated successfully.
Jan 30 13:44:18.136034 containerd[1462]: time="2025-01-30T13:44:18.135907038Z" level=info msg="CreateContainer within sandbox \"4f0aac86bcd97037927d1355dadcee7fca28c4da9ef271f18c245c8c188cd8c4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 13:44:18.194712 containerd[1462]: time="2025-01-30T13:44:18.194472062Z" level=info msg="CreateContainer within sandbox \"4f0aac86bcd97037927d1355dadcee7fca28c4da9ef271f18c245c8c188cd8c4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1de6282d37d03f985ec77f54a30036565333f7d7843a3deaf2747c84ab2ada8d\""
Jan 30 13:44:18.197036 containerd[1462]: time="2025-01-30T13:44:18.196873519Z" level=info msg="StartContainer for \"1de6282d37d03f985ec77f54a30036565333f7d7843a3deaf2747c84ab2ada8d\""
Jan 30 13:44:18.251118 systemd[1]: Started cri-containerd-1de6282d37d03f985ec77f54a30036565333f7d7843a3deaf2747c84ab2ada8d.scope - libcontainer container 1de6282d37d03f985ec77f54a30036565333f7d7843a3deaf2747c84ab2ada8d.
Jan 30 13:44:18.282613 systemd[1]: cri-containerd-1de6282d37d03f985ec77f54a30036565333f7d7843a3deaf2747c84ab2ada8d.scope: Deactivated successfully.
Jan 30 13:44:18.285020 containerd[1462]: time="2025-01-30T13:44:18.284931567Z" level=info msg="StartContainer for \"1de6282d37d03f985ec77f54a30036565333f7d7843a3deaf2747c84ab2ada8d\" returns successfully"
Jan 30 13:44:18.314282 containerd[1462]: time="2025-01-30T13:44:18.314089369Z" level=info msg="shim disconnected" id=1de6282d37d03f985ec77f54a30036565333f7d7843a3deaf2747c84ab2ada8d namespace=k8s.io
Jan 30 13:44:18.314282 containerd[1462]: time="2025-01-30T13:44:18.314158850Z" level=warning msg="cleaning up after shim disconnected" id=1de6282d37d03f985ec77f54a30036565333f7d7843a3deaf2747c84ab2ada8d namespace=k8s.io
Jan 30 13:44:18.314282 containerd[1462]: time="2025-01-30T13:44:18.314174138Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:44:19.001100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1de6282d37d03f985ec77f54a30036565333f7d7843a3deaf2747c84ab2ada8d-rootfs.mount: Deactivated successfully.
Jan 30 13:44:19.174882 containerd[1462]: time="2025-01-30T13:44:19.174623254Z" level=info msg="CreateContainer within sandbox \"4f0aac86bcd97037927d1355dadcee7fca28c4da9ef271f18c245c8c188cd8c4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 13:44:19.209583 containerd[1462]: time="2025-01-30T13:44:19.209487781Z" level=info msg="CreateContainer within sandbox \"4f0aac86bcd97037927d1355dadcee7fca28c4da9ef271f18c245c8c188cd8c4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"70fb854d2f33c2e8702a3477c919ad3d9dccd345f0fe113e38a95551dda2cdfd\""
Jan 30 13:44:19.210135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2266861367.mount: Deactivated successfully.
Jan 30 13:44:19.213094 containerd[1462]: time="2025-01-30T13:44:19.211892035Z" level=info msg="StartContainer for \"70fb854d2f33c2e8702a3477c919ad3d9dccd345f0fe113e38a95551dda2cdfd\""
Jan 30 13:44:19.250130 systemd[1]: Started cri-containerd-70fb854d2f33c2e8702a3477c919ad3d9dccd345f0fe113e38a95551dda2cdfd.scope - libcontainer container 70fb854d2f33c2e8702a3477c919ad3d9dccd345f0fe113e38a95551dda2cdfd.
Jan 30 13:44:19.274559 systemd[1]: cri-containerd-70fb854d2f33c2e8702a3477c919ad3d9dccd345f0fe113e38a95551dda2cdfd.scope: Deactivated successfully.
Jan 30 13:44:19.280280 containerd[1462]: time="2025-01-30T13:44:19.280248036Z" level=info msg="StartContainer for \"70fb854d2f33c2e8702a3477c919ad3d9dccd345f0fe113e38a95551dda2cdfd\" returns successfully"
Jan 30 13:44:19.307662 containerd[1462]: time="2025-01-30T13:44:19.307590181Z" level=info msg="shim disconnected" id=70fb854d2f33c2e8702a3477c919ad3d9dccd345f0fe113e38a95551dda2cdfd namespace=k8s.io
Jan 30 13:44:19.307662 containerd[1462]: time="2025-01-30T13:44:19.307641236Z" level=warning msg="cleaning up after shim disconnected" id=70fb854d2f33c2e8702a3477c919ad3d9dccd345f0fe113e38a95551dda2cdfd namespace=k8s.io
Jan 30 13:44:19.307662 containerd[1462]: time="2025-01-30T13:44:19.307651125Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:44:19.650769 sshd[4571]: Accepted publickey for core from 172.24.4.1 port 57724 ssh2: RSA SHA256:T+qL1zJopt6fawD7qVtIgs/s5DTL+bqa4t+0TaT0uww
Jan 30 13:44:19.653480 sshd-session[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:44:19.664733 systemd-logind[1440]: New session 29 of user core.
Jan 30 13:44:19.672259 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 30 13:44:19.998478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70fb854d2f33c2e8702a3477c919ad3d9dccd345f0fe113e38a95551dda2cdfd-rootfs.mount: Deactivated successfully.
Jan 30 13:44:20.175012 containerd[1462]: time="2025-01-30T13:44:20.174934312Z" level=info msg="CreateContainer within sandbox \"4f0aac86bcd97037927d1355dadcee7fca28c4da9ef271f18c245c8c188cd8c4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 13:44:20.215249 containerd[1462]: time="2025-01-30T13:44:20.215174090Z" level=info msg="CreateContainer within sandbox \"4f0aac86bcd97037927d1355dadcee7fca28c4da9ef271f18c245c8c188cd8c4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"55fc7993411fdaa055e392e27323c5b55d70e441b401078ed823bb927dba5caa\""
Jan 30 13:44:20.218797 containerd[1462]: time="2025-01-30T13:44:20.217454230Z" level=info msg="StartContainer for \"55fc7993411fdaa055e392e27323c5b55d70e441b401078ed823bb927dba5caa\""
Jan 30 13:44:20.261196 systemd[1]: Started cri-containerd-55fc7993411fdaa055e392e27323c5b55d70e441b401078ed823bb927dba5caa.scope - libcontainer container 55fc7993411fdaa055e392e27323c5b55d70e441b401078ed823bb927dba5caa.
Jan 30 13:44:20.307206 containerd[1462]: time="2025-01-30T13:44:20.307127373Z" level=info msg="StartContainer for \"55fc7993411fdaa055e392e27323c5b55d70e441b401078ed823bb927dba5caa\" returns successfully"
Jan 30 13:44:20.708103 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:44:20.763286 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jan 30 13:44:21.001421 systemd[1]: run-containerd-runc-k8s.io-55fc7993411fdaa055e392e27323c5b55d70e441b401078ed823bb927dba5caa-runc.VLIPq6.mount: Deactivated successfully.
Jan 30 13:44:21.215375 kubelet[2645]: I0130 13:44:21.213227 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9ddnn" podStartSLOduration=6.213192938 podStartE2EDuration="6.213192938s" podCreationTimestamp="2025-01-30 13:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:44:21.21299721 +0000 UTC m=+170.843327930" watchObservedRunningTime="2025-01-30 13:44:21.213192938 +0000 UTC m=+170.843523618"
Jan 30 13:44:23.964600 systemd-networkd[1367]: lxc_health: Link UP
Jan 30 13:44:23.971169 systemd-networkd[1367]: lxc_health: Gained carrier
Jan 30 13:44:25.430225 systemd-networkd[1367]: lxc_health: Gained IPv6LL
Jan 30 13:44:28.980635 systemd[1]: run-containerd-runc-k8s.io-55fc7993411fdaa055e392e27323c5b55d70e441b401078ed823bb927dba5caa-runc.BnPVXP.mount: Deactivated successfully.
Jan 30 13:44:30.485718 containerd[1462]: time="2025-01-30T13:44:30.485594738Z" level=info msg="StopPodSandbox for \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\""
Jan 30 13:44:30.490758 containerd[1462]: time="2025-01-30T13:44:30.485816054Z" level=info msg="TearDown network for sandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" successfully"
Jan 30 13:44:30.490758 containerd[1462]: time="2025-01-30T13:44:30.485851069Z" level=info msg="StopPodSandbox for \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" returns successfully"
Jan 30 13:44:30.490758 containerd[1462]: time="2025-01-30T13:44:30.486640083Z" level=info msg="RemovePodSandbox for \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\""
Jan 30 13:44:30.490758 containerd[1462]: time="2025-01-30T13:44:30.486697993Z" level=info msg="Forcibly stopping sandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\""
Jan 30 13:44:30.490758 containerd[1462]: time="2025-01-30T13:44:30.486798802Z" level=info msg="TearDown network for sandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" successfully"
Jan 30 13:44:30.497116 containerd[1462]: time="2025-01-30T13:44:30.496712156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
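The pod_startup_latency_tracker line above reports podStartSLOduration=6.213192938s, which is simply observedRunningTime minus podCreationTimestamp (the pull timestamps are the zero time because no image pull was needed). The arithmetic, reproduced in Go:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05"
	created, _ := time.Parse(layout, "2025-01-30 13:44:15")
	// Go's Parse accepts extra fractional seconds even when the layout omits them.
	running, _ := time.Parse(layout, "2025-01-30 13:44:21.21299721")
	fmt.Println("podStartSLOduration =", running.Sub(created)) // 6.21299721s
}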
Jan 30 13:44:30.497116 containerd[1462]: time="2025-01-30T13:44:30.496875724Z" level=info msg="RemovePodSandbox \"89dc533ab2f109ab3c0a60b391837a283685c2c0c25d78ae52a23a84f4de6fa0\" returns successfully"
Jan 30 13:44:30.499537 containerd[1462]: time="2025-01-30T13:44:30.499246493Z" level=info msg="StopPodSandbox for \"1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239\""
Jan 30 13:44:30.499537 containerd[1462]: time="2025-01-30T13:44:30.499413767Z" level=info msg="TearDown network for sandbox \"1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239\" successfully"
Jan 30 13:44:30.499537 containerd[1462]: time="2025-01-30T13:44:30.499442161Z" level=info msg="StopPodSandbox for \"1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239\" returns successfully"
Jan 30 13:44:30.500730 containerd[1462]: time="2025-01-30T13:44:30.500524917Z" level=info msg="RemovePodSandbox for \"1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239\""
Jan 30 13:44:30.500730 containerd[1462]: time="2025-01-30T13:44:30.500565273Z" level=info msg="Forcibly stopping sandbox \"1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239\""
Jan 30 13:44:30.500882 containerd[1462]: time="2025-01-30T13:44:30.500666934Z" level=info msg="TearDown network for sandbox \"1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239\" successfully"
Jan 30 13:44:30.506096 containerd[1462]: time="2025-01-30T13:44:30.506009222Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 13:44:30.506461 containerd[1462]: time="2025-01-30T13:44:30.506105393Z" level=info msg="RemovePodSandbox \"1206edb4b4997d13cd81489bf7c6270ad77c46b494f2f53c180fd328beb45239\" returns successfully"
Jan 30 13:44:31.488231 sshd[4687]: Connection closed by 172.24.4.1 port 57724
Jan 30 13:44:31.489254 sshd-session[4571]: pam_unix(sshd:session): session closed for user core
Jan 30 13:44:31.495041 systemd[1]: sshd@26-172.24.4.90:22-172.24.4.1:57724.service: Deactivated successfully.
Jan 30 13:44:31.498893 systemd[1]: session-29.scope: Deactivated successfully.
Jan 30 13:44:31.502873 systemd-logind[1440]: Session 29 logged out. Waiting for processes to exit.
Jan 30 13:44:31.505381 systemd-logind[1440]: Removed session 29.