Jan 30 15:38:12.931079 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 15:38:12.931170 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 15:38:12.931194 kernel: BIOS-provided physical RAM map:
Jan 30 15:38:12.931212 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 15:38:12.931228 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 15:38:12.931249 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 15:38:12.931269 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 30 15:38:12.931286 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 30 15:38:12.931303 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 15:38:12.931319 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 15:38:12.931336 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 30 15:38:12.931353 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 15:38:12.931370 kernel: NX (Execute Disable) protection: active
Jan 30 15:38:12.931387 kernel: APIC: Static calls initialized
Jan 30 15:38:12.931413 kernel: SMBIOS 3.0.0 present.
Jan 30 15:38:12.931431 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 30 15:38:12.931449 kernel: Hypervisor detected: KVM
Jan 30 15:38:12.931467 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 15:38:12.931484 kernel: kvm-clock: using sched offset of 3370025307 cycles
Jan 30 15:38:12.931506 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 15:38:12.931526 kernel: tsc: Detected 1996.249 MHz processor
Jan 30 15:38:12.931544 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 15:38:12.931563 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 15:38:12.931582 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 30 15:38:12.931601 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 15:38:12.931619 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 15:38:12.931637 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 30 15:38:12.931656 kernel: ACPI: Early table checksum verification disabled
Jan 30 15:38:12.931678 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 30 15:38:12.931696 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:38:12.931715 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:38:12.931733 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:38:12.931751 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 30 15:38:12.931769 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:38:12.931787 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 15:38:12.931805 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 30 15:38:12.931824 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 30 15:38:12.931846 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 30 15:38:12.931864 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 30 15:38:12.931882 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 30 15:38:12.931907 kernel: No NUMA configuration found
Jan 30 15:38:12.931926 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 30 15:38:12.931945 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jan 30 15:38:12.931968 kernel: Zone ranges:
Jan 30 15:38:12.931988 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 15:38:12.932007 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 15:38:12.932026 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 30 15:38:12.932045 kernel: Movable zone start for each node
Jan 30 15:38:12.932064 kernel: Early memory node ranges
Jan 30 15:38:12.932083 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 15:38:12.932249 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 30 15:38:12.932283 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 30 15:38:12.932301 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 30 15:38:12.932320 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 15:38:12.932338 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 15:38:12.932356 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 30 15:38:12.932374 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 15:38:12.932392 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 15:38:12.932410 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 15:38:12.932428 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 15:38:12.932449 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 15:38:12.932468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 15:38:12.932486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 15:38:12.932504 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 15:38:12.932521 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 15:38:12.932539 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 15:38:12.932557 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 15:38:12.932575 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 30 15:38:12.932593 kernel: Booting paravirtualized kernel on KVM
Jan 30 15:38:12.932615 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 15:38:12.932633 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 15:38:12.932651 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 15:38:12.932669 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 15:38:12.932686 kernel: pcpu-alloc: [0] 0 1
Jan 30 15:38:12.932704 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 15:38:12.932725 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 15:38:12.932745 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 15:38:12.932767 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 15:38:12.932785 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 15:38:12.932803 kernel: Fallback order for Node 0: 0
Jan 30 15:38:12.932821 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 30 15:38:12.932839 kernel: Policy zone: Normal
Jan 30 15:38:12.932857 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 15:38:12.932875 kernel: software IO TLB: area num 2.
Jan 30 15:38:12.932893 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 227308K reserved, 0K cma-reserved)
Jan 30 15:38:12.932919 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 15:38:12.932951 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 15:38:12.932976 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 15:38:12.933004 kernel: Dynamic Preempt: voluntary
Jan 30 15:38:12.933033 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 15:38:12.933066 kernel: rcu: RCU event tracing is enabled.
Jan 30 15:38:12.933097 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 15:38:12.933152 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 15:38:12.933173 kernel: Rude variant of Tasks RCU enabled.
Jan 30 15:38:12.933192 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 15:38:12.933211 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 15:38:12.933238 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 15:38:12.933257 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 15:38:12.933276 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 15:38:12.933295 kernel: Console: colour VGA+ 80x25
Jan 30 15:38:12.933314 kernel: printk: console [tty0] enabled
Jan 30 15:38:12.933334 kernel: printk: console [ttyS0] enabled
Jan 30 15:38:12.933353 kernel: ACPI: Core revision 20230628
Jan 30 15:38:12.933372 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 15:38:12.933391 kernel: x2apic enabled
Jan 30 15:38:12.933415 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 15:38:12.933435 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 15:38:12.933454 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 15:38:12.933473 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 30 15:38:12.933493 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 15:38:12.933512 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 15:38:12.933531 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 15:38:12.933551 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 15:38:12.933570 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 15:38:12.933594 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 15:38:12.933613 kernel: Speculative Store Bypass: Vulnerable
Jan 30 15:38:12.933632 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 30 15:38:12.933652 kernel: Freeing SMP alternatives memory: 32K
Jan 30 15:38:12.933683 kernel: pid_max: default: 32768 minimum: 301
Jan 30 15:38:12.933707 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 15:38:12.933728 kernel: landlock: Up and running.
Jan 30 15:38:12.933748 kernel: SELinux: Initializing.
Jan 30 15:38:12.933768 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 15:38:12.933789 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 15:38:12.933810 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 30 15:38:12.933835 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 15:38:12.933856 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 15:38:12.933877 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 15:38:12.933897 kernel: Performance Events: AMD PMU driver.
Jan 30 15:38:12.933918 kernel: ... version: 0
Jan 30 15:38:12.933942 kernel: ... bit width: 48
Jan 30 15:38:12.933962 kernel: ... generic registers: 4
Jan 30 15:38:12.933983 kernel: ... value mask: 0000ffffffffffff
Jan 30 15:38:12.934003 kernel: ... max period: 00007fffffffffff
Jan 30 15:38:12.934024 kernel: ... fixed-purpose events: 0
Jan 30 15:38:12.934044 kernel: ... event mask: 000000000000000f
Jan 30 15:38:12.934064 kernel: signal: max sigframe size: 1440
Jan 30 15:38:12.934085 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 15:38:12.934162 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 15:38:12.934190 kernel: smp: Bringing up secondary CPUs ...
Jan 30 15:38:12.934211 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 15:38:12.934231 kernel: .... node #0, CPUs: #1
Jan 30 15:38:12.934251 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 15:38:12.934272 kernel: smpboot: Max logical packages: 2
Jan 30 15:38:12.934292 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 30 15:38:12.934313 kernel: devtmpfs: initialized
Jan 30 15:38:12.934333 kernel: x86/mm: Memory block size: 128MB
Jan 30 15:38:12.934353 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 15:38:12.934374 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 15:38:12.934398 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 15:38:12.934419 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 15:38:12.934439 kernel: audit: initializing netlink subsys (disabled)
Jan 30 15:38:12.934489 kernel: audit: type=2000 audit(1738251491.900:1): state=initialized audit_enabled=0 res=1
Jan 30 15:38:12.934509 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 15:38:12.934529 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 15:38:12.934550 kernel: cpuidle: using governor menu
Jan 30 15:38:12.934570 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 15:38:12.934590 kernel: dca service started, version 1.12.1
Jan 30 15:38:12.934616 kernel: PCI: Using configuration type 1 for base access
Jan 30 15:38:12.934636 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 15:38:12.934658 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 15:38:12.934678 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 15:38:12.934698 kernel: ACPI: Added _OSI(Module Device)
Jan 30 15:38:12.934719 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 15:38:12.934739 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 15:38:12.934759 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 15:38:12.934780 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 15:38:12.934804 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 15:38:12.934825 kernel: ACPI: Interpreter enabled
Jan 30 15:38:12.934845 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 15:38:12.934865 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 15:38:12.934886 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 15:38:12.934906 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 15:38:12.934926 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 30 15:38:12.934947 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 15:38:12.935297 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 15:38:12.935550 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 15:38:12.935765 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 15:38:12.935797 kernel: acpiphp: Slot [3] registered
Jan 30 15:38:12.935818 kernel: acpiphp: Slot [4] registered
Jan 30 15:38:12.935838 kernel: acpiphp: Slot [5] registered
Jan 30 15:38:12.935858 kernel: acpiphp: Slot [6] registered
Jan 30 15:38:12.935878 kernel: acpiphp: Slot [7] registered
Jan 30 15:38:12.935906 kernel: acpiphp: Slot [8] registered
Jan 30 15:38:12.935926 kernel: acpiphp: Slot [9] registered
Jan 30 15:38:12.935946 kernel: acpiphp: Slot [10] registered
Jan 30 15:38:12.935966 kernel: acpiphp: Slot [11] registered
Jan 30 15:38:12.935986 kernel: acpiphp: Slot [12] registered
Jan 30 15:38:12.936006 kernel: acpiphp: Slot [13] registered
Jan 30 15:38:12.936026 kernel: acpiphp: Slot [14] registered
Jan 30 15:38:12.936046 kernel: acpiphp: Slot [15] registered
Jan 30 15:38:12.936066 kernel: acpiphp: Slot [16] registered
Jan 30 15:38:12.936090 kernel: acpiphp: Slot [17] registered
Jan 30 15:38:12.936178 kernel: acpiphp: Slot [18] registered
Jan 30 15:38:12.936201 kernel: acpiphp: Slot [19] registered
Jan 30 15:38:12.936221 kernel: acpiphp: Slot [20] registered
Jan 30 15:38:12.936241 kernel: acpiphp: Slot [21] registered
Jan 30 15:38:12.936261 kernel: acpiphp: Slot [22] registered
Jan 30 15:38:12.936280 kernel: acpiphp: Slot [23] registered
Jan 30 15:38:12.936301 kernel: acpiphp: Slot [24] registered
Jan 30 15:38:12.936321 kernel: acpiphp: Slot [25] registered
Jan 30 15:38:12.936341 kernel: acpiphp: Slot [26] registered
Jan 30 15:38:12.936367 kernel: acpiphp: Slot [27] registered
Jan 30 15:38:12.936387 kernel: acpiphp: Slot [28] registered
Jan 30 15:38:12.936408 kernel: acpiphp: Slot [29] registered
Jan 30 15:38:12.936428 kernel: acpiphp: Slot [30] registered
Jan 30 15:38:12.936448 kernel: acpiphp: Slot [31] registered
Jan 30 15:38:12.936467 kernel: PCI host bridge to bus 0000:00
Jan 30 15:38:12.936692 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 15:38:12.936887 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 15:38:12.938348 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 15:38:12.938620 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 15:38:12.939160 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 30 15:38:12.939366 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 15:38:12.939616 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 15:38:12.939854 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 15:38:12.940084 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 30 15:38:12.942368 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 30 15:38:12.942649 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 30 15:38:12.942880 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 30 15:38:12.944332 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 30 15:38:12.944565 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 30 15:38:12.944801 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 15:38:12.945043 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 30 15:38:12.946388 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 30 15:38:12.946685 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 30 15:38:12.946905 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 30 15:38:12.948341 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 30 15:38:12.948665 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 30 15:38:12.948976 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 30 15:38:12.950373 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 15:38:12.950713 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 15:38:12.950994 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 30 15:38:12.952372 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 30 15:38:12.952632 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 30 15:38:12.952896 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 30 15:38:12.953249 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 30 15:38:12.953564 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 15:38:12.953802 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 30 15:38:12.954035 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 30 15:38:12.956375 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 30 15:38:12.956591 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 30 15:38:12.956795 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 30 15:38:12.957009 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 15:38:12.958095 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 30 15:38:12.958374 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 30 15:38:12.958616 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 30 15:38:12.958650 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 15:38:12.958673 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 15:38:12.958694 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 15:38:12.958715 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 15:38:12.958737 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 15:38:12.958768 kernel: iommu: Default domain type: Translated
Jan 30 15:38:12.958789 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 15:38:12.958810 kernel: PCI: Using ACPI for IRQ routing
Jan 30 15:38:12.958830 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 15:38:12.958851 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 15:38:12.958871 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 30 15:38:12.959083 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 30 15:38:12.960344 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 30 15:38:12.960536 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 15:38:12.960563 kernel: vgaarb: loaded
Jan 30 15:38:12.960581 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 15:38:12.960598 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 15:38:12.960616 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 15:38:12.960633 kernel: pnp: PnP ACPI init
Jan 30 15:38:12.960805 kernel: pnp 00:03: [dma 2]
Jan 30 15:38:12.960835 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 15:38:12.960853 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 15:38:12.960876 kernel: NET: Registered PF_INET protocol family
Jan 30 15:38:12.960892 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 15:38:12.960908 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 15:38:12.960925 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 15:38:12.960941 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 15:38:12.960957 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 15:38:12.960973 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 15:38:12.960989 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 15:38:12.961005 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 15:38:12.961024 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 15:38:12.961040 kernel: NET: Registered PF_XDP protocol family
Jan 30 15:38:12.961244 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 15:38:12.961392 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 15:38:12.961535 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 15:38:12.961679 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 30 15:38:12.961822 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 30 15:38:12.961990 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 30 15:38:12.965182 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 15:38:12.965200 kernel: PCI: CLS 0 bytes, default 64
Jan 30 15:38:12.965209 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 15:38:12.965219 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 30 15:38:12.965228 kernel: Initialise system trusted keyrings
Jan 30 15:38:12.965237 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 15:38:12.965246 kernel: Key type asymmetric registered
Jan 30 15:38:12.965255 kernel: Asymmetric key parser 'x509' registered
Jan 30 15:38:12.965268 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 15:38:12.965277 kernel: io scheduler mq-deadline registered
Jan 30 15:38:12.965285 kernel: io scheduler kyber registered
Jan 30 15:38:12.965294 kernel: io scheduler bfq registered
Jan 30 15:38:12.965303 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 15:38:12.965312 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 30 15:38:12.965321 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 15:38:12.965330 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 15:38:12.965340 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 15:38:12.965351 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 15:38:12.965360 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 15:38:12.965369 kernel: random: crng init done
Jan 30 15:38:12.965377 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 15:38:12.965386 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 15:38:12.965396 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 15:38:12.965494 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 15:38:12.965509 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 15:38:12.965590 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 15:38:12.965678 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T15:38:12 UTC (1738251492)
Jan 30 15:38:12.965761 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 30 15:38:12.965774 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 15:38:12.965783 kernel: NET: Registered PF_INET6 protocol family
Jan 30 15:38:12.965792 kernel: Segment Routing with IPv6
Jan 30 15:38:12.965801 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 15:38:12.965810 kernel: NET: Registered PF_PACKET protocol family
Jan 30 15:38:12.965818 kernel: Key type dns_resolver registered
Jan 30 15:38:12.965830 kernel: IPI shorthand broadcast: enabled
Jan 30 15:38:12.965839 kernel: sched_clock: Marking stable (982007342, 170721626)->(1198978093, -46249125)
Jan 30 15:38:12.965848 kernel: registered taskstats version 1
Jan 30 15:38:12.965857 kernel: Loading compiled-in X.509 certificates
Jan 30 15:38:12.965866 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 15:38:12.965876 kernel: Key type .fscrypt registered
Jan 30 15:38:12.965884 kernel: Key type fscrypt-provisioning registered
Jan 30 15:38:12.965893 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 15:38:12.965902 kernel: ima: Allocated hash algorithm: sha1
Jan 30 15:38:12.965912 kernel: ima: No architecture policies found
Jan 30 15:38:12.965921 kernel: clk: Disabling unused clocks
Jan 30 15:38:12.965929 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 15:38:12.965938 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 15:38:12.965947 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 15:38:12.965956 kernel: Run /init as init process
Jan 30 15:38:12.965964 kernel: with arguments:
Jan 30 15:38:12.965973 kernel: /init
Jan 30 15:38:12.965981 kernel: with environment:
Jan 30 15:38:12.965991 kernel: HOME=/
Jan 30 15:38:12.966000 kernel: TERM=linux
Jan 30 15:38:12.966008 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 15:38:12.966019 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 15:38:12.966031 systemd[1]: Detected virtualization kvm.
Jan 30 15:38:12.966041 systemd[1]: Detected architecture x86-64.
Jan 30 15:38:12.966050 systemd[1]: Running in initrd.
Jan 30 15:38:12.966061 systemd[1]: No hostname configured, using default hostname.
Jan 30 15:38:12.966070 systemd[1]: Hostname set to .
Jan 30 15:38:12.966080 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 15:38:12.966089 systemd[1]: Queued start job for default target initrd.target.
Jan 30 15:38:12.967132 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 15:38:12.967148 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 15:38:12.967160 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 15:38:12.967181 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 15:38:12.967193 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 15:38:12.967204 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 15:38:12.967216 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 15:38:12.967227 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 15:38:12.967238 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 15:38:12.967250 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 15:38:12.967261 systemd[1]: Reached target paths.target - Path Units.
Jan 30 15:38:12.967271 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 15:38:12.967282 systemd[1]: Reached target swap.target - Swaps.
Jan 30 15:38:12.967292 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 15:38:12.967302 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 15:38:12.967312 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 15:38:12.967323 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 15:38:12.967336 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 15:38:12.967346 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 15:38:12.967356 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 15:38:12.967367 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 15:38:12.967377 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 15:38:12.967387 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 15:38:12.967398 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 15:38:12.967408 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 15:38:12.967418 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 15:38:12.967431 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 15:38:12.967441 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 15:38:12.967451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 15:38:12.967462 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 15:38:12.967494 systemd-journald[184]: Collecting audit messages is disabled.
Jan 30 15:38:12.967523 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 15:38:12.967534 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 15:38:12.967549 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 15:38:12.967561 systemd-journald[184]: Journal started
Jan 30 15:38:12.967584 systemd-journald[184]: Runtime Journal (/run/log/journal/2cd1b58b7b69413ea4b7c6c3ef27b0be) is 8.0M, max 78.3M, 70.3M free.
Jan 30 15:38:12.941740 systemd-modules-load[185]: Inserted module 'overlay'
Jan 30 15:38:13.015660 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 15:38:13.015690 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 15:38:13.015703 kernel: Bridge firewalling registered
Jan 30 15:38:12.984982 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 30 15:38:13.016628 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 15:38:13.017410 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 15:38:13.018646 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 15:38:13.028335 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 15:38:13.030949 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 15:38:13.036339 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 15:38:13.047349 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 15:38:13.060209 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 15:38:13.063854 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 15:38:13.065618 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 15:38:13.070291 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 15:38:13.071071 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 15:38:13.076256 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 15:38:13.092308 dracut-cmdline[216]: dracut-dracut-053
Jan 30 15:38:13.097642 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 15:38:13.122235 systemd-resolved[218]: Positive Trust Anchors:
Jan 30 15:38:13.122255 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 15:38:13.122298 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 15:38:13.125299 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jan 30 15:38:13.126262 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 15:38:13.132087 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 15:38:13.196134 kernel: SCSI subsystem initialized
Jan 30 15:38:13.207218 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 15:38:13.220157 kernel: iscsi: registered transport (tcp)
Jan 30 15:38:13.244354 kernel: iscsi: registered transport (qla4xxx)
Jan 30 15:38:13.244476 kernel: QLogic iSCSI HBA Driver
Jan 30 15:38:13.315665 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 15:38:13.323352 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 15:38:13.399302 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 15:38:13.399426 kernel: device-mapper: uevent: version 1.0.3
Jan 30 15:38:13.402993 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 15:38:13.470247 kernel: raid6: sse2x4 gen() 4961 MB/s
Jan 30 15:38:13.489215 kernel: raid6: sse2x2 gen() 6335 MB/s
Jan 30 15:38:13.508062 kernel: raid6: sse2x1 gen() 8636 MB/s
Jan 30 15:38:13.508156 kernel: raid6: using algorithm sse2x1 gen() 8636 MB/s
Jan 30 15:38:13.527140 kernel: raid6: .... xor() 6771 MB/s, rmw enabled
Jan 30 15:38:13.527201 kernel: raid6: using ssse3x2 recovery algorithm
Jan 30 15:38:13.550184 kernel: xor: measuring software checksum speed
Jan 30 15:38:13.550245 kernel: prefetch64-sse : 18248 MB/sec
Jan 30 15:38:13.552652 kernel: generic_sse : 13395 MB/sec
Jan 30 15:38:13.552712 kernel: xor: using function: prefetch64-sse (18248 MB/sec)
Jan 30 15:38:13.741166 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 15:38:13.754087 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 15:38:13.760265 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 15:38:13.781228 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jan 30 15:38:13.785637 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 15:38:13.793318 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 15:38:13.811934 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jan 30 15:38:13.842844 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 15:38:13.850277 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 15:38:13.899022 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 15:38:13.909432 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 15:38:13.942791 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 15:38:13.955262 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 15:38:13.956793 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 15:38:13.959825 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 15:38:13.967242 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 15:38:13.986403 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 15:38:14.008918 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 30 15:38:14.050325 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 30 15:38:14.050468 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 15:38:14.050483 kernel: GPT:17805311 != 20971519
Jan 30 15:38:14.050495 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 15:38:14.050506 kernel: GPT:17805311 != 20971519
Jan 30 15:38:14.050525 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 15:38:14.050537 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 15:38:14.050549 kernel: libata version 3.00 loaded.
Jan 30 15:38:14.050560 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 30 15:38:14.050700 kernel: scsi host0: ata_piix
Jan 30 15:38:14.050821 kernel: scsi host1: ata_piix
Jan 30 15:38:14.050934 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 30 15:38:14.050948 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 30 15:38:14.035790 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 15:38:14.035920 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 15:38:14.036989 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
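The GPT complaints above come from a disk image built for a smaller disk and then attached to a larger virtual volume: the backup ("Alternate") GPT header sits at LBA 17805311, while a 20971520-sector disk should keep it in the last sector, LBA 20971519. A minimal sketch of that arithmetic, using only the figures from the virtio_blk and GPT lines above:

```python
SECTOR_SIZE = 512  # virtio_blk reports 512-byte logical blocks

def expected_backup_header_lba(disk_bytes: int, sector_size: int = SECTOR_SIZE) -> int:
    """GPT places its backup header in the last addressable sector of the disk."""
    total_sectors = disk_bytes // sector_size
    return total_sectors - 1

# 10.0 GiB disk from the log: 20971520 sectors of 512 bytes
disk_bytes = 20971520 * SECTOR_SIZE
backup_lba = expected_backup_header_lba(disk_bytes)
recorded_lba = 17805311  # where the image's backup header actually is

print(backup_lba)                 # 20971519, matching "17805311 != 20971519"
print(backup_lba - recorded_lba)  # sectors of unclaimed space past the old header
```

The mismatch is harmless until the extra space is needed; tools such as `sgdisk -e` (or GNU Parted, as the kernel message suggests) relocate the backup structures to the true end of the disk.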
Jan 30 15:38:14.038030 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 15:38:14.038982 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 15:38:14.047060 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 15:38:14.054434 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 15:38:14.108973 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 15:38:14.116406 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 15:38:14.142860 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 15:38:14.246891 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (445)
Jan 30 15:38:14.266169 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (461)
Jan 30 15:38:14.284604 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 15:38:14.292337 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 15:38:14.296835 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 15:38:14.297426 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 15:38:14.303716 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 15:38:14.317260 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 15:38:14.334629 disk-uuid[510]: Primary Header is updated.
Jan 30 15:38:14.334629 disk-uuid[510]: Secondary Entries is updated.
Jan 30 15:38:14.334629 disk-uuid[510]: Secondary Header is updated.
Jan 30 15:38:14.345164 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 15:38:14.355124 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 15:38:15.371186 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 15:38:15.371922 disk-uuid[511]: The operation has completed successfully.
Jan 30 15:38:15.438271 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 15:38:15.438582 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 15:38:15.475268 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 15:38:15.482039 sh[524]: Success
Jan 30 15:38:15.500160 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 30 15:38:15.565976 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 15:38:15.573264 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 15:38:15.579929 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 15:38:15.623193 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 15:38:15.623293 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 15:38:15.627955 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 15:38:15.632983 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 15:38:15.636791 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 15:38:15.658772 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 15:38:15.659831 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 15:38:15.669407 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 15:38:15.674306 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
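disk-uuid.service above regenerates the GPT identifiers of the freshly provisioned image on first boot; the disk-uuid[510] lines report both the primary and the secondary (backup) GPT structures being rewritten. As a rough illustration only (not Flatcar's actual tool, which also handles on-disk GUID byte ordering), refreshing a GPT GUID amounts to generating a new random UUID:

```python
import uuid

def fresh_disk_guid() -> uuid.UUID:
    # A random (version-4) UUID, the same kind of identifier tools like
    # sgdisk -G generate when re-randomizing a cloned disk's GPT GUIDs.
    return uuid.uuid4()

guid = fresh_disk_guid()
print(guid)  # a new 128-bit identifier in 8-4-4-4-12 form
```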
Jan 30 15:38:15.698178 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 15:38:15.704165 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 15:38:15.704197 kernel: BTRFS info (device vda6): using free space tree
Jan 30 15:38:15.716258 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 15:38:15.727496 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 15:38:15.731674 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 15:38:15.747502 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 15:38:15.752418 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 15:38:15.800272 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 15:38:15.807260 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 15:38:15.845003 systemd-networkd[707]: lo: Link UP
Jan 30 15:38:15.845014 systemd-networkd[707]: lo: Gained carrier
Jan 30 15:38:15.846182 systemd-networkd[707]: Enumeration completed
Jan 30 15:38:15.846257 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 15:38:15.847241 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 15:38:15.847245 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 15:38:15.848318 systemd[1]: Reached target network.target - Network.
Jan 30 15:38:15.849033 systemd-networkd[707]: eth0: Link UP
Jan 30 15:38:15.849036 systemd-networkd[707]: eth0: Gained carrier
Jan 30 15:38:15.849043 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 15:38:15.860261 systemd-networkd[707]: eth0: DHCPv4 address 172.24.4.191/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 30 15:38:15.907542 ignition[641]: Ignition 2.19.0
Jan 30 15:38:15.907554 ignition[641]: Stage: fetch-offline
Jan 30 15:38:15.907600 ignition[641]: no configs at "/usr/lib/ignition/base.d"
Jan 30 15:38:15.907612 ignition[641]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 15:38:15.910341 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 15:38:15.907707 ignition[641]: parsed url from cmdline: ""
Jan 30 15:38:15.910893 systemd-resolved[218]: Detected conflict on linux IN A 172.24.4.191
Jan 30 15:38:15.907712 ignition[641]: no config URL provided
Jan 30 15:38:15.910902 systemd-resolved[218]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Jan 30 15:38:15.907717 ignition[641]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 15:38:15.916339 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 15:38:15.907726 ignition[641]: no config at "/usr/lib/ignition/user.ign"
Jan 30 15:38:15.907731 ignition[641]: failed to fetch config: resource requires networking
Jan 30 15:38:15.907927 ignition[641]: Ignition finished successfully
Jan 30 15:38:15.934503 ignition[716]: Ignition 2.19.0
Jan 30 15:38:15.934518 ignition[716]: Stage: fetch
Jan 30 15:38:15.934691 ignition[716]: no configs at "/usr/lib/ignition/base.d"
Jan 30 15:38:15.934704 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 15:38:15.934801 ignition[716]: parsed url from cmdline: ""
Jan 30 15:38:15.934805 ignition[716]: no config URL provided
Jan 30 15:38:15.934810 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 15:38:15.934819 ignition[716]: no config at "/usr/lib/ignition/user.ign"
Jan 30 15:38:15.935014 ignition[716]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
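The DHCPv4 lease logged at the start of this stretch (172.24.4.191/24 with gateway 172.24.4.1) can be sanity-checked with Python's standard-library ipaddress module; a small sketch using the values from the log:

```python
import ipaddress

# Address and prefix as reported by systemd-networkd for eth0
iface = ipaddress.ip_interface("172.24.4.191/24")
gateway = ipaddress.ip_address("172.24.4.1")

print(iface.network)             # 172.24.4.0/24
print(gateway in iface.network)  # True: the gateway is on-link
```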
Jan 30 15:38:15.935027 ignition[716]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 30 15:38:15.935034 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 30 15:38:16.119065 ignition[716]: GET result: OK
Jan 30 15:38:16.119508 ignition[716]: parsing config with SHA512: 10e90c8c9919f0cef121a5109bad1970fb6058d0e41cb228a1337e777da73bf18acdbd3559749c314d8899b19facf218e8c1569db43aad2a277bddee9e05d5bb
Jan 30 15:38:16.132462 unknown[716]: fetched base config from "system"
Jan 30 15:38:16.132539 unknown[716]: fetched base config from "system"
Jan 30 15:38:16.133765 ignition[716]: fetch: fetch complete
Jan 30 15:38:16.132623 unknown[716]: fetched user config from "openstack"
Jan 30 15:38:16.133779 ignition[716]: fetch: fetch passed
Jan 30 15:38:16.137357 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 15:38:16.133872 ignition[716]: Ignition finished successfully
Jan 30 15:38:16.148466 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 15:38:16.188463 ignition[722]: Ignition 2.19.0
Jan 30 15:38:16.188481 ignition[722]: Stage: kargs
Jan 30 15:38:16.188885 ignition[722]: no configs at "/usr/lib/ignition/base.d"
Jan 30 15:38:16.188914 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 15:38:16.193602 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 15:38:16.191251 ignition[722]: kargs: kargs passed
Jan 30 15:38:16.191359 ignition[722]: Ignition finished successfully
Jan 30 15:38:16.202490 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
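Before applying the user_data it fetched from the metadata service, Ignition logs the SHA-512 digest of the raw bytes (the "parsing config with SHA512: …" line above). The digest is a plain hash of the downloaded payload; a sketch of the same computation, with a stand-in payload rather than this instance's real user_data:

```python
import hashlib

def config_digest(raw: bytes) -> str:
    """Hex SHA-512 of a fetched config's raw bytes, as Ignition logs at parse time."""
    return hashlib.sha512(raw).hexdigest()

# Stand-in payload for illustration; the real digest in the log was computed
# over the actual user_data returned by 169.254.169.254.
digest = config_digest(b'{"ignition": {"version": "3.0.0"}}')
print(len(digest))  # 128 hex characters, the same width as the logged value
```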
Jan 30 15:38:16.232663 ignition[729]: Ignition 2.19.0
Jan 30 15:38:16.232691 ignition[729]: Stage: disks
Jan 30 15:38:16.233153 ignition[729]: no configs at "/usr/lib/ignition/base.d"
Jan 30 15:38:16.233183 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 15:38:16.235460 ignition[729]: disks: disks passed
Jan 30 15:38:16.235583 ignition[729]: Ignition finished successfully
Jan 30 15:38:16.237649 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 15:38:16.238461 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 15:38:16.240238 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 15:38:16.242508 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 15:38:16.244713 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 15:38:16.246602 systemd[1]: Reached target basic.target - Basic System.
Jan 30 15:38:16.255465 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 15:38:16.278676 systemd-fsck[737]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 30 15:38:16.299026 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 15:38:16.308387 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 15:38:16.475867 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 15:38:16.477295 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 15:38:16.478283 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 15:38:16.486320 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 15:38:16.490733 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
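The fsck summary above uses e2fsck's compact "used/total" notation: 14 of 1628000 inodes and 120691 of 1617920 blocks are in use on the freshly checked ROOT filesystem. A small parser for that line shape, fed the exact string from the log (the format assumption is e2fsck's, not something this log defines):

```python
def parse_fsck_summary(line: str) -> dict:
    """Split an e2fsck 'LABEL: clean, U/T files, U/T blocks' summary line."""
    label, rest = line.split(": clean, ", 1)
    files_part, blocks_part = rest.split(", ")
    used_files, total_files = map(int, files_part.split(" ")[0].split("/"))
    used_blocks, total_blocks = map(int, blocks_part.split(" ")[0].split("/"))
    return {
        "label": label,
        "files": (used_files, total_files),
        "blocks": (used_blocks, total_blocks),
    }

summary = parse_fsck_summary("ROOT: clean, 14/1628000 files, 120691/1617920 blocks")
used, total = summary["blocks"]
print(f"{used / total:.1%} of blocks in use")  # roughly 7.5%
```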
Jan 30 15:38:16.492559 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 15:38:16.494879 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 30 15:38:16.498618 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 15:38:16.498658 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 15:38:16.502581 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 15:38:16.510225 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (745)
Jan 30 15:38:16.531313 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 15:38:16.531375 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 15:38:16.531405 kernel: BTRFS info (device vda6): using free space tree
Jan 30 15:38:16.527330 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 15:38:16.543278 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 15:38:16.552820 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 15:38:16.643815 initrd-setup-root[771]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 15:38:16.653614 initrd-setup-root[780]: cut: /sysroot/etc/group: No such file or directory
Jan 30 15:38:16.663326 initrd-setup-root[787]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 15:38:16.668600 initrd-setup-root[795]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 15:38:16.782377 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 15:38:16.791245 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 15:38:16.795647 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 15:38:16.806586 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 15:38:16.806519 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 15:38:16.836645 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 15:38:16.840414 ignition[862]: INFO : Ignition 2.19.0
Jan 30 15:38:16.842225 ignition[862]: INFO : Stage: mount
Jan 30 15:38:16.842225 ignition[862]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 15:38:16.842225 ignition[862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 15:38:16.842225 ignition[862]: INFO : mount: mount passed
Jan 30 15:38:16.842225 ignition[862]: INFO : Ignition finished successfully
Jan 30 15:38:16.843300 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 15:38:17.463421 systemd-networkd[707]: eth0: Gained IPv6LL
Jan 30 15:38:23.750146 coreos-metadata[747]: Jan 30 15:38:23.749 WARN failed to locate config-drive, using the metadata service API instead
Jan 30 15:38:23.795434 coreos-metadata[747]: Jan 30 15:38:23.795 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 30 15:38:23.811029 coreos-metadata[747]: Jan 30 15:38:23.810 INFO Fetch successful
Jan 30 15:38:23.812702 coreos-metadata[747]: Jan 30 15:38:23.811 INFO wrote hostname ci-4081-3-0-e-11fb05fa14.novalocal to /sysroot/etc/hostname
Jan 30 15:38:23.816384 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 30 15:38:23.816623 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 30 15:38:23.827359 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 15:38:23.867651 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 15:38:23.885254 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (880)
Jan 30 15:38:23.893301 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 15:38:23.893398 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 15:38:23.897463 kernel: BTRFS info (device vda6): using free space tree
Jan 30 15:38:23.911239 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 15:38:23.916027 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 15:38:23.956172 ignition[898]: INFO : Ignition 2.19.0
Jan 30 15:38:23.956172 ignition[898]: INFO : Stage: files
Jan 30 15:38:23.956172 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 15:38:23.956172 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 15:38:23.961912 ignition[898]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 15:38:23.964794 ignition[898]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 15:38:23.964794 ignition[898]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 15:38:23.972526 ignition[898]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 15:38:23.973645 ignition[898]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 15:38:23.974874 unknown[898]: wrote ssh authorized keys file for user: core
Jan 30 15:38:23.975724 ignition[898]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 15:38:23.977933 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 15:38:23.979840 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 15:38:24.043524 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 15:38:24.792472 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 15:38:24.792472 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 15:38:24.797941 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 15:38:24.797941 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 15:38:24.797941 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 15:38:24.797941 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 15:38:24.797941 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 15:38:24.797941 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 15:38:24.797941 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 15:38:24.797941 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 15:38:24.797941 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 15:38:24.797941 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 15:38:24.797941 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 15:38:24.797941 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 15:38:24.797941 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 30 15:38:25.359181 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 15:38:26.898662 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 15:38:26.898662 ignition[898]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 15:38:26.903384 ignition[898]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 15:38:26.903384 ignition[898]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 15:38:26.903384 ignition[898]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 15:38:26.903384 ignition[898]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 15:38:26.903384 ignition[898]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 15:38:26.903384 ignition[898]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 15:38:26.903384 ignition[898]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 15:38:26.903384 ignition[898]: INFO : files: files passed
Jan 30 15:38:26.903384 ignition[898]: INFO : Ignition finished successfully
Jan 30 15:38:26.904401 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 15:38:26.913374 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 15:38:26.917372 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 15:38:26.919702 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 15:38:26.919817 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 15:38:26.933024 initrd-setup-root-after-ignition[926]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 15:38:26.933024 initrd-setup-root-after-ignition[926]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 15:38:26.938278 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 15:38:26.937176 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 15:38:26.939382 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 15:38:26.946315 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 15:38:26.984916 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 15:38:26.987456 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 15:38:26.990713 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 15:38:26.992684 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 15:38:26.995254 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 15:38:27.002354 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 15:38:27.016935 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 15:38:27.023358 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 15:38:27.037778 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 15:38:27.039032 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 15:38:27.040791 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 15:38:27.042318 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 15:38:27.042609 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 15:38:27.044575 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 15:38:27.046357 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 15:38:27.047779 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 15:38:27.049254 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 15:38:27.050811 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 15:38:27.052168 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 15:38:27.053340 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 15:38:27.054542 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 15:38:27.055842 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 15:38:27.056952 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 15:38:27.057923 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 15:38:27.058038 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 15:38:27.059323 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 15:38:27.060079 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 15:38:27.061233 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 15:38:27.063211 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 15:38:27.063928 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 15:38:27.064079 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 15:38:27.065520 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 15:38:27.065642 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 15:38:27.066393 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 15:38:27.066591 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 15:38:27.073346 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 15:38:27.078150 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 15:38:27.079595 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 15:38:27.079769 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 15:38:27.080864 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 15:38:27.080987 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 15:38:27.088425 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 15:38:27.089088 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 15:38:27.091266 ignition[950]: INFO : Ignition 2.19.0
Jan 30 15:38:27.091266 ignition[950]: INFO : Stage: umount
Jan 30 15:38:27.093370 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 15:38:27.093370 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 15:38:27.095785 ignition[950]: INFO : umount: umount passed
Jan 30 15:38:27.095785 ignition[950]: INFO : Ignition finished successfully
Jan 30 15:38:27.096907 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 15:38:27.097023 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 15:38:27.100344 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 15:38:27.100426 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 15:38:27.101014 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 15:38:27.101055 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 15:38:27.103799 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 15:38:27.103851 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 15:38:27.104970 systemd[1]: Stopped target network.target - Network.
Jan 30 15:38:27.107437 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 15:38:27.107486 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 15:38:27.109515 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 15:38:27.109958 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 15:38:27.114185 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 15:38:27.115736 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 15:38:27.116816 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 15:38:27.118024 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 15:38:27.118065 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 15:38:27.119136 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 15:38:27.119171 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 15:38:27.120318 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 15:38:27.120368 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 15:38:27.121526 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 15:38:27.121571 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 15:38:27.123031 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 15:38:27.125372 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 15:38:27.127068 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 15:38:27.129145 systemd-networkd[707]: eth0: DHCPv6 lease lost
Jan 30 15:38:27.130753 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 15:38:27.130868 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 15:38:27.131667 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 15:38:27.131700 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 15:38:27.140283 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 15:38:27.140805 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 15:38:27.140859 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 15:38:27.141571 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 15:38:27.144006 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 15:38:27.144143 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 15:38:27.148754 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 15:38:27.148826 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 15:38:27.154140 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 15:38:27.154195 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 15:38:27.154906 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 15:38:27.154950 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 15:38:27.158546 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 15:38:27.158704 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 15:38:27.160883 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 15:38:27.160974 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 15:38:27.162198 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 15:38:27.162251 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 15:38:27.163169 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 15:38:27.163204 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 15:38:27.164229 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 15:38:27.164274 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 15:38:27.165803 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 15:38:27.165844 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 15:38:27.167007 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 15:38:27.167048 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 15:38:27.177283 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 15:38:27.178520 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 15:38:27.178573 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 15:38:27.180685 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 15:38:27.180730 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 15:38:27.181306 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 15:38:27.181346 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 15:38:27.182652 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 15:38:27.182692 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 15:38:27.184328 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 15:38:27.184430 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 15:38:27.368632 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 15:38:27.368887 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 15:38:27.372769 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 15:38:27.375397 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 15:38:27.375593 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 15:38:27.385452 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 15:38:27.414275 systemd[1]: Switching root.
Jan 30 15:38:27.464277 systemd-journald[184]: Journal stopped
Jan 30 15:38:28.974561 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 30 15:38:28.974627 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 15:38:28.974649 kernel: SELinux: policy capability open_perms=1
Jan 30 15:38:28.974661 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 15:38:28.974674 kernel: SELinux: policy capability always_check_network=0
Jan 30 15:38:28.974686 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 15:38:28.974702 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 15:38:28.974713 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 15:38:28.974725 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 15:38:28.974737 kernel: audit: type=1403 audit(1738251507.844:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 15:38:28.974754 systemd[1]: Successfully loaded SELinux policy in 83.584ms.
Jan 30 15:38:28.974775 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.469ms.
Jan 30 15:38:28.974789 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 15:38:28.974802 systemd[1]: Detected virtualization kvm.
Jan 30 15:38:28.974817 systemd[1]: Detected architecture x86-64.
Jan 30 15:38:28.974830 systemd[1]: Detected first boot.
Jan 30 15:38:28.974843 systemd[1]: Hostname set to .
Jan 30 15:38:28.974856 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 15:38:28.974869 zram_generator::config[993]: No configuration found.
Jan 30 15:38:28.974886 systemd[1]: Populated /etc with preset unit settings.
Jan 30 15:38:28.974899 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 15:38:28.974912 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 15:38:28.974926 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 15:38:28.974940 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 15:38:28.974955 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 15:38:28.974967 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 15:38:28.974979 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 15:38:28.974991 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 15:38:28.975003 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 15:38:28.975015 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 15:38:28.975028 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 15:38:28.975042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 15:38:28.975055 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 15:38:28.975066 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 15:38:28.975078 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 15:38:28.975090 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 15:38:28.975185 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 15:38:28.975200 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 15:38:28.975212 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 15:38:28.975223 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 15:38:28.975239 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 15:38:28.975251 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 15:38:28.975263 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 15:38:28.975274 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 15:38:28.975286 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 15:38:28.975299 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 15:38:28.975313 systemd[1]: Reached target swap.target - Swaps.
Jan 30 15:38:28.975324 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 15:38:28.975337 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 15:38:28.975349 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 15:38:28.975361 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 15:38:28.975374 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 15:38:28.975386 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 15:38:28.975402 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 15:38:28.975413 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 15:38:28.975427 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 15:38:28.975439 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 15:38:28.975451 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 15:38:28.975463 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 15:38:28.975475 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 15:38:28.975487 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 15:38:28.975499 systemd[1]: Reached target machines.target - Containers.
Jan 30 15:38:28.975510 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 15:38:28.975522 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 15:38:28.975536 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 15:38:28.975548 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 15:38:28.975561 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 15:38:28.975572 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 15:38:28.975585 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 15:38:28.975596 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 15:38:28.975608 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 15:38:28.975620 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 15:38:28.975635 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 15:38:28.975646 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 15:38:28.975658 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 15:38:28.975670 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 15:38:28.975681 kernel: fuse: init (API version 7.39)
Jan 30 15:38:28.975692 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 15:38:28.975704 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 15:38:28.975716 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 15:38:28.975727 kernel: loop: module loaded
Jan 30 15:38:28.975740 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 15:38:28.975752 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 15:38:28.975764 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 15:38:28.975775 systemd[1]: Stopped verity-setup.service.
Jan 30 15:38:28.975787 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 15:38:28.975799 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 15:38:28.975826 systemd-journald[1089]: Collecting audit messages is disabled.
Jan 30 15:38:28.975850 kernel: ACPI: bus type drm_connector registered
Jan 30 15:38:28.975865 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 15:38:28.975877 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 15:38:28.975889 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 15:38:28.975902 systemd-journald[1089]: Journal started
Jan 30 15:38:28.975929 systemd-journald[1089]: Runtime Journal (/run/log/journal/2cd1b58b7b69413ea4b7c6c3ef27b0be) is 8.0M, max 78.3M, 70.3M free.
Jan 30 15:38:28.597284 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 15:38:28.619685 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 15:38:28.620044 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 15:38:28.978532 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 15:38:28.980706 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 15:38:28.981307 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 15:38:28.982038 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 15:38:28.982846 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 15:38:28.983681 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 15:38:28.983856 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 15:38:28.984640 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 15:38:28.984812 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 15:38:28.985854 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 15:38:28.986023 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 15:38:28.986763 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 15:38:28.986938 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 15:38:28.987834 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 15:38:28.988002 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 15:38:28.988722 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 15:38:28.988886 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 15:38:28.989764 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 15:38:28.990676 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 15:38:28.991529 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 15:38:29.001815 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 15:38:29.009288 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 15:38:29.016779 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 15:38:29.019179 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 15:38:29.019222 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 15:38:29.020909 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 15:38:29.031266 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 15:38:29.036389 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 15:38:29.037244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 15:38:29.042244 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 15:38:29.048259 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 15:38:29.048860 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 15:38:29.051390 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 15:38:29.052853 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 15:38:29.054708 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 15:38:29.061349 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 15:38:29.065785 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 15:38:29.070213 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 15:38:29.072014 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 15:38:29.074321 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 15:38:29.075169 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 15:38:29.088675 kernel: loop0: detected capacity change from 0 to 8
Jan 30 15:38:29.088394 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 15:38:29.093230 systemd-journald[1089]: Time spent on flushing to /var/log/journal/2cd1b58b7b69413ea4b7c6c3ef27b0be is 71.824ms for 951 entries.
Jan 30 15:38:29.093230 systemd-journald[1089]: System Journal (/var/log/journal/2cd1b58b7b69413ea4b7c6c3ef27b0be) is 8.0M, max 584.8M, 576.8M free.
Jan 30 15:38:29.221709 systemd-journald[1089]: Received client request to flush runtime journal.
Jan 30 15:38:29.221767 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 15:38:29.221791 kernel: loop1: detected capacity change from 0 to 140768
Jan 30 15:38:29.107626 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 15:38:29.109392 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 15:38:29.116349 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 15:38:29.125264 udevadm[1132]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 15:38:29.165235 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 15:38:29.225775 systemd-tmpfiles[1127]: ACLs are not supported, ignoring.
Jan 30 15:38:29.225795 systemd-tmpfiles[1127]: ACLs are not supported, ignoring.
Jan 30 15:38:29.227207 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 15:38:29.235042 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 15:38:29.235975 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 15:38:29.238305 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 15:38:29.249486 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 15:38:29.281634 kernel: loop2: detected capacity change from 0 to 205544
Jan 30 15:38:29.310027 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 15:38:29.317633 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 15:38:29.349534 kernel: loop3: detected capacity change from 0 to 142488
Jan 30 15:38:29.351640 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Jan 30 15:38:29.351663 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Jan 30 15:38:29.359225 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 15:38:29.424446 kernel: loop4: detected capacity change from 0 to 8
Jan 30 15:38:29.427186 kernel: loop5: detected capacity change from 0 to 140768
Jan 30 15:38:29.466201 kernel: loop6: detected capacity change from 0 to 205544
Jan 30 15:38:29.524995 kernel: loop7: detected capacity change from 0 to 142488
Jan 30 15:38:29.575780 (sd-merge)[1155]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 30 15:38:29.576472 (sd-merge)[1155]: Merged extensions into '/usr'.
Jan 30 15:38:29.583305 systemd[1]: Reloading requested from client PID 1126 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 15:38:29.583324 systemd[1]: Reloading...
Jan 30 15:38:29.690132 zram_generator::config[1178]: No configuration found.
Jan 30 15:38:29.929201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 15:38:29.991680 systemd[1]: Reloading finished in 407 ms.
Jan 30 15:38:30.013667 ldconfig[1121]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 15:38:30.024243 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 15:38:30.025120 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 15:38:30.025931 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 15:38:30.035267 systemd[1]: Starting ensure-sysext.service...
Jan 30 15:38:30.036831 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 15:38:30.039462 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 15:38:30.051170 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)...
Jan 30 15:38:30.051184 systemd[1]: Reloading...
Jan 30 15:38:30.083149 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 15:38:30.083520 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 15:38:30.084923 systemd-udevd[1241]: Using default interface naming scheme 'v255'.
Jan 30 15:38:30.085482 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 15:38:30.085830 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jan 30 15:38:30.085913 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jan 30 15:38:30.090795 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 15:38:30.090807 systemd-tmpfiles[1240]: Skipping /boot
Jan 30 15:38:30.105963 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 15:38:30.105976 systemd-tmpfiles[1240]: Skipping /boot
Jan 30 15:38:30.132150 zram_generator::config[1265]: No configuration found.
Jan 30 15:38:30.246580 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1287)
Jan 30 15:38:30.337172 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 30 15:38:30.352649 kernel: ACPI: button: Power Button [PWRF]
Jan 30 15:38:30.352727 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 30 15:38:30.368240 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 30 15:38:30.393080 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 15:38:30.422131 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 15:38:30.441557 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 30 15:38:30.441635 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 30 15:38:30.447686 kernel: Console: switching to colour dummy device 80x25
Jan 30 15:38:30.447731 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 30 15:38:30.447749 kernel: [drm] features: -context_init
Jan 30 15:38:30.449546 kernel: [drm] number of scanouts: 1
Jan 30 15:38:30.449580 kernel: [drm] number of cap sets: 0
Jan 30 15:38:30.452118 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 30 15:38:30.464726 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 30 15:38:30.464832 kernel: Console: switching to colour frame buffer device 160x50
Jan 30 15:38:30.469124 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 30 15:38:30.479976 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 30 15:38:30.480353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 15:38:30.483040 systemd[1]: Reloading finished in 431 ms.
Jan 30 15:38:30.493976 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 15:38:30.505645 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 15:38:30.533968 systemd[1]: Finished ensure-sysext.service.
Jan 30 15:38:30.535689 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 15:38:30.544162 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 15:38:30.551239 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 15:38:30.566451 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 15:38:30.566870 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 15:38:30.572799 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 15:38:30.585421 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 15:38:30.589635 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 15:38:30.604800 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 15:38:30.608724 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 15:38:30.608942 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 15:38:30.612071 lvm[1365]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 15:38:30.617531 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 15:38:30.625318 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 15:38:30.628899 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 15:38:30.632831 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 15:38:30.641348 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 15:38:30.644438 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 15:38:30.648279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 15:38:30.648368 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 15:38:30.649142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 15:38:30.649287 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:38:30.649596 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 15:38:30.649715 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 15:38:30.649977 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:38:30.650088 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:38:30.651913 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 15:38:30.652038 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 15:38:30.657090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 15:38:30.660249 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 15:38:30.661490 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 15:38:30.667714 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:38:30.678293 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 15:38:30.685000 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 15:38:30.701069 lvm[1390]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 15:38:30.702489 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 15:38:30.720133 augenrules[1395]: No rules Jan 30 15:38:30.723203 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 15:38:30.732431 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 30 15:38:30.735898 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 15:38:30.748506 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 15:38:30.760319 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 15:38:30.776790 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 15:38:30.785546 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 15:38:30.812712 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 15:38:30.816375 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 15:38:30.830340 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:38:30.872041 systemd-networkd[1377]: lo: Link UP Jan 30 15:38:30.872052 systemd-networkd[1377]: lo: Gained carrier Jan 30 15:38:30.873263 systemd-networkd[1377]: Enumeration completed Jan 30 15:38:30.873598 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 15:38:30.876561 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:38:30.876576 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:38:30.877348 systemd-networkd[1377]: eth0: Link UP Jan 30 15:38:30.877352 systemd-networkd[1377]: eth0: Gained carrier Jan 30 15:38:30.877366 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:38:30.883802 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 30 15:38:30.896197 systemd-networkd[1377]: eth0: DHCPv4 address 172.24.4.191/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 30 15:38:30.904184 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 15:38:30.904901 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 15:38:30.908712 systemd-resolved[1378]: Positive Trust Anchors: Jan 30 15:38:30.909072 systemd-resolved[1378]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 15:38:30.909195 systemd-resolved[1378]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 15:38:30.914026 systemd-resolved[1378]: Using system hostname 'ci-4081-3-0-e-11fb05fa14.novalocal'. Jan 30 15:38:30.915697 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 15:38:30.916299 systemd[1]: Reached target network.target - Network. Jan 30 15:38:30.916735 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:38:30.917187 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 15:38:30.917701 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 15:38:30.920214 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 15:38:30.921656 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Jan 30 15:38:30.922222 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 15:38:30.922864 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 15:38:30.925640 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 15:38:30.925700 systemd[1]: Reached target paths.target - Path Units. Jan 30 15:38:30.928071 systemd[1]: Reached target timers.target - Timer Units. Jan 30 15:38:30.928591 systemd-timesyncd[1380]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org). Jan 30 15:38:30.928643 systemd-timesyncd[1380]: Initial clock synchronization to Thu 2025-01-30 15:38:31.239154 UTC. Jan 30 15:38:30.931024 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 15:38:30.936868 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 15:38:30.944573 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 15:38:30.945935 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 15:38:30.949241 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 15:38:30.951503 systemd[1]: Reached target basic.target - Basic System. Jan 30 15:38:30.953768 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 15:38:30.953821 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 15:38:30.960211 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 15:38:30.964689 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 15:38:30.970277 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 15:38:30.974821 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jan 30 15:38:30.986307 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 15:38:30.987082 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 15:38:30.990071 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 15:38:31.004325 jq[1427]: false Jan 30 15:38:30.999231 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 15:38:31.013336 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 15:38:31.018449 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 15:38:31.030243 extend-filesystems[1428]: Found loop4 Jan 30 15:38:31.030243 extend-filesystems[1428]: Found loop5 Jan 30 15:38:31.030243 extend-filesystems[1428]: Found loop6 Jan 30 15:38:31.030243 extend-filesystems[1428]: Found loop7 Jan 30 15:38:31.030243 extend-filesystems[1428]: Found vda Jan 30 15:38:31.030243 extend-filesystems[1428]: Found vda1 Jan 30 15:38:31.030243 extend-filesystems[1428]: Found vda2 Jan 30 15:38:31.030243 extend-filesystems[1428]: Found vda3 Jan 30 15:38:31.030243 extend-filesystems[1428]: Found usr Jan 30 15:38:31.030243 extend-filesystems[1428]: Found vda4 Jan 30 15:38:31.030243 extend-filesystems[1428]: Found vda6 Jan 30 15:38:31.030243 extend-filesystems[1428]: Found vda7 Jan 30 15:38:31.030243 extend-filesystems[1428]: Found vda9 Jan 30 15:38:31.030243 extend-filesystems[1428]: Checking size of /dev/vda9 Jan 30 15:38:31.032406 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 15:38:31.039955 dbus-daemon[1424]: [system] SELinux support is enabled Jan 30 15:38:31.034877 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 30 15:38:31.038981 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 15:38:31.050283 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 15:38:31.060262 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 15:38:31.062929 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 15:38:31.074075 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 15:38:31.074595 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 15:38:31.074915 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 15:38:31.075060 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 15:38:31.075498 jq[1443]: true Jan 30 15:38:31.080762 extend-filesystems[1428]: Resized partition /dev/vda9 Jan 30 15:38:31.085752 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 15:38:31.085937 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 15:38:31.104980 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Jan 30 15:38:31.115497 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 15:38:31.115527 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 15:38:31.124909 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 30 15:38:31.125017 jq[1450]: true Jan 30 15:38:31.117915 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 30 15:38:31.117954 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 15:38:31.141066 update_engine[1440]: I20250130 15:38:31.139688 1440 main.cc:92] Flatcar Update Engine starting Jan 30 15:38:31.146872 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 30 15:38:31.189267 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1285) Jan 30 15:38:31.189355 tar[1449]: linux-amd64/helm Jan 30 15:38:31.189690 update_engine[1440]: I20250130 15:38:31.157922 1440 update_check_scheduler.cc:74] Next update check in 7m47s Jan 30 15:38:31.154197 systemd[1]: Started update-engine.service - Update Engine. Jan 30 15:38:31.162455 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 15:38:31.162636 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 15:38:31.196597 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 15:38:31.196597 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 15:38:31.196597 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 30 15:38:31.226800 extend-filesystems[1428]: Resized filesystem in /dev/vda9 Jan 30 15:38:31.196996 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 15:38:31.197269 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 15:38:31.232824 systemd-logind[1436]: New seat seat0. Jan 30 15:38:31.240540 systemd-logind[1436]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 15:38:31.240566 systemd-logind[1436]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 15:38:31.240793 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 30 15:38:31.261051 bash[1481]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:38:31.266733 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 15:38:31.288462 systemd[1]: Starting sshkeys.service... Jan 30 15:38:31.333976 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 15:38:31.347499 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 15:38:31.461123 locksmithd[1465]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 15:38:31.578160 containerd[1459]: time="2025-01-30T15:38:31.576586031Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 15:38:31.628865 containerd[1459]: time="2025-01-30T15:38:31.628814281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:38:31.636324 containerd[1459]: time="2025-01-30T15:38:31.634341317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:38:31.636324 containerd[1459]: time="2025-01-30T15:38:31.634382803Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 15:38:31.636324 containerd[1459]: time="2025-01-30T15:38:31.634402640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 15:38:31.636324 containerd[1459]: time="2025-01-30T15:38:31.634570758Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 30 15:38:31.636324 containerd[1459]: time="2025-01-30T15:38:31.634590201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 15:38:31.636324 containerd[1459]: time="2025-01-30T15:38:31.634649827Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:38:31.636324 containerd[1459]: time="2025-01-30T15:38:31.634665470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:38:31.636324 containerd[1459]: time="2025-01-30T15:38:31.634844444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:38:31.636324 containerd[1459]: time="2025-01-30T15:38:31.634864386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 15:38:31.636324 containerd[1459]: time="2025-01-30T15:38:31.634879861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:38:31.636324 containerd[1459]: time="2025-01-30T15:38:31.634891518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 15:38:31.636641 containerd[1459]: time="2025-01-30T15:38:31.634968620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:38:31.636641 containerd[1459]: time="2025-01-30T15:38:31.635187830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 15:38:31.636641 containerd[1459]: time="2025-01-30T15:38:31.635281814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:38:31.636641 containerd[1459]: time="2025-01-30T15:38:31.635297904Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 15:38:31.636641 containerd[1459]: time="2025-01-30T15:38:31.635374995Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 15:38:31.636641 containerd[1459]: time="2025-01-30T15:38:31.635425817Z" level=info msg="metadata content store policy set" policy=shared Jan 30 15:38:31.648700 containerd[1459]: time="2025-01-30T15:38:31.648672162Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 15:38:31.648846 containerd[1459]: time="2025-01-30T15:38:31.648825543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 15:38:31.649091 containerd[1459]: time="2025-01-30T15:38:31.649073188Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 15:38:31.649577 containerd[1459]: time="2025-01-30T15:38:31.649550076Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 15:38:31.649678 containerd[1459]: time="2025-01-30T15:38:31.649660014Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 15:38:31.650933 containerd[1459]: time="2025-01-30T15:38:31.650911665Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 30 15:38:31.652589 containerd[1459]: time="2025-01-30T15:38:31.651593370Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 15:38:31.652589 containerd[1459]: time="2025-01-30T15:38:31.651723218Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 15:38:31.652589 containerd[1459]: time="2025-01-30T15:38:31.651753516Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 15:38:31.652589 containerd[1459]: time="2025-01-30T15:38:31.651778766Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 15:38:31.652589 containerd[1459]: time="2025-01-30T15:38:31.651805118Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 15:38:31.652589 containerd[1459]: time="2025-01-30T15:38:31.651829441Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 15:38:31.652589 containerd[1459]: time="2025-01-30T15:38:31.651852099Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 15:38:31.652589 containerd[1459]: time="2025-01-30T15:38:31.651876703Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 15:38:31.652589 containerd[1459]: time="2025-01-30T15:38:31.651896239Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 15:38:31.652589 containerd[1459]: time="2025-01-30T15:38:31.651918824Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 30 15:38:31.652589 containerd[1459]: time="2025-01-30T15:38:31.651941554Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 15:38:31.652589 containerd[1459]: time="2025-01-30T15:38:31.651962089Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 15:38:31.652589 containerd[1459]: time="2025-01-30T15:38:31.651991418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.652589 containerd[1459]: time="2025-01-30T15:38:31.652014035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.652905 containerd[1459]: time="2025-01-30T15:38:31.652035507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.652905 containerd[1459]: time="2025-01-30T15:38:31.652059507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.652905 containerd[1459]: time="2025-01-30T15:38:31.652075931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.652905 containerd[1459]: time="2025-01-30T15:38:31.652113669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.655293 containerd[1459]: time="2025-01-30T15:38:31.655252288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.655344 containerd[1459]: time="2025-01-30T15:38:31.655311592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.655370 containerd[1459]: time="2025-01-30T15:38:31.655334468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 15:38:31.655395 containerd[1459]: time="2025-01-30T15:38:31.655362018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.655395 containerd[1459]: time="2025-01-30T15:38:31.655383926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.655437 containerd[1459]: time="2025-01-30T15:38:31.655404107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.655437 containerd[1459]: time="2025-01-30T15:38:31.655423955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.655484 containerd[1459]: time="2025-01-30T15:38:31.655450256Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 15:38:31.655510 containerd[1459]: time="2025-01-30T15:38:31.655483311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.655510 containerd[1459]: time="2025-01-30T15:38:31.655504751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.655552 containerd[1459]: time="2025-01-30T15:38:31.655522715Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 15:38:31.655601 containerd[1459]: time="2025-01-30T15:38:31.655574036Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 15:38:31.655634 containerd[1459]: time="2025-01-30T15:38:31.655605833Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 15:38:31.655634 containerd[1459]: time="2025-01-30T15:38:31.655624962Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 15:38:31.655686 containerd[1459]: time="2025-01-30T15:38:31.655644810Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 15:38:31.655686 containerd[1459]: time="2025-01-30T15:38:31.655662118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 15:38:31.655686 containerd[1459]: time="2025-01-30T15:38:31.655678834Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 15:38:31.655754 containerd[1459]: time="2025-01-30T15:38:31.655695944Z" level=info msg="NRI interface is disabled by configuration." Jan 30 15:38:31.655754 containerd[1459]: time="2025-01-30T15:38:31.655714001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 15:38:31.658296 containerd[1459]: time="2025-01-30T15:38:31.656039008Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 15:38:31.658296 containerd[1459]: time="2025-01-30T15:38:31.658299379Z" level=info msg="Connect containerd service" Jan 30 15:38:31.658490 containerd[1459]: time="2025-01-30T15:38:31.658335297Z" level=info msg="using legacy CRI server" Jan 30 15:38:31.658490 containerd[1459]: time="2025-01-30T15:38:31.658344820Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 15:38:31.658490 containerd[1459]: time="2025-01-30T15:38:31.658430674Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 15:38:31.659119 containerd[1459]: time="2025-01-30T15:38:31.659088701Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 15:38:31.662068 containerd[1459]: time="2025-01-30T15:38:31.659460835Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 15:38:31.662068 containerd[1459]: time="2025-01-30T15:38:31.659514247Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 30 15:38:31.662068 containerd[1459]: time="2025-01-30T15:38:31.659899454Z" level=info msg="Start subscribing containerd event" Jan 30 15:38:31.662068 containerd[1459]: time="2025-01-30T15:38:31.659940179Z" level=info msg="Start recovering state" Jan 30 15:38:31.662068 containerd[1459]: time="2025-01-30T15:38:31.659997963Z" level=info msg="Start event monitor" Jan 30 15:38:31.662068 containerd[1459]: time="2025-01-30T15:38:31.660017395Z" level=info msg="Start snapshots syncer" Jan 30 15:38:31.662068 containerd[1459]: time="2025-01-30T15:38:31.660027178Z" level=info msg="Start cni network conf syncer for default" Jan 30 15:38:31.662068 containerd[1459]: time="2025-01-30T15:38:31.660035745Z" level=info msg="Start streaming server" Jan 30 15:38:31.660186 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 15:38:31.664240 containerd[1459]: time="2025-01-30T15:38:31.664200755Z" level=info msg="containerd successfully booted in 0.088728s" Jan 30 15:38:31.809125 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 15:38:31.835291 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 15:38:31.848508 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 15:38:31.861257 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 15:38:31.861474 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 15:38:31.875991 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 15:38:31.890365 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 15:38:31.903928 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 15:38:31.908715 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 15:38:31.911852 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 30 15:38:31.948275 tar[1449]: linux-amd64/LICENSE Jan 30 15:38:31.948404 tar[1449]: linux-amd64/README.md Jan 30 15:38:31.958462 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 15:38:32.183673 systemd-networkd[1377]: eth0: Gained IPv6LL Jan 30 15:38:32.188950 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 15:38:32.192934 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 15:38:32.205747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:38:32.220989 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 15:38:32.282026 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 15:38:33.227109 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 15:38:33.241992 systemd[1]: Started sshd@0-172.24.4.191:22-172.24.4.1:58354.service - OpenSSH per-connection server daemon (172.24.4.1:58354). Jan 30 15:38:34.452412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:38:34.471362 (kubelet)[1543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:38:34.724462 sshd[1534]: Accepted publickey for core from 172.24.4.1 port 58354 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:38:34.729499 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:38:34.756618 systemd-logind[1436]: New session 1 of user core. Jan 30 15:38:34.757775 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 15:38:34.770552 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 15:38:34.795204 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 30 15:38:34.805568 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 15:38:34.814651 (systemd)[1550]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 15:38:34.939546 systemd[1550]: Queued start job for default target default.target. Jan 30 15:38:34.951081 systemd[1550]: Created slice app.slice - User Application Slice. Jan 30 15:38:34.951462 systemd[1550]: Reached target paths.target - Paths. Jan 30 15:38:34.951478 systemd[1550]: Reached target timers.target - Timers. Jan 30 15:38:34.955394 systemd[1550]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 15:38:34.963884 systemd[1550]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 15:38:34.964546 systemd[1550]: Reached target sockets.target - Sockets. Jan 30 15:38:34.964564 systemd[1550]: Reached target basic.target - Basic System. Jan 30 15:38:34.964601 systemd[1550]: Reached target default.target - Main User Target. Jan 30 15:38:34.964627 systemd[1550]: Startup finished in 140ms. Jan 30 15:38:34.964779 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 15:38:34.976378 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 15:38:35.432079 systemd[1]: Started sshd@1-172.24.4.191:22-172.24.4.1:32906.service - OpenSSH per-connection server daemon (172.24.4.1:32906). Jan 30 15:38:35.583238 kubelet[1543]: E0130 15:38:35.583188 1543 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:38:35.586020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:38:35.586197 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 15:38:35.586799 systemd[1]: kubelet.service: Consumed 1.974s CPU time. Jan 30 15:38:36.921870 sshd[1561]: Accepted publickey for core from 172.24.4.1 port 32906 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:38:36.923092 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:38:36.942976 systemd-logind[1436]: New session 2 of user core. Jan 30 15:38:36.964081 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 15:38:36.966551 login[1515]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 15:38:36.974225 login[1516]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 15:38:36.986215 systemd-logind[1436]: New session 4 of user core. Jan 30 15:38:36.997326 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 15:38:37.001484 systemd-logind[1436]: New session 3 of user core. Jan 30 15:38:37.009342 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 15:38:37.665845 sshd[1561]: pam_unix(sshd:session): session closed for user core Jan 30 15:38:37.678416 systemd[1]: sshd@1-172.24.4.191:22-172.24.4.1:32906.service: Deactivated successfully. Jan 30 15:38:37.682332 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 15:38:37.684191 systemd-logind[1436]: Session 2 logged out. Waiting for processes to exit. Jan 30 15:38:37.695920 systemd[1]: Started sshd@2-172.24.4.191:22-172.24.4.1:32922.service - OpenSSH per-connection server daemon (172.24.4.1:32922). Jan 30 15:38:37.702556 systemd-logind[1436]: Removed session 2. 
Jan 30 15:38:38.045771 coreos-metadata[1423]: Jan 30 15:38:38.045 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:38:38.093490 coreos-metadata[1423]: Jan 30 15:38:38.093 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 30 15:38:38.359786 coreos-metadata[1423]: Jan 30 15:38:38.359 INFO Fetch successful Jan 30 15:38:38.359786 coreos-metadata[1423]: Jan 30 15:38:38.359 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 15:38:38.376159 coreos-metadata[1423]: Jan 30 15:38:38.376 INFO Fetch successful Jan 30 15:38:38.376159 coreos-metadata[1423]: Jan 30 15:38:38.376 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 30 15:38:38.391042 coreos-metadata[1423]: Jan 30 15:38:38.390 INFO Fetch successful Jan 30 15:38:38.391042 coreos-metadata[1423]: Jan 30 15:38:38.390 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 30 15:38:38.406230 coreos-metadata[1423]: Jan 30 15:38:38.406 INFO Fetch successful Jan 30 15:38:38.406230 coreos-metadata[1423]: Jan 30 15:38:38.406 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 30 15:38:38.421781 coreos-metadata[1423]: Jan 30 15:38:38.421 INFO Fetch successful Jan 30 15:38:38.421781 coreos-metadata[1423]: Jan 30 15:38:38.421 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 30 15:38:38.435739 coreos-metadata[1423]: Jan 30 15:38:38.435 INFO Fetch successful Jan 30 15:38:38.488800 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 15:38:38.490686 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 30 15:38:38.498877 coreos-metadata[1485]: Jan 30 15:38:38.498 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:38:38.541404 coreos-metadata[1485]: Jan 30 15:38:38.541 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 30 15:38:38.556466 coreos-metadata[1485]: Jan 30 15:38:38.556 INFO Fetch successful Jan 30 15:38:38.556466 coreos-metadata[1485]: Jan 30 15:38:38.556 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 15:38:38.571276 coreos-metadata[1485]: Jan 30 15:38:38.571 INFO Fetch successful Jan 30 15:38:38.577088 unknown[1485]: wrote ssh authorized keys file for user: core Jan 30 15:38:38.613891 update-ssh-keys[1607]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:38:38.615894 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 15:38:38.618979 systemd[1]: Finished sshkeys.service. Jan 30 15:38:38.623888 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 15:38:38.624196 systemd[1]: Startup finished in 1.118s (kernel) + 15.102s (initrd) + 10.861s (userspace) = 27.082s. Jan 30 15:38:39.209096 sshd[1596]: Accepted publickey for core from 172.24.4.1 port 32922 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:38:39.212604 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:38:39.225262 systemd-logind[1436]: New session 5 of user core. Jan 30 15:38:39.232516 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 15:38:39.700736 sshd[1596]: pam_unix(sshd:session): session closed for user core Jan 30 15:38:39.708830 systemd[1]: sshd@2-172.24.4.191:22-172.24.4.1:32922.service: Deactivated successfully. Jan 30 15:38:39.713440 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 15:38:39.715655 systemd-logind[1436]: Session 5 logged out. Waiting for processes to exit. 
Jan 30 15:38:39.718396 systemd-logind[1436]: Removed session 5. Jan 30 15:38:45.809701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 15:38:45.816487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:38:46.157412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:38:46.171081 (kubelet)[1623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:38:46.255453 kubelet[1623]: E0130 15:38:46.255349 1623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:38:46.262694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:38:46.263058 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:38:49.798884 systemd[1]: Started sshd@3-172.24.4.191:22-172.24.4.1:42516.service - OpenSSH per-connection server daemon (172.24.4.1:42516). Jan 30 15:38:51.120911 sshd[1632]: Accepted publickey for core from 172.24.4.1 port 42516 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:38:51.124470 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:38:51.133930 systemd-logind[1436]: New session 6 of user core. Jan 30 15:38:51.146393 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 15:38:51.877604 sshd[1632]: pam_unix(sshd:session): session closed for user core Jan 30 15:38:51.889219 systemd[1]: sshd@3-172.24.4.191:22-172.24.4.1:42516.service: Deactivated successfully. Jan 30 15:38:51.893999 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 30 15:38:51.896456 systemd-logind[1436]: Session 6 logged out. Waiting for processes to exit. Jan 30 15:38:51.911685 systemd[1]: Started sshd@4-172.24.4.191:22-172.24.4.1:42528.service - OpenSSH per-connection server daemon (172.24.4.1:42528). Jan 30 15:38:51.914577 systemd-logind[1436]: Removed session 6. Jan 30 15:38:53.202406 sshd[1639]: Accepted publickey for core from 172.24.4.1 port 42528 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:38:53.205017 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:38:53.215562 systemd-logind[1436]: New session 7 of user core. Jan 30 15:38:53.224433 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 15:38:53.932335 sshd[1639]: pam_unix(sshd:session): session closed for user core Jan 30 15:38:53.944468 systemd[1]: sshd@4-172.24.4.191:22-172.24.4.1:42528.service: Deactivated successfully. Jan 30 15:38:53.947446 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 15:38:53.951394 systemd-logind[1436]: Session 7 logged out. Waiting for processes to exit. Jan 30 15:38:53.957657 systemd[1]: Started sshd@5-172.24.4.191:22-172.24.4.1:59248.service - OpenSSH per-connection server daemon (172.24.4.1:59248). Jan 30 15:38:53.959840 systemd-logind[1436]: Removed session 7. Jan 30 15:38:55.296272 sshd[1646]: Accepted publickey for core from 172.24.4.1 port 59248 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:38:55.298954 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:38:55.308363 systemd-logind[1436]: New session 8 of user core. Jan 30 15:38:55.320381 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 15:38:56.126468 sshd[1646]: pam_unix(sshd:session): session closed for user core Jan 30 15:38:56.134651 systemd[1]: sshd@5-172.24.4.191:22-172.24.4.1:59248.service: Deactivated successfully. 
Jan 30 15:38:56.137620 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 15:38:56.141417 systemd-logind[1436]: Session 8 logged out. Waiting for processes to exit. Jan 30 15:38:56.149682 systemd[1]: Started sshd@6-172.24.4.191:22-172.24.4.1:59260.service - OpenSSH per-connection server daemon (172.24.4.1:59260). Jan 30 15:38:56.152798 systemd-logind[1436]: Removed session 8. Jan 30 15:38:56.309611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 15:38:56.316505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:38:56.635583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:38:56.639513 (kubelet)[1662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:38:56.699063 kubelet[1662]: E0130 15:38:56.698967 1662 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:38:56.702858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:38:56.703218 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:38:57.513596 sshd[1653]: Accepted publickey for core from 172.24.4.1 port 59260 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:38:57.516419 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:38:57.527176 systemd-logind[1436]: New session 9 of user core. Jan 30 15:38:57.530436 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 30 15:38:57.986441 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 15:38:57.987175 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:38:58.546492 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 15:38:58.549215 (dockerd)[1686]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 15:38:59.147426 dockerd[1686]: time="2025-01-30T15:38:59.146995370Z" level=info msg="Starting up" Jan 30 15:38:59.360272 dockerd[1686]: time="2025-01-30T15:38:59.360217833Z" level=info msg="Loading containers: start." Jan 30 15:38:59.523193 kernel: Initializing XFRM netlink socket Jan 30 15:38:59.615116 systemd-networkd[1377]: docker0: Link UP Jan 30 15:38:59.638497 dockerd[1686]: time="2025-01-30T15:38:59.638324392Z" level=info msg="Loading containers: done." Jan 30 15:38:59.663379 dockerd[1686]: time="2025-01-30T15:38:59.662950477Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 15:38:59.663379 dockerd[1686]: time="2025-01-30T15:38:59.663064768Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 15:38:59.663379 dockerd[1686]: time="2025-01-30T15:38:59.663185735Z" level=info msg="Daemon has completed initialization" Jan 30 15:38:59.723536 dockerd[1686]: time="2025-01-30T15:38:59.723400713Z" level=info msg="API listen on /run/docker.sock" Jan 30 15:38:59.723966 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 30 15:39:01.350817 containerd[1459]: time="2025-01-30T15:39:01.350704100Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 15:39:02.124044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4228826072.mount: Deactivated successfully. Jan 30 15:39:03.766119 containerd[1459]: time="2025-01-30T15:39:03.766033949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:03.767921 containerd[1459]: time="2025-01-30T15:39:03.767615878Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976729" Jan 30 15:39:03.768975 containerd[1459]: time="2025-01-30T15:39:03.768902561Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:03.772317 containerd[1459]: time="2025-01-30T15:39:03.772250890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:03.774212 containerd[1459]: time="2025-01-30T15:39:03.773490676Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.42270685s" Jan 30 15:39:03.774212 containerd[1459]: time="2025-01-30T15:39:03.773531577Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 30 15:39:03.786541 containerd[1459]: 
time="2025-01-30T15:39:03.786493027Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 15:39:05.718163 containerd[1459]: time="2025-01-30T15:39:05.717985729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:05.719458 containerd[1459]: time="2025-01-30T15:39:05.719394703Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701151" Jan 30 15:39:05.720254 containerd[1459]: time="2025-01-30T15:39:05.720188700Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:05.723515 containerd[1459]: time="2025-01-30T15:39:05.723466617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:05.724785 containerd[1459]: time="2025-01-30T15:39:05.724741867Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.938210504s" Jan 30 15:39:05.724844 containerd[1459]: time="2025-01-30T15:39:05.724784387Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 30 15:39:05.725454 containerd[1459]: time="2025-01-30T15:39:05.725212665Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 
15:39:06.809829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 15:39:06.820556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:39:06.995263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:39:07.002951 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:39:07.079631 kubelet[1892]: E0130 15:39:07.079200 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:39:07.082455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:39:07.082651 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 15:39:07.457609 containerd[1459]: time="2025-01-30T15:39:07.457121112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:07.458772 containerd[1459]: time="2025-01-30T15:39:07.458636126Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652061" Jan 30 15:39:07.459685 containerd[1459]: time="2025-01-30T15:39:07.459649150Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:07.463359 containerd[1459]: time="2025-01-30T15:39:07.463272416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:07.464667 containerd[1459]: time="2025-01-30T15:39:07.464520606Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.739277594s" Jan 30 15:39:07.464667 containerd[1459]: time="2025-01-30T15:39:07.464560635Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 30 15:39:07.465530 containerd[1459]: time="2025-01-30T15:39:07.465306734Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 15:39:08.790442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281787770.mount: Deactivated successfully. 
Jan 30 15:39:09.373412 containerd[1459]: time="2025-01-30T15:39:09.373329081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:09.374433 containerd[1459]: time="2025-01-30T15:39:09.374362098Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231136" Jan 30 15:39:09.375590 containerd[1459]: time="2025-01-30T15:39:09.375531897Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:09.378091 containerd[1459]: time="2025-01-30T15:39:09.378063580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:09.379307 containerd[1459]: time="2025-01-30T15:39:09.378845568Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.91350483s" Jan 30 15:39:09.379307 containerd[1459]: time="2025-01-30T15:39:09.378899398Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 30 15:39:09.379701 containerd[1459]: time="2025-01-30T15:39:09.379579959Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 15:39:10.029050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3594362175.mount: Deactivated successfully. 
Jan 30 15:39:11.277163 containerd[1459]: time="2025-01-30T15:39:11.276802456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:11.361095 containerd[1459]: time="2025-01-30T15:39:11.360984036Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 30 15:39:11.364840 containerd[1459]: time="2025-01-30T15:39:11.364753779Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:11.373989 containerd[1459]: time="2025-01-30T15:39:11.373922666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:11.378420 containerd[1459]: time="2025-01-30T15:39:11.378301081Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.998662411s" Jan 30 15:39:11.378420 containerd[1459]: time="2025-01-30T15:39:11.378410981Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 15:39:11.380818 containerd[1459]: time="2025-01-30T15:39:11.380317509Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 15:39:11.983165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount984458229.mount: Deactivated successfully. 
Jan 30 15:39:11.994678 containerd[1459]: time="2025-01-30T15:39:11.994500602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:11.996420 containerd[1459]: time="2025-01-30T15:39:11.996035530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 30 15:39:11.998001 containerd[1459]: time="2025-01-30T15:39:11.997862740Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:12.003681 containerd[1459]: time="2025-01-30T15:39:12.003495526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:12.005769 containerd[1459]: time="2025-01-30T15:39:12.005495052Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 625.098646ms" Jan 30 15:39:12.005769 containerd[1459]: time="2025-01-30T15:39:12.005574188Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 15:39:12.007162 containerd[1459]: time="2025-01-30T15:39:12.006759205Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 15:39:12.654299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3174362947.mount: Deactivated successfully. 
Jan 30 15:39:16.211517 containerd[1459]: time="2025-01-30T15:39:16.210794582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:16.214993 containerd[1459]: time="2025-01-30T15:39:16.214803767Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Jan 30 15:39:16.216935 containerd[1459]: time="2025-01-30T15:39:16.216875961Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:16.220831 containerd[1459]: time="2025-01-30T15:39:16.220792665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:16.223026 containerd[1459]: time="2025-01-30T15:39:16.222231943Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.215415935s" Jan 30 15:39:16.223026 containerd[1459]: time="2025-01-30T15:39:16.222307604Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 30 15:39:16.559741 update_engine[1440]: I20250130 15:39:16.558456 1440 update_attempter.cc:509] Updating boot flags... 
Jan 30 15:39:16.616271 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2025) Jan 30 15:39:16.700134 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2024) Jan 30 15:39:17.085874 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 15:39:17.101156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:39:17.224388 (kubelet)[2052]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:39:17.226289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:39:17.270771 kubelet[2052]: E0130 15:39:17.270673 2052 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:39:17.272996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:39:17.273211 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:39:20.643168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:39:20.661384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:39:20.696604 systemd[1]: Reloading requested from client PID 2066 ('systemctl') (unit session-9.scope)... Jan 30 15:39:20.696623 systemd[1]: Reloading... Jan 30 15:39:20.792142 zram_generator::config[2101]: No configuration found. Jan 30 15:39:20.946028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 30 15:39:21.030056 systemd[1]: Reloading finished in 333 ms. Jan 30 15:39:21.080613 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 15:39:21.080687 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 15:39:21.080947 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:39:21.083558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:39:21.175759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:39:21.186367 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 15:39:21.474319 kubelet[2172]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:39:21.474319 kubelet[2172]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 15:39:21.474319 kubelet[2172]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 15:39:21.474319 kubelet[2172]: I0130 15:39:21.227592 2172 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 15:39:22.412123 kubelet[2172]: I0130 15:39:22.410882 2172 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 30 15:39:22.412123 kubelet[2172]: I0130 15:39:22.410966 2172 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 15:39:22.412305 kubelet[2172]: I0130 15:39:22.412083 2172 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 30 15:39:22.451988 kubelet[2172]: E0130 15:39:22.451948 2172 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.191:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:39:22.460673 kubelet[2172]: I0130 15:39:22.460646 2172 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 15:39:22.476999 kubelet[2172]: E0130 15:39:22.476944 2172 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 15:39:22.477729 kubelet[2172]: I0130 15:39:22.477702 2172 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 15:39:22.485853 kubelet[2172]: I0130 15:39:22.485694 2172 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 15:39:22.488605 kubelet[2172]: I0130 15:39:22.488540 2172 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 15:39:22.489710 kubelet[2172]: I0130 15:39:22.488930 2172 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 15:39:22.489710 kubelet[2172]: I0130 15:39:22.488986 2172 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-e-11fb05fa14.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 15:39:22.489710 kubelet[2172]: I0130 15:39:22.489335 2172 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 15:39:22.489710 kubelet[2172]: I0130 15:39:22.489354 2172 container_manager_linux.go:300] "Creating device plugin manager"
Jan 30 15:39:22.490024 kubelet[2172]: I0130 15:39:22.489504 2172 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 15:39:22.494429 kubelet[2172]: I0130 15:39:22.494236 2172 kubelet.go:408] "Attempting to sync node with API server"
Jan 30 15:39:22.494429 kubelet[2172]: I0130 15:39:22.494276 2172 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 15:39:22.494429 kubelet[2172]: I0130 15:39:22.494346 2172 kubelet.go:314] "Adding apiserver pod source"
Jan 30 15:39:22.494429 kubelet[2172]: I0130 15:39:22.494379 2172 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 15:39:22.500137 kubelet[2172]: W0130 15:39:22.499466 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-11fb05fa14.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused
Jan 30 15:39:22.500137 kubelet[2172]: E0130 15:39:22.499579 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-11fb05fa14.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:39:22.510583 kubelet[2172]: W0130 15:39:22.509731 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused
Jan 30 15:39:22.510583 kubelet[2172]: E0130 15:39:22.509855 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:39:22.510583 kubelet[2172]: I0130 15:39:22.510575 2172 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 15:39:22.516091 kubelet[2172]: I0130 15:39:22.516054 2172 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 15:39:22.517716 kubelet[2172]: W0130 15:39:22.517691 2172 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 15:39:22.522944 kubelet[2172]: I0130 15:39:22.522915 2172 server.go:1269] "Started kubelet"
Jan 30 15:39:22.527404 kubelet[2172]: I0130 15:39:22.527329 2172 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 15:39:22.535021 kubelet[2172]: I0130 15:39:22.533673 2172 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 15:39:22.536172 kubelet[2172]: I0130 15:39:22.536147 2172 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 15:39:22.538507 kubelet[2172]: I0130 15:39:22.538487 2172 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 15:39:22.544949 kubelet[2172]: I0130 15:39:22.540959 2172 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 15:39:22.545276 kubelet[2172]: I0130 15:39:22.545262 2172 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 15:39:22.545370 kubelet[2172]: I0130 15:39:22.541330 2172 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 15:39:22.545706 kubelet[2172]: E0130 15:39:22.545640 2172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-11fb05fa14.novalocal?timeout=10s\": dial tcp 172.24.4.191:6443: connect: connection refused" interval="200ms"
Jan 30 15:39:22.545706 kubelet[2172]: E0130 15:39:22.541177 2172 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.191:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.191:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-e-11fb05fa14.novalocal.181f829622718a9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e-11fb05fa14.novalocal,UID:ci-4081-3-0-e-11fb05fa14.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e-11fb05fa14.novalocal,},FirstTimestamp:2025-01-30 15:39:22.522872477 +0000 UTC m=+1.332808768,LastTimestamp:2025-01-30 15:39:22.522872477 +0000 UTC m=+1.332808768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e-11fb05fa14.novalocal,}"
Jan 30 15:39:22.546353 kubelet[2172]: I0130 15:39:22.546310 2172 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 15:39:22.549076 kubelet[2172]: I0130 15:39:22.541311 2172 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 15:39:22.549291 kubelet[2172]: E0130 15:39:22.541555 2172 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-11fb05fa14.novalocal\" not found"
Jan 30 15:39:22.549955 kubelet[2172]: I0130 15:39:22.549917 2172 factory.go:221] Registration of the containerd container factory successfully
Jan 30 15:39:22.549955 kubelet[2172]: I0130 15:39:22.549956 2172 factory.go:221] Registration of the systemd container factory successfully
Jan 30 15:39:22.555213 kubelet[2172]: I0130 15:39:22.555195 2172 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 15:39:22.556553 kubelet[2172]: I0130 15:39:22.556525 2172 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 15:39:22.557567 kubelet[2172]: I0130 15:39:22.557552 2172 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 15:39:22.557641 kubelet[2172]: I0130 15:39:22.557632 2172 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 15:39:22.557709 kubelet[2172]: I0130 15:39:22.557700 2172 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 30 15:39:22.558272 kubelet[2172]: E0130 15:39:22.558233 2172 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 15:39:22.564336 kubelet[2172]: W0130 15:39:22.564247 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused
Jan 30 15:39:22.564489 kubelet[2172]: E0130 15:39:22.564471 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:39:22.565054 kubelet[2172]: W0130 15:39:22.565005 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused
Jan 30 15:39:22.565227 kubelet[2172]: E0130 15:39:22.565202 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:39:22.595567 kubelet[2172]: I0130 15:39:22.595546 2172 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 15:39:22.595738 kubelet[2172]: I0130 15:39:22.595724 2172 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 15:39:22.595995 kubelet[2172]: I0130 15:39:22.595792 2172 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 15:39:22.601844 kubelet[2172]: I0130 15:39:22.601760 2172 policy_none.go:49] "None policy: Start"
Jan 30 15:39:22.603428 kubelet[2172]: I0130 15:39:22.602569 2172 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 15:39:22.603428 kubelet[2172]: I0130 15:39:22.602589 2172 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 15:39:22.623285 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 15:39:22.637257 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 15:39:22.649264 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 15:39:22.650173 kubelet[2172]: E0130 15:39:22.649534 2172 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-11fb05fa14.novalocal\" not found"
Jan 30 15:39:22.652300 kubelet[2172]: I0130 15:39:22.652282 2172 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 15:39:22.652549 kubelet[2172]: I0130 15:39:22.652537 2172 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 15:39:22.652648 kubelet[2172]: I0130 15:39:22.652614 2172 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 15:39:22.652923 kubelet[2172]: I0130 15:39:22.652910 2172 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 15:39:22.655963 kubelet[2172]: E0130 15:39:22.655945 2172 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-e-11fb05fa14.novalocal\" not found"
Jan 30 15:39:22.671526 systemd[1]: Created slice kubepods-burstable-pod5c92d5a2eafb330ac6ee41159847e658.slice - libcontainer container kubepods-burstable-pod5c92d5a2eafb330ac6ee41159847e658.slice.
Jan 30 15:39:22.692165 systemd[1]: Created slice kubepods-burstable-pod5ac08ec302df72946e4f658305b3b97e.slice - libcontainer container kubepods-burstable-pod5ac08ec302df72946e4f658305b3b97e.slice.
Jan 30 15:39:22.697351 systemd[1]: Created slice kubepods-burstable-podb99a2ce57744e5ef3985a70669f30fc6.slice - libcontainer container kubepods-burstable-podb99a2ce57744e5ef3985a70669f30fc6.slice.
Jan 30 15:39:22.747197 kubelet[2172]: E0130 15:39:22.747020 2172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-11fb05fa14.novalocal?timeout=10s\": dial tcp 172.24.4.191:6443: connect: connection refused" interval="400ms"
Jan 30 15:39:22.756182 kubelet[2172]: I0130 15:39:22.756093 2172 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:22.756937 kubelet[2172]: I0130 15:39:22.756424 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c92d5a2eafb330ac6ee41159847e658-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5c92d5a2eafb330ac6ee41159847e658\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:22.756937 kubelet[2172]: I0130 15:39:22.756495 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ac08ec302df72946e4f658305b3b97e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5ac08ec302df72946e4f658305b3b97e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:22.756937 kubelet[2172]: I0130 15:39:22.756536 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ac08ec302df72946e4f658305b3b97e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5ac08ec302df72946e4f658305b3b97e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:22.756937 kubelet[2172]: I0130 15:39:22.756574 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c92d5a2eafb330ac6ee41159847e658-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5c92d5a2eafb330ac6ee41159847e658\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:22.756937 kubelet[2172]: I0130 15:39:22.756618 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c92d5a2eafb330ac6ee41159847e658-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5c92d5a2eafb330ac6ee41159847e658\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:22.757265 kubelet[2172]: I0130 15:39:22.756653 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ac08ec302df72946e4f658305b3b97e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5ac08ec302df72946e4f658305b3b97e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:22.757265 kubelet[2172]: I0130 15:39:22.756686 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ac08ec302df72946e4f658305b3b97e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5ac08ec302df72946e4f658305b3b97e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:22.757265 kubelet[2172]: I0130 15:39:22.756721 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ac08ec302df72946e4f658305b3b97e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5ac08ec302df72946e4f658305b3b97e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:22.757265 kubelet[2172]: I0130 15:39:22.756757 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b99a2ce57744e5ef3985a70669f30fc6-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"b99a2ce57744e5ef3985a70669f30fc6\") " pod="kube-system/kube-scheduler-ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:22.757265 kubelet[2172]: E0130 15:39:22.757083 2172 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.191:6443/api/v1/nodes\": dial tcp 172.24.4.191:6443: connect: connection refused" node="ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:22.962827 kubelet[2172]: I0130 15:39:22.962598 2172 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:22.963548 kubelet[2172]: E0130 15:39:22.963432 2172 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.191:6443/api/v1/nodes\": dial tcp 172.24.4.191:6443: connect: connection refused" node="ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:22.991552 containerd[1459]: time="2025-01-30T15:39:22.991408688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal,Uid:5c92d5a2eafb330ac6ee41159847e658,Namespace:kube-system,Attempt:0,}"
Jan 30 15:39:22.998688 containerd[1459]: time="2025-01-30T15:39:22.997944394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal,Uid:5ac08ec302df72946e4f658305b3b97e,Namespace:kube-system,Attempt:0,}"
Jan 30 15:39:23.002439 containerd[1459]: time="2025-01-30T15:39:23.001856002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-e-11fb05fa14.novalocal,Uid:b99a2ce57744e5ef3985a70669f30fc6,Namespace:kube-system,Attempt:0,}"
Jan 30 15:39:23.148655 kubelet[2172]: E0130 15:39:23.148547 2172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-11fb05fa14.novalocal?timeout=10s\": dial tcp 172.24.4.191:6443: connect: connection refused" interval="800ms"
Jan 30 15:39:23.367937 kubelet[2172]: I0130 15:39:23.367869 2172 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:23.368595 kubelet[2172]: E0130 15:39:23.368491 2172 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.191:6443/api/v1/nodes\": dial tcp 172.24.4.191:6443: connect: connection refused" node="ci-4081-3-0-e-11fb05fa14.novalocal"
Jan 30 15:39:23.486782 kubelet[2172]: W0130 15:39:23.486607 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused
Jan 30 15:39:23.486782 kubelet[2172]: E0130 15:39:23.486698 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:39:23.614028 kubelet[2172]: W0130 15:39:23.613806 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused
Jan 30 15:39:23.614028 kubelet[2172]: E0130 15:39:23.613944 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:39:23.661809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165422832.mount: Deactivated successfully.
Jan 30 15:39:23.678132 containerd[1459]: time="2025-01-30T15:39:23.678023636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 15:39:23.679961 containerd[1459]: time="2025-01-30T15:39:23.679773273Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 15:39:23.681345 containerd[1459]: time="2025-01-30T15:39:23.681288085Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 15:39:23.683186 containerd[1459]: time="2025-01-30T15:39:23.683041299Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 15:39:23.685032 containerd[1459]: time="2025-01-30T15:39:23.684960470Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 30 15:39:23.685588 containerd[1459]: time="2025-01-30T15:39:23.685528821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 15:39:23.686580 containerd[1459]: time="2025-01-30T15:39:23.686474881Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 15:39:23.690675 containerd[1459]: time="2025-01-30T15:39:23.690562949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 15:39:23.694443 containerd[1459]: time="2025-01-30T15:39:23.694030053Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 702.444152ms"
Jan 30 15:39:23.697185 containerd[1459]: time="2025-01-30T15:39:23.697062033Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 695.067653ms"
Jan 30 15:39:23.697901 containerd[1459]: time="2025-01-30T15:39:23.697301016Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 699.11439ms"
Jan 30 15:39:23.710527 kubelet[2172]: W0130 15:39:23.710434 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused
Jan 30 15:39:23.710682 kubelet[2172]: E0130 15:39:23.710555 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:39:23.911358 kubelet[2172]: W0130 15:39:23.911228 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-11fb05fa14.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused
Jan 30 15:39:23.911512 kubelet[2172]: E0130 15:39:23.911364 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-11fb05fa14.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:39:23.914483 containerd[1459]: time="2025-01-30T15:39:23.914217539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:39:23.919013 containerd[1459]: time="2025-01-30T15:39:23.917565137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:39:23.919013 containerd[1459]: time="2025-01-30T15:39:23.917620636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:39:23.919013 containerd[1459]: time="2025-01-30T15:39:23.917858837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:39:23.921020 containerd[1459]: time="2025-01-30T15:39:23.920739903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:39:23.921020 containerd[1459]: time="2025-01-30T15:39:23.920860362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:39:23.921020 containerd[1459]: time="2025-01-30T15:39:23.920897612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:39:23.921318 containerd[1459]: time="2025-01-30T15:39:23.921023431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:39:23.929926 containerd[1459]: time="2025-01-30T15:39:23.929764780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:39:23.929926 containerd[1459]: time="2025-01-30T15:39:23.929827084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:39:23.929926 containerd[1459]: time="2025-01-30T15:39:23.929843419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:39:23.930719 containerd[1459]: time="2025-01-30T15:39:23.929919763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:39:23.946658 systemd[1]: Started cri-containerd-a15c93eafe91d3c4f4887605a52cfab83d8c09fadc02f456a0f445cc08cdcc3a.scope - libcontainer container a15c93eafe91d3c4f4887605a52cfab83d8c09fadc02f456a0f445cc08cdcc3a.
Jan 30 15:39:23.949561 kubelet[2172]: E0130 15:39:23.949511 2172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-11fb05fa14.novalocal?timeout=10s\": dial tcp 172.24.4.191:6443: connect: connection refused" interval="1.6s"
Jan 30 15:39:23.954727 systemd[1]: Started cri-containerd-0560b0d042b70873ce48f390504168fa04431c7b8bd57f19eb4bedd8368ebcb9.scope - libcontainer container 0560b0d042b70873ce48f390504168fa04431c7b8bd57f19eb4bedd8368ebcb9.
Jan 30 15:39:23.970424 systemd[1]: Started cri-containerd-9926cb8f382cfcba5d8c8c80797bf8fea36481314507176b395cc50839ad0aec.scope - libcontainer container 9926cb8f382cfcba5d8c8c80797bf8fea36481314507176b395cc50839ad0aec.
Jan 30 15:39:24.022408 containerd[1459]: time="2025-01-30T15:39:24.022369224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-e-11fb05fa14.novalocal,Uid:b99a2ce57744e5ef3985a70669f30fc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a15c93eafe91d3c4f4887605a52cfab83d8c09fadc02f456a0f445cc08cdcc3a\""
Jan 30 15:39:24.032631 containerd[1459]: time="2025-01-30T15:39:24.032531963Z" level=info msg="CreateContainer within sandbox \"a15c93eafe91d3c4f4887605a52cfab83d8c09fadc02f456a0f445cc08cdcc3a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 15:39:24.042147 containerd[1459]: time="2025-01-30T15:39:24.042037800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal,Uid:5ac08ec302df72946e4f658305b3b97e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0560b0d042b70873ce48f390504168fa04431c7b8bd57f19eb4bedd8368ebcb9\""
Jan 30 15:39:24.045184 containerd[1459]: time="2025-01-30T15:39:24.045143231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal,Uid:5c92d5a2eafb330ac6ee41159847e658,Namespace:kube-system,Attempt:0,} returns sandbox id \"9926cb8f382cfcba5d8c8c80797bf8fea36481314507176b395cc50839ad0aec\""
Jan 30 15:39:24.047235 containerd[1459]: time="2025-01-30T15:39:24.047198249Z" level=info msg="CreateContainer within sandbox \"0560b0d042b70873ce48f390504168fa04431c7b8bd57f19eb4bedd8368ebcb9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 15:39:24.048864 containerd[1459]: time="2025-01-30T15:39:24.048243922Z" level=info msg="CreateContainer within sandbox \"9926cb8f382cfcba5d8c8c80797bf8fea36481314507176b395cc50839ad0aec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 15:39:24.095897 containerd[1459]: time="2025-01-30T15:39:24.095847102Z" level=info msg="CreateContainer within sandbox \"a15c93eafe91d3c4f4887605a52cfab83d8c09fadc02f456a0f445cc08cdcc3a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"af0a3a21d6549e9c01ddf9b554ad975edca510ba9fc459c70a73d0420d326122\""
Jan 30 15:39:24.096959 containerd[1459]: time="2025-01-30T15:39:24.096795086Z" level=info msg="StartContainer for \"af0a3a21d6549e9c01ddf9b554ad975edca510ba9fc459c70a73d0420d326122\""
Jan 30 15:39:24.103136 containerd[1459]: time="2025-01-30T15:39:24.102935116Z" level=info msg="CreateContainer within sandbox \"9926cb8f382cfcba5d8c8c80797bf8fea36481314507176b395cc50839ad0aec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cabf2135dabb11f19a48d229af486c77efd7b2407102e3070b3ce7392343235d\""
Jan 30 15:39:24.107521 containerd[1459]: time="2025-01-30T15:39:24.107492697Z" level=info msg="StartContainer for \"cabf2135dabb11f19a48d229af486c77efd7b2407102e3070b3ce7392343235d\""
Jan 30 15:39:24.113480 containerd[1459]: time="2025-01-30T15:39:24.113413609Z" level=info msg="CreateContainer within sandbox \"0560b0d042b70873ce48f390504168fa04431c7b8bd57f19eb4bedd8368ebcb9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5993fddd050dbdbe521e6b407977bd9cb5fa1252461fd2cdd2ffea13776327bc\""
Jan 30 15:39:24.114886 containerd[1459]: time="2025-01-30T15:39:24.114020645Z" level=info msg="StartContainer for \"5993fddd050dbdbe521e6b407977bd9cb5fa1252461fd2cdd2ffea13776327bc\""
Jan 30 15:39:24.127434 systemd[1]: Started cri-containerd-af0a3a21d6549e9c01ddf9b554ad975edca510ba9fc459c70a73d0420d326122.scope - libcontainer container af0a3a21d6549e9c01ddf9b554ad975edca510ba9fc459c70a73d0420d326122.
Jan 30 15:39:24.156401 systemd[1]: Started cri-containerd-cabf2135dabb11f19a48d229af486c77efd7b2407102e3070b3ce7392343235d.scope - libcontainer container cabf2135dabb11f19a48d229af486c77efd7b2407102e3070b3ce7392343235d.
Jan 30 15:39:24.174407 kubelet[2172]: I0130 15:39:24.174307 2172 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:24.174357 systemd[1]: Started cri-containerd-5993fddd050dbdbe521e6b407977bd9cb5fa1252461fd2cdd2ffea13776327bc.scope - libcontainer container 5993fddd050dbdbe521e6b407977bd9cb5fa1252461fd2cdd2ffea13776327bc. Jan 30 15:39:24.175070 kubelet[2172]: E0130 15:39:24.175032 2172 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.191:6443/api/v1/nodes\": dial tcp 172.24.4.191:6443: connect: connection refused" node="ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:24.205637 containerd[1459]: time="2025-01-30T15:39:24.205582745Z" level=info msg="StartContainer for \"af0a3a21d6549e9c01ddf9b554ad975edca510ba9fc459c70a73d0420d326122\" returns successfully" Jan 30 15:39:24.242865 containerd[1459]: time="2025-01-30T15:39:24.241899116Z" level=info msg="StartContainer for \"cabf2135dabb11f19a48d229af486c77efd7b2407102e3070b3ce7392343235d\" returns successfully" Jan 30 15:39:24.281667 containerd[1459]: time="2025-01-30T15:39:24.281606548Z" level=info msg="StartContainer for \"5993fddd050dbdbe521e6b407977bd9cb5fa1252461fd2cdd2ffea13776327bc\" returns successfully" Jan 30 15:39:25.778175 kubelet[2172]: I0130 15:39:25.778088 2172 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:26.096198 kubelet[2172]: E0130 15:39:26.096156 2172 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-e-11fb05fa14.novalocal\" not found" node="ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:26.130638 kubelet[2172]: E0130 15:39:26.130522 2172 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-e-11fb05fa14.novalocal.181f829622718a9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e-11fb05fa14.novalocal,UID:ci-4081-3-0-e-11fb05fa14.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e-11fb05fa14.novalocal,},FirstTimestamp:2025-01-30 15:39:22.522872477 +0000 UTC m=+1.332808768,LastTimestamp:2025-01-30 15:39:22.522872477 +0000 UTC m=+1.332808768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e-11fb05fa14.novalocal,}" Jan 30 15:39:26.194355 kubelet[2172]: I0130 15:39:26.194302 2172 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:26.194491 kubelet[2172]: E0130 15:39:26.194369 2172 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-0-e-11fb05fa14.novalocal\": node \"ci-4081-3-0-e-11fb05fa14.novalocal\" not found" Jan 30 15:39:26.198112 kubelet[2172]: E0130 15:39:26.196544 2172 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-e-11fb05fa14.novalocal.181f829626bca26a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e-11fb05fa14.novalocal,UID:ci-4081-3-0-e-11fb05fa14.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081-3-0-e-11fb05fa14.novalocal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e-11fb05fa14.novalocal,},FirstTimestamp:2025-01-30 15:39:22.594902634 +0000 UTC m=+1.404838895,LastTimestamp:2025-01-30 15:39:22.594902634 +0000 UTC m=+1.404838895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e-11fb05fa14.novalocal,}" Jan 30 15:39:26.234723 kubelet[2172]: E0130 15:39:26.234665 2172 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-11fb05fa14.novalocal\" not found" Jan 30 15:39:26.262228 kubelet[2172]: E0130 15:39:26.262010 2172 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-e-11fb05fa14.novalocal.181f829626bcb8ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e-11fb05fa14.novalocal,UID:ci-4081-3-0-e-11fb05fa14.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4081-3-0-e-11fb05fa14.novalocal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e-11fb05fa14.novalocal,},FirstTimestamp:2025-01-30 15:39:22.594908396 +0000 UTC m=+1.404844667,LastTimestamp:2025-01-30 15:39:22.594908396 +0000 UTC m=+1.404844667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e-11fb05fa14.novalocal,}" Jan 30 15:39:26.318716 kubelet[2172]: E0130 15:39:26.318423 2172 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-e-11fb05fa14.novalocal.181f829626bcc682 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e-11fb05fa14.novalocal,UID:ci-4081-3-0-e-11fb05fa14.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ci-4081-3-0-e-11fb05fa14.novalocal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e-11fb05fa14.novalocal,},FirstTimestamp:2025-01-30 
15:39:22.594911874 +0000 UTC m=+1.404848145,LastTimestamp:2025-01-30 15:39:22.594911874 +0000 UTC m=+1.404848145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e-11fb05fa14.novalocal,}" Jan 30 15:39:26.335546 kubelet[2172]: E0130 15:39:26.335486 2172 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-11fb05fa14.novalocal\" not found" Jan 30 15:39:26.374437 kubelet[2172]: E0130 15:39:26.374067 2172 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-e-11fb05fa14.novalocal.181f82962a8e727f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e-11fb05fa14.novalocal,UID:ci-4081-3-0-e-11fb05fa14.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e-11fb05fa14.novalocal,},FirstTimestamp:2025-01-30 15:39:22.658984575 +0000 UTC m=+1.468920836,LastTimestamp:2025-01-30 15:39:22.658984575 +0000 UTC m=+1.468920836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e-11fb05fa14.novalocal,}" Jan 30 15:39:26.435697 kubelet[2172]: E0130 15:39:26.435656 2172 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-11fb05fa14.novalocal\" not found" Jan 30 15:39:26.536793 kubelet[2172]: E0130 15:39:26.536748 2172 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-11fb05fa14.novalocal\" not found" Jan 30 15:39:27.502586 kubelet[2172]: I0130 15:39:27.502528 2172 apiserver.go:52] "Watching apiserver" Jan 30 15:39:27.547759 kubelet[2172]: I0130 15:39:27.547614 2172 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 15:39:28.052873 kubelet[2172]: W0130 15:39:28.052794 2172 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:39:29.146572 systemd[1]: Reloading requested from client PID 2445 ('systemctl') (unit session-9.scope)... Jan 30 15:39:29.146590 systemd[1]: Reloading... Jan 30 15:39:29.247224 zram_generator::config[2484]: No configuration found. Jan 30 15:39:29.396834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:39:29.503048 systemd[1]: Reloading finished in 356 ms. Jan 30 15:39:29.540977 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:39:29.552373 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 15:39:29.552629 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:39:29.552684 systemd[1]: kubelet.service: Consumed 1.629s CPU time, 119.6M memory peak, 0B memory swap peak. Jan 30 15:39:29.564401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:39:29.684346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:39:29.699422 (kubelet)[2548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 15:39:29.752344 kubelet[2548]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 15:39:29.752344 kubelet[2548]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 15:39:29.752344 kubelet[2548]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:39:29.752344 kubelet[2548]: I0130 15:39:29.749620 2548 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 15:39:29.759130 kubelet[2548]: I0130 15:39:29.759040 2548 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 15:39:29.759130 kubelet[2548]: I0130 15:39:29.759068 2548 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 15:39:29.759524 kubelet[2548]: I0130 15:39:29.759362 2548 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 15:39:29.761227 kubelet[2548]: I0130 15:39:29.761202 2548 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 15:39:29.763645 kubelet[2548]: I0130 15:39:29.763613 2548 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:39:29.768381 kubelet[2548]: E0130 15:39:29.768344 2548 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 15:39:29.768381 kubelet[2548]: I0130 15:39:29.768377 2548 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jan 30 15:39:29.772106 kubelet[2548]: I0130 15:39:29.771834 2548 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 15:39:29.772106 kubelet[2548]: I0130 15:39:29.771950 2548 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 15:39:29.772106 kubelet[2548]: I0130 15:39:29.772049 2548 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 15:39:29.772421 kubelet[2548]: I0130 15:39:29.772075 2548 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-e-11fb05fa14.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Ex
perimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 15:39:29.772421 kubelet[2548]: I0130 15:39:29.772416 2548 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 15:39:29.772540 kubelet[2548]: I0130 15:39:29.772429 2548 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 15:39:29.772540 kubelet[2548]: I0130 15:39:29.772459 2548 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:39:29.772591 kubelet[2548]: I0130 15:39:29.772548 2548 kubelet.go:408] "Attempting to sync node with API server" Jan 30 15:39:29.772591 kubelet[2548]: I0130 15:39:29.772562 2548 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 15:39:29.772591 kubelet[2548]: I0130 15:39:29.772587 2548 kubelet.go:314] "Adding apiserver pod source" Jan 30 15:39:29.772658 kubelet[2548]: I0130 15:39:29.772601 2548 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 15:39:29.778871 kubelet[2548]: I0130 15:39:29.777187 2548 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 15:39:29.778871 kubelet[2548]: I0130 15:39:29.777739 2548 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 15:39:29.780124 kubelet[2548]: I0130 15:39:29.779535 2548 server.go:1269] "Started kubelet" Jan 30 15:39:29.784645 kubelet[2548]: I0130 15:39:29.784620 2548 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 15:39:29.789139 kubelet[2548]: I0130 15:39:29.788938 2548 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 15:39:29.789710 kubelet[2548]: I0130 15:39:29.789648 2548 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 15:39:29.789945 
kubelet[2548]: I0130 15:39:29.789923 2548 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 15:39:29.790347 kubelet[2548]: I0130 15:39:29.790325 2548 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 15:39:29.791706 kubelet[2548]: I0130 15:39:29.791689 2548 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 15:39:29.791870 kubelet[2548]: E0130 15:39:29.791849 2548 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-e-11fb05fa14.novalocal\" not found" Jan 30 15:39:29.800798 kubelet[2548]: I0130 15:39:29.796751 2548 factory.go:221] Registration of the systemd container factory successfully Jan 30 15:39:29.800798 kubelet[2548]: I0130 15:39:29.798254 2548 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 15:39:29.800798 kubelet[2548]: I0130 15:39:29.799659 2548 factory.go:221] Registration of the containerd container factory successfully Jan 30 15:39:29.806113 kubelet[2548]: I0130 15:39:29.802971 2548 server.go:460] "Adding debug handlers to kubelet server" Jan 30 15:39:29.806113 kubelet[2548]: I0130 15:39:29.803968 2548 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 15:39:29.806113 kubelet[2548]: I0130 15:39:29.804087 2548 reconciler.go:26] "Reconciler: start to sync state" Jan 30 15:39:29.806873 kubelet[2548]: I0130 15:39:29.806825 2548 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 15:39:29.808132 kubelet[2548]: I0130 15:39:29.807775 2548 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 15:39:29.808132 kubelet[2548]: I0130 15:39:29.807800 2548 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 15:39:29.808132 kubelet[2548]: I0130 15:39:29.807819 2548 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 15:39:29.808132 kubelet[2548]: E0130 15:39:29.807856 2548 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 15:39:29.862678 kubelet[2548]: I0130 15:39:29.862040 2548 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 15:39:29.862678 kubelet[2548]: I0130 15:39:29.862062 2548 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 15:39:29.862678 kubelet[2548]: I0130 15:39:29.862118 2548 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:39:29.862678 kubelet[2548]: I0130 15:39:29.862304 2548 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 15:39:29.862678 kubelet[2548]: I0130 15:39:29.862316 2548 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 15:39:29.862678 kubelet[2548]: I0130 15:39:29.862338 2548 policy_none.go:49] "None policy: Start" Jan 30 15:39:29.862943 kubelet[2548]: I0130 15:39:29.862884 2548 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 15:39:29.862943 kubelet[2548]: I0130 15:39:29.862932 2548 state_mem.go:35] "Initializing new in-memory state store" Jan 30 15:39:29.863911 kubelet[2548]: I0130 15:39:29.863136 2548 state_mem.go:75] "Updated machine memory state" Jan 30 15:39:29.869343 kubelet[2548]: I0130 15:39:29.869295 2548 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 15:39:29.869541 kubelet[2548]: I0130 15:39:29.869517 2548 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 15:39:29.869699 kubelet[2548]: I0130 15:39:29.869651 2548 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 15:39:29.871117 kubelet[2548]: I0130 15:39:29.871084 2548 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 15:39:29.978582 kubelet[2548]: I0130 15:39:29.978381 2548 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:30.015236 kubelet[2548]: W0130 15:39:30.014488 2548 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:39:30.015236 kubelet[2548]: W0130 15:39:30.014615 2548 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:39:30.019948 kubelet[2548]: W0130 15:39:30.019910 2548 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:39:30.020405 kubelet[2548]: E0130 15:39:30.020337 2548 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:30.036096 kubelet[2548]: I0130 15:39:30.036023 2548 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:30.036431 kubelet[2548]: I0130 15:39:30.036221 2548 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:30.106263 kubelet[2548]: I0130 15:39:30.106068 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b99a2ce57744e5ef3985a70669f30fc6-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: 
\"b99a2ce57744e5ef3985a70669f30fc6\") " pod="kube-system/kube-scheduler-ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:30.106263 kubelet[2548]: I0130 15:39:30.106168 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c92d5a2eafb330ac6ee41159847e658-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5c92d5a2eafb330ac6ee41159847e658\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:30.106263 kubelet[2548]: I0130 15:39:30.106197 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ac08ec302df72946e4f658305b3b97e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5ac08ec302df72946e4f658305b3b97e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:30.106263 kubelet[2548]: I0130 15:39:30.106226 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c92d5a2eafb330ac6ee41159847e658-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5c92d5a2eafb330ac6ee41159847e658\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:30.106263 kubelet[2548]: I0130 15:39:30.106249 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c92d5a2eafb330ac6ee41159847e658-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5c92d5a2eafb330ac6ee41159847e658\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:30.106689 kubelet[2548]: I0130 15:39:30.106268 2548 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ac08ec302df72946e4f658305b3b97e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5ac08ec302df72946e4f658305b3b97e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:30.106689 kubelet[2548]: I0130 15:39:30.106286 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ac08ec302df72946e4f658305b3b97e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5ac08ec302df72946e4f658305b3b97e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:30.106689 kubelet[2548]: I0130 15:39:30.106308 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ac08ec302df72946e4f658305b3b97e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5ac08ec302df72946e4f658305b3b97e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:30.106689 kubelet[2548]: I0130 15:39:30.106331 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ac08ec302df72946e4f658305b3b97e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal\" (UID: \"5ac08ec302df72946e4f658305b3b97e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal" Jan 30 15:39:30.775500 kubelet[2548]: I0130 15:39:30.773579 2548 apiserver.go:52] "Watching apiserver" Jan 30 15:39:30.804348 kubelet[2548]: I0130 15:39:30.804269 2548 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" 
Jan 30 15:39:30.880571 kubelet[2548]: I0130 15:39:30.879523 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-e-11fb05fa14.novalocal" podStartSLOduration=1.879502277 podStartE2EDuration="1.879502277s" podCreationTimestamp="2025-01-30 15:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:39:30.864249072 +0000 UTC m=+1.159214945" watchObservedRunningTime="2025-01-30 15:39:30.879502277 +0000 UTC m=+1.174468110" Jan 30 15:39:30.893324 kubelet[2548]: I0130 15:39:30.892648 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-e-11fb05fa14.novalocal" podStartSLOduration=2.892625865 podStartE2EDuration="2.892625865s" podCreationTimestamp="2025-01-30 15:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:39:30.880400253 +0000 UTC m=+1.175366076" watchObservedRunningTime="2025-01-30 15:39:30.892625865 +0000 UTC m=+1.187591698" Jan 30 15:39:30.907569 kubelet[2548]: I0130 15:39:30.907204 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-e-11fb05fa14.novalocal" podStartSLOduration=1.9071855279999999 podStartE2EDuration="1.907185528s" podCreationTimestamp="2025-01-30 15:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:39:30.893631946 +0000 UTC m=+1.188597779" watchObservedRunningTime="2025-01-30 15:39:30.907185528 +0000 UTC m=+1.202151361" Jan 30 15:39:31.877272 sudo[1671]: pam_unix(sudo:session): session closed for user root Jan 30 15:39:32.019884 sshd[1653]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:32.026864 systemd[1]: 
sshd@6-172.24.4.191:22-172.24.4.1:59260.service: Deactivated successfully. Jan 30 15:39:32.030945 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 15:39:32.031383 systemd[1]: session-9.scope: Consumed 6.548s CPU time, 161.6M memory peak, 0B memory swap peak. Jan 30 15:39:32.032575 systemd-logind[1436]: Session 9 logged out. Waiting for processes to exit. Jan 30 15:39:32.035390 systemd-logind[1436]: Removed session 9. Jan 30 15:39:33.554514 kubelet[2548]: I0130 15:39:33.554461 2548 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 15:39:33.555813 containerd[1459]: time="2025-01-30T15:39:33.555744053Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 15:39:33.556611 kubelet[2548]: I0130 15:39:33.556541 2548 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 15:39:34.323984 systemd[1]: Created slice kubepods-besteffort-pod0e4899b5_a622_42c7_bad6_66a47a340726.slice - libcontainer container kubepods-besteffort-pod0e4899b5_a622_42c7_bad6_66a47a340726.slice. 
Jan 30 15:39:34.337260 kubelet[2548]: I0130 15:39:34.337222 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e4899b5-a622-42c7-bad6-66a47a340726-lib-modules\") pod \"kube-proxy-bwrzp\" (UID: \"0e4899b5-a622-42c7-bad6-66a47a340726\") " pod="kube-system/kube-proxy-bwrzp" Jan 30 15:39:34.337260 kubelet[2548]: I0130 15:39:34.337264 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqcl6\" (UniqueName: \"kubernetes.io/projected/0e4899b5-a622-42c7-bad6-66a47a340726-kube-api-access-pqcl6\") pod \"kube-proxy-bwrzp\" (UID: \"0e4899b5-a622-42c7-bad6-66a47a340726\") " pod="kube-system/kube-proxy-bwrzp" Jan 30 15:39:34.337492 kubelet[2548]: I0130 15:39:34.337292 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0e4899b5-a622-42c7-bad6-66a47a340726-kube-proxy\") pod \"kube-proxy-bwrzp\" (UID: \"0e4899b5-a622-42c7-bad6-66a47a340726\") " pod="kube-system/kube-proxy-bwrzp" Jan 30 15:39:34.337492 kubelet[2548]: I0130 15:39:34.337312 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e4899b5-a622-42c7-bad6-66a47a340726-xtables-lock\") pod \"kube-proxy-bwrzp\" (UID: \"0e4899b5-a622-42c7-bad6-66a47a340726\") " pod="kube-system/kube-proxy-bwrzp" Jan 30 15:39:34.343740 systemd[1]: Created slice kubepods-burstable-podcf843896_71ac_4f48_ab68_55bd038dcf4c.slice - libcontainer container kubepods-burstable-podcf843896_71ac_4f48_ab68_55bd038dcf4c.slice. 
Jan 30 15:39:34.438175 kubelet[2548]: I0130 15:39:34.438124 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/cf843896-71ac-4f48-ab68-55bd038dcf4c-cni\") pod \"kube-flannel-ds-bcmdt\" (UID: \"cf843896-71ac-4f48-ab68-55bd038dcf4c\") " pod="kube-flannel/kube-flannel-ds-bcmdt" Jan 30 15:39:34.438511 kubelet[2548]: I0130 15:39:34.438468 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf843896-71ac-4f48-ab68-55bd038dcf4c-xtables-lock\") pod \"kube-flannel-ds-bcmdt\" (UID: \"cf843896-71ac-4f48-ab68-55bd038dcf4c\") " pod="kube-flannel/kube-flannel-ds-bcmdt" Jan 30 15:39:34.438767 kubelet[2548]: I0130 15:39:34.438631 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cf843896-71ac-4f48-ab68-55bd038dcf4c-run\") pod \"kube-flannel-ds-bcmdt\" (UID: \"cf843896-71ac-4f48-ab68-55bd038dcf4c\") " pod="kube-flannel/kube-flannel-ds-bcmdt" Jan 30 15:39:34.438886 kubelet[2548]: I0130 15:39:34.438656 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gwkj\" (UniqueName: \"kubernetes.io/projected/cf843896-71ac-4f48-ab68-55bd038dcf4c-kube-api-access-6gwkj\") pod \"kube-flannel-ds-bcmdt\" (UID: \"cf843896-71ac-4f48-ab68-55bd038dcf4c\") " pod="kube-flannel/kube-flannel-ds-bcmdt" Jan 30 15:39:34.439798 kubelet[2548]: I0130 15:39:34.439147 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/cf843896-71ac-4f48-ab68-55bd038dcf4c-cni-plugin\") pod \"kube-flannel-ds-bcmdt\" (UID: \"cf843896-71ac-4f48-ab68-55bd038dcf4c\") " pod="kube-flannel/kube-flannel-ds-bcmdt" Jan 30 15:39:34.439798 kubelet[2548]: I0130 15:39:34.439175 2548 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/cf843896-71ac-4f48-ab68-55bd038dcf4c-flannel-cfg\") pod \"kube-flannel-ds-bcmdt\" (UID: \"cf843896-71ac-4f48-ab68-55bd038dcf4c\") " pod="kube-flannel/kube-flannel-ds-bcmdt" Jan 30 15:39:34.448320 kubelet[2548]: E0130 15:39:34.448274 2548 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 15:39:34.448320 kubelet[2548]: E0130 15:39:34.448308 2548 projected.go:194] Error preparing data for projected volume kube-api-access-pqcl6 for pod kube-system/kube-proxy-bwrzp: configmap "kube-root-ca.crt" not found Jan 30 15:39:34.448478 kubelet[2548]: E0130 15:39:34.448374 2548 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e4899b5-a622-42c7-bad6-66a47a340726-kube-api-access-pqcl6 podName:0e4899b5-a622-42c7-bad6-66a47a340726 nodeName:}" failed. No retries permitted until 2025-01-30 15:39:34.948352067 +0000 UTC m=+5.243317890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pqcl6" (UniqueName: "kubernetes.io/projected/0e4899b5-a622-42c7-bad6-66a47a340726-kube-api-access-pqcl6") pod "kube-proxy-bwrzp" (UID: "0e4899b5-a622-42c7-bad6-66a47a340726") : configmap "kube-root-ca.crt" not found Jan 30 15:39:34.649635 containerd[1459]: time="2025-01-30T15:39:34.648587135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-bcmdt,Uid:cf843896-71ac-4f48-ab68-55bd038dcf4c,Namespace:kube-flannel,Attempt:0,}" Jan 30 15:39:34.713808 containerd[1459]: time="2025-01-30T15:39:34.713306808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:39:34.713808 containerd[1459]: time="2025-01-30T15:39:34.713456365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:39:34.713808 containerd[1459]: time="2025-01-30T15:39:34.713523893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:39:34.713808 containerd[1459]: time="2025-01-30T15:39:34.713699233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:39:34.750296 systemd[1]: Started cri-containerd-98028a16c541c260d92347552471a6cbc74f925321621e116fa34f81164b012a.scope - libcontainer container 98028a16c541c260d92347552471a6cbc74f925321621e116fa34f81164b012a. Jan 30 15:39:34.790205 containerd[1459]: time="2025-01-30T15:39:34.790082610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-bcmdt,Uid:cf843896-71ac-4f48-ab68-55bd038dcf4c,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"98028a16c541c260d92347552471a6cbc74f925321621e116fa34f81164b012a\"" Jan 30 15:39:34.792461 containerd[1459]: time="2025-01-30T15:39:34.792408398Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 30 15:39:35.236262 containerd[1459]: time="2025-01-30T15:39:35.236187979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bwrzp,Uid:0e4899b5-a622-42c7-bad6-66a47a340726,Namespace:kube-system,Attempt:0,}" Jan 30 15:39:35.284383 containerd[1459]: time="2025-01-30T15:39:35.283878428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:39:35.284941 containerd[1459]: time="2025-01-30T15:39:35.284295621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:39:35.285370 containerd[1459]: time="2025-01-30T15:39:35.285263570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:39:35.286414 containerd[1459]: time="2025-01-30T15:39:35.286096804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:39:35.327553 systemd[1]: Started cri-containerd-05ecefd9663f1f5e56645f9516efb4a3e8151bdf95ebd62d56d4d90525843118.scope - libcontainer container 05ecefd9663f1f5e56645f9516efb4a3e8151bdf95ebd62d56d4d90525843118. Jan 30 15:39:35.370637 containerd[1459]: time="2025-01-30T15:39:35.369862637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bwrzp,Uid:0e4899b5-a622-42c7-bad6-66a47a340726,Namespace:kube-system,Attempt:0,} returns sandbox id \"05ecefd9663f1f5e56645f9516efb4a3e8151bdf95ebd62d56d4d90525843118\"" Jan 30 15:39:35.374502 containerd[1459]: time="2025-01-30T15:39:35.374375332Z" level=info msg="CreateContainer within sandbox \"05ecefd9663f1f5e56645f9516efb4a3e8151bdf95ebd62d56d4d90525843118\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 15:39:35.402369 containerd[1459]: time="2025-01-30T15:39:35.402326376Z" level=info msg="CreateContainer within sandbox \"05ecefd9663f1f5e56645f9516efb4a3e8151bdf95ebd62d56d4d90525843118\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6424b93d68738affd6e7dace2c6faae2e29a22bcb5360b3ad8271d9979805bd9\"" Jan 30 15:39:35.403666 containerd[1459]: time="2025-01-30T15:39:35.403584869Z" level=info msg="StartContainer for \"6424b93d68738affd6e7dace2c6faae2e29a22bcb5360b3ad8271d9979805bd9\"" Jan 30 15:39:35.436547 systemd[1]: Started cri-containerd-6424b93d68738affd6e7dace2c6faae2e29a22bcb5360b3ad8271d9979805bd9.scope - libcontainer container 6424b93d68738affd6e7dace2c6faae2e29a22bcb5360b3ad8271d9979805bd9. 
Jan 30 15:39:35.488412 containerd[1459]: time="2025-01-30T15:39:35.487970860Z" level=info msg="StartContainer for \"6424b93d68738affd6e7dace2c6faae2e29a22bcb5360b3ad8271d9979805bd9\" returns successfully" Jan 30 15:39:36.778537 kubelet[2548]: I0130 15:39:36.778037 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bwrzp" podStartSLOduration=2.777969558 podStartE2EDuration="2.777969558s" podCreationTimestamp="2025-01-30 15:39:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:39:35.880598098 +0000 UTC m=+6.175563971" watchObservedRunningTime="2025-01-30 15:39:36.777969558 +0000 UTC m=+7.072935441" Jan 30 15:39:37.025576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746047328.mount: Deactivated successfully. Jan 30 15:39:37.087159 containerd[1459]: time="2025-01-30T15:39:37.086926101Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:37.088433 containerd[1459]: time="2025-01-30T15:39:37.088388035Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852936" Jan 30 15:39:37.089814 containerd[1459]: time="2025-01-30T15:39:37.089776117Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:37.092136 containerd[1459]: time="2025-01-30T15:39:37.092053820Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:37.092870 containerd[1459]: time="2025-01-30T15:39:37.092834898Z" level=info msg="Pulled image 
\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.300389845s" Jan 30 15:39:37.092916 containerd[1459]: time="2025-01-30T15:39:37.092869770Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 30 15:39:37.096770 containerd[1459]: time="2025-01-30T15:39:37.096637602Z" level=info msg="CreateContainer within sandbox \"98028a16c541c260d92347552471a6cbc74f925321621e116fa34f81164b012a\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 30 15:39:37.128806 containerd[1459]: time="2025-01-30T15:39:37.128703558Z" level=info msg="CreateContainer within sandbox \"98028a16c541c260d92347552471a6cbc74f925321621e116fa34f81164b012a\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"fbf58d89bb70abe859583a32a5f9992b58aff778d4acf60c9bd15e158e248d2c\"" Jan 30 15:39:37.130276 containerd[1459]: time="2025-01-30T15:39:37.129411618Z" level=info msg="StartContainer for \"fbf58d89bb70abe859583a32a5f9992b58aff778d4acf60c9bd15e158e248d2c\"" Jan 30 15:39:37.160269 systemd[1]: Started cri-containerd-fbf58d89bb70abe859583a32a5f9992b58aff778d4acf60c9bd15e158e248d2c.scope - libcontainer container fbf58d89bb70abe859583a32a5f9992b58aff778d4acf60c9bd15e158e248d2c. Jan 30 15:39:37.185173 systemd[1]: cri-containerd-fbf58d89bb70abe859583a32a5f9992b58aff778d4acf60c9bd15e158e248d2c.scope: Deactivated successfully. 
Jan 30 15:39:37.186701 containerd[1459]: time="2025-01-30T15:39:37.186664719Z" level=info msg="StartContainer for \"fbf58d89bb70abe859583a32a5f9992b58aff778d4acf60c9bd15e158e248d2c\" returns successfully" Jan 30 15:39:37.377373 containerd[1459]: time="2025-01-30T15:39:37.377130127Z" level=info msg="shim disconnected" id=fbf58d89bb70abe859583a32a5f9992b58aff778d4acf60c9bd15e158e248d2c namespace=k8s.io Jan 30 15:39:37.377373 containerd[1459]: time="2025-01-30T15:39:37.377231023Z" level=warning msg="cleaning up after shim disconnected" id=fbf58d89bb70abe859583a32a5f9992b58aff778d4acf60c9bd15e158e248d2c namespace=k8s.io Jan 30 15:39:37.377373 containerd[1459]: time="2025-01-30T15:39:37.377252236Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:39:37.872266 containerd[1459]: time="2025-01-30T15:39:37.872057511Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 30 15:39:37.918543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbf58d89bb70abe859583a32a5f9992b58aff778d4acf60c9bd15e158e248d2c-rootfs.mount: Deactivated successfully. Jan 30 15:39:40.189846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount632790704.mount: Deactivated successfully. 
Jan 30 15:39:41.297129 containerd[1459]: time="2025-01-30T15:39:41.297048108Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:41.298822 containerd[1459]: time="2025-01-30T15:39:41.298768417Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jan 30 15:39:41.301038 containerd[1459]: time="2025-01-30T15:39:41.300974996Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:41.310865 containerd[1459]: time="2025-01-30T15:39:41.310756738Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:39:41.312999 containerd[1459]: time="2025-01-30T15:39:41.312798405Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.439548871s" Jan 30 15:39:41.312999 containerd[1459]: time="2025-01-30T15:39:41.312880791Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 30 15:39:41.319577 containerd[1459]: time="2025-01-30T15:39:41.319488865Z" level=info msg="CreateContainer within sandbox \"98028a16c541c260d92347552471a6cbc74f925321621e116fa34f81164b012a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 15:39:41.345809 containerd[1459]: time="2025-01-30T15:39:41.345722949Z" level=info msg="CreateContainer within 
sandbox \"98028a16c541c260d92347552471a6cbc74f925321621e116fa34f81164b012a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"377e5f728beccea83ab095d823f03a90e9fec725ec0a1062d3c9f22eb9d864bb\"" Jan 30 15:39:41.347971 containerd[1459]: time="2025-01-30T15:39:41.346958851Z" level=info msg="StartContainer for \"377e5f728beccea83ab095d823f03a90e9fec725ec0a1062d3c9f22eb9d864bb\"" Jan 30 15:39:41.389333 systemd[1]: Started cri-containerd-377e5f728beccea83ab095d823f03a90e9fec725ec0a1062d3c9f22eb9d864bb.scope - libcontainer container 377e5f728beccea83ab095d823f03a90e9fec725ec0a1062d3c9f22eb9d864bb. Jan 30 15:39:41.415445 systemd[1]: cri-containerd-377e5f728beccea83ab095d823f03a90e9fec725ec0a1062d3c9f22eb9d864bb.scope: Deactivated successfully. Jan 30 15:39:41.421551 containerd[1459]: time="2025-01-30T15:39:41.420888962Z" level=info msg="StartContainer for \"377e5f728beccea83ab095d823f03a90e9fec725ec0a1062d3c9f22eb9d864bb\" returns successfully" Jan 30 15:39:41.439716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-377e5f728beccea83ab095d823f03a90e9fec725ec0a1062d3c9f22eb9d864bb-rootfs.mount: Deactivated successfully. 
Jan 30 15:39:41.486336 kubelet[2548]: I0130 15:39:41.486147 2548 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 15:39:41.805920 containerd[1459]: time="2025-01-30T15:39:41.805523781Z" level=info msg="shim disconnected" id=377e5f728beccea83ab095d823f03a90e9fec725ec0a1062d3c9f22eb9d864bb namespace=k8s.io Jan 30 15:39:41.811252 containerd[1459]: time="2025-01-30T15:39:41.806387002Z" level=warning msg="cleaning up after shim disconnected" id=377e5f728beccea83ab095d823f03a90e9fec725ec0a1062d3c9f22eb9d864bb namespace=k8s.io Jan 30 15:39:41.811252 containerd[1459]: time="2025-01-30T15:39:41.806434106Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:39:41.835181 systemd[1]: Created slice kubepods-burstable-pod3d1aa63b_7d23_41a2_abda_93a9dc5d0357.slice - libcontainer container kubepods-burstable-pod3d1aa63b_7d23_41a2_abda_93a9dc5d0357.slice. Jan 30 15:39:41.863400 systemd[1]: Created slice kubepods-burstable-pod349b10d4_9660_42c3_b4c4_4be02b890682.slice - libcontainer container kubepods-burstable-pod349b10d4_9660_42c3_b4c4_4be02b890682.slice. 
Jan 30 15:39:41.896882 kubelet[2548]: I0130 15:39:41.896212 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/349b10d4-9660-42c3-b4c4-4be02b890682-config-volume\") pod \"coredns-6f6b679f8f-777rc\" (UID: \"349b10d4-9660-42c3-b4c4-4be02b890682\") " pod="kube-system/coredns-6f6b679f8f-777rc" Jan 30 15:39:41.897049 kubelet[2548]: I0130 15:39:41.896255 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx97k\" (UniqueName: \"kubernetes.io/projected/349b10d4-9660-42c3-b4c4-4be02b890682-kube-api-access-hx97k\") pod \"coredns-6f6b679f8f-777rc\" (UID: \"349b10d4-9660-42c3-b4c4-4be02b890682\") " pod="kube-system/coredns-6f6b679f8f-777rc" Jan 30 15:39:41.897049 kubelet[2548]: I0130 15:39:41.896937 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d1aa63b-7d23-41a2-abda-93a9dc5d0357-config-volume\") pod \"coredns-6f6b679f8f-kng66\" (UID: \"3d1aa63b-7d23-41a2-abda-93a9dc5d0357\") " pod="kube-system/coredns-6f6b679f8f-kng66" Jan 30 15:39:41.897049 kubelet[2548]: I0130 15:39:41.897009 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzmmg\" (UniqueName: \"kubernetes.io/projected/3d1aa63b-7d23-41a2-abda-93a9dc5d0357-kube-api-access-rzmmg\") pod \"coredns-6f6b679f8f-kng66\" (UID: \"3d1aa63b-7d23-41a2-abda-93a9dc5d0357\") " pod="kube-system/coredns-6f6b679f8f-kng66" Jan 30 15:39:41.897756 containerd[1459]: time="2025-01-30T15:39:41.897566762Z" level=info msg="CreateContainer within sandbox \"98028a16c541c260d92347552471a6cbc74f925321621e116fa34f81164b012a\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 30 15:39:41.924418 containerd[1459]: time="2025-01-30T15:39:41.924368761Z" level=info msg="CreateContainer within 
sandbox \"98028a16c541c260d92347552471a6cbc74f925321621e116fa34f81164b012a\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"615d2ec1bcfe9cb09b312841207ff4a0c75546e0beceb9ae02343539dc93f698\"" Jan 30 15:39:41.925371 containerd[1459]: time="2025-01-30T15:39:41.925338186Z" level=info msg="StartContainer for \"615d2ec1bcfe9cb09b312841207ff4a0c75546e0beceb9ae02343539dc93f698\"" Jan 30 15:39:41.956331 systemd[1]: Started cri-containerd-615d2ec1bcfe9cb09b312841207ff4a0c75546e0beceb9ae02343539dc93f698.scope - libcontainer container 615d2ec1bcfe9cb09b312841207ff4a0c75546e0beceb9ae02343539dc93f698. Jan 30 15:39:41.999901 containerd[1459]: time="2025-01-30T15:39:41.998898452Z" level=info msg="StartContainer for \"615d2ec1bcfe9cb09b312841207ff4a0c75546e0beceb9ae02343539dc93f698\" returns successfully" Jan 30 15:39:42.159307 containerd[1459]: time="2025-01-30T15:39:42.159187625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kng66,Uid:3d1aa63b-7d23-41a2-abda-93a9dc5d0357,Namespace:kube-system,Attempt:0,}" Jan 30 15:39:42.180083 containerd[1459]: time="2025-01-30T15:39:42.179650348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-777rc,Uid:349b10d4-9660-42c3-b4c4-4be02b890682,Namespace:kube-system,Attempt:0,}" Jan 30 15:39:42.227189 containerd[1459]: time="2025-01-30T15:39:42.226944307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kng66,Uid:3d1aa63b-7d23-41a2-abda-93a9dc5d0357,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"693c03adaa017c9781c52081afa6d982976361a61711afb6f1ab120b59741a8f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 15:39:42.227958 kubelet[2548]: E0130 15:39:42.227864 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"693c03adaa017c9781c52081afa6d982976361a61711afb6f1ab120b59741a8f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 15:39:42.228073 kubelet[2548]: E0130 15:39:42.227997 2548 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"693c03adaa017c9781c52081afa6d982976361a61711afb6f1ab120b59741a8f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-kng66" Jan 30 15:39:42.228073 kubelet[2548]: E0130 15:39:42.228044 2548 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"693c03adaa017c9781c52081afa6d982976361a61711afb6f1ab120b59741a8f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-kng66" Jan 30 15:39:42.229805 kubelet[2548]: E0130 15:39:42.228737 2548 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-kng66_kube-system(3d1aa63b-7d23-41a2-abda-93a9dc5d0357)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-kng66_kube-system(3d1aa63b-7d23-41a2-abda-93a9dc5d0357)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"693c03adaa017c9781c52081afa6d982976361a61711afb6f1ab120b59741a8f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-kng66" podUID="3d1aa63b-7d23-41a2-abda-93a9dc5d0357" Jan 30 15:39:42.253043 containerd[1459]: time="2025-01-30T15:39:42.252946858Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-777rc,Uid:349b10d4-9660-42c3-b4c4-4be02b890682,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45fc7ca4fa0195e1581946afb8ebd1966cddab25baeae825d631081edb0c5d25\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 15:39:42.253423 kubelet[2548]: E0130 15:39:42.253362 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45fc7ca4fa0195e1581946afb8ebd1966cddab25baeae825d631081edb0c5d25\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 15:39:42.253526 kubelet[2548]: E0130 15:39:42.253465 2548 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45fc7ca4fa0195e1581946afb8ebd1966cddab25baeae825d631081edb0c5d25\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-777rc" Jan 30 15:39:42.253526 kubelet[2548]: E0130 15:39:42.253510 2548 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45fc7ca4fa0195e1581946afb8ebd1966cddab25baeae825d631081edb0c5d25\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-777rc" Jan 30 15:39:42.253665 kubelet[2548]: E0130 15:39:42.253595 2548 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-777rc_kube-system(349b10d4-9660-42c3-b4c4-4be02b890682)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-6f6b679f8f-777rc_kube-system(349b10d4-9660-42c3-b4c4-4be02b890682)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45fc7ca4fa0195e1581946afb8ebd1966cddab25baeae825d631081edb0c5d25\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-777rc" podUID="349b10d4-9660-42c3-b4c4-4be02b890682" Jan 30 15:39:42.924153 kubelet[2548]: I0130 15:39:42.922576 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-bcmdt" podStartSLOduration=2.398328657 podStartE2EDuration="8.922544998s" podCreationTimestamp="2025-01-30 15:39:34 +0000 UTC" firstStartedPulling="2025-01-30 15:39:34.791841516 +0000 UTC m=+5.086807339" lastFinishedPulling="2025-01-30 15:39:41.316057797 +0000 UTC m=+11.611023680" observedRunningTime="2025-01-30 15:39:42.921391868 +0000 UTC m=+13.216357721" watchObservedRunningTime="2025-01-30 15:39:42.922544998 +0000 UTC m=+13.217510851" Jan 30 15:39:43.104096 systemd-networkd[1377]: flannel.1: Link UP Jan 30 15:39:43.104158 systemd-networkd[1377]: flannel.1: Gained carrier Jan 30 15:39:44.567403 systemd-networkd[1377]: flannel.1: Gained IPv6LL Jan 30 15:39:54.810366 containerd[1459]: time="2025-01-30T15:39:54.810212945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-777rc,Uid:349b10d4-9660-42c3-b4c4-4be02b890682,Namespace:kube-system,Attempt:0,}" Jan 30 15:39:54.861491 systemd-networkd[1377]: cni0: Link UP Jan 30 15:39:54.861512 systemd-networkd[1377]: cni0: Gained carrier Jan 30 15:39:54.873650 systemd-networkd[1377]: cni0: Lost carrier Jan 30 15:39:54.878995 systemd-networkd[1377]: veth588f70af: Link UP Jan 30 15:39:54.891183 kernel: cni0: port 1(veth588f70af) entered blocking state Jan 30 15:39:54.891331 kernel: cni0: port 1(veth588f70af) entered disabled state Jan 30 15:39:54.900139 kernel: veth588f70af: entered allmulticast 
mode Jan 30 15:39:54.902128 kernel: veth588f70af: entered promiscuous mode Jan 30 15:39:54.906442 kernel: cni0: port 1(veth588f70af) entered blocking state Jan 30 15:39:54.907046 kernel: cni0: port 1(veth588f70af) entered forwarding state Jan 30 15:39:54.907392 kernel: cni0: port 1(veth588f70af) entered disabled state Jan 30 15:39:54.922609 kernel: cni0: port 1(veth588f70af) entered blocking state Jan 30 15:39:54.922720 kernel: cni0: port 1(veth588f70af) entered forwarding state Jan 30 15:39:54.922901 systemd-networkd[1377]: veth588f70af: Gained carrier Jan 30 15:39:54.923632 systemd-networkd[1377]: cni0: Gained carrier Jan 30 15:39:54.925366 containerd[1459]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Jan 30 15:39:54.925366 containerd[1459]: delegateAdd: netconf sent to delegate plugin: Jan 30 15:39:54.944573 containerd[1459]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-30T15:39:54.944254048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:39:54.944573 containerd[1459]: time="2025-01-30T15:39:54.944312354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:39:54.944573 containerd[1459]: time="2025-01-30T15:39:54.944332083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:39:54.944573 containerd[1459]: time="2025-01-30T15:39:54.944419937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:39:54.965514 systemd[1]: run-containerd-runc-k8s.io-6e341bd5031852b7f7a0d7964f990020184441ac1e3c0d5f92824e068fec9e0c-runc.a58BSi.mount: Deactivated successfully.
Jan 30 15:39:54.975239 systemd[1]: Started cri-containerd-6e341bd5031852b7f7a0d7964f990020184441ac1e3c0d5f92824e068fec9e0c.scope - libcontainer container 6e341bd5031852b7f7a0d7964f990020184441ac1e3c0d5f92824e068fec9e0c.
Jan 30 15:39:55.018905 containerd[1459]: time="2025-01-30T15:39:55.018849017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-777rc,Uid:349b10d4-9660-42c3-b4c4-4be02b890682,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e341bd5031852b7f7a0d7964f990020184441ac1e3c0d5f92824e068fec9e0c\""
Jan 30 15:39:55.023950 containerd[1459]: time="2025-01-30T15:39:55.023322708Z" level=info msg="CreateContainer within sandbox \"6e341bd5031852b7f7a0d7964f990020184441ac1e3c0d5f92824e068fec9e0c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 15:39:55.040363 containerd[1459]: time="2025-01-30T15:39:55.040269537Z" level=info msg="CreateContainer within sandbox \"6e341bd5031852b7f7a0d7964f990020184441ac1e3c0d5f92824e068fec9e0c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c34ac45f863c6f22d95e498d74274ce9f12a872664741c5ce26105b1c3d01a36\""
Jan 30 15:39:55.042396 containerd[1459]: time="2025-01-30T15:39:55.041152175Z" level=info msg="StartContainer for \"c34ac45f863c6f22d95e498d74274ce9f12a872664741c5ce26105b1c3d01a36\""
Jan 30 15:39:55.070248 systemd[1]: Started cri-containerd-c34ac45f863c6f22d95e498d74274ce9f12a872664741c5ce26105b1c3d01a36.scope - libcontainer container c34ac45f863c6f22d95e498d74274ce9f12a872664741c5ce26105b1c3d01a36.
Jan 30 15:39:55.101751 containerd[1459]: time="2025-01-30T15:39:55.101556643Z" level=info msg="StartContainer for \"c34ac45f863c6f22d95e498d74274ce9f12a872664741c5ce26105b1c3d01a36\" returns successfully"
Jan 30 15:39:55.961546 kubelet[2548]: I0130 15:39:55.961367 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-777rc" podStartSLOduration=21.961329857 podStartE2EDuration="21.961329857s" podCreationTimestamp="2025-01-30 15:39:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:39:55.9599987 +0000 UTC m=+26.254964574" watchObservedRunningTime="2025-01-30 15:39:55.961329857 +0000 UTC m=+26.256295730"
Jan 30 15:39:56.471488 systemd-networkd[1377]: veth588f70af: Gained IPv6LL
Jan 30 15:39:56.791442 systemd-networkd[1377]: cni0: Gained IPv6LL
Jan 30 15:39:56.811362 containerd[1459]: time="2025-01-30T15:39:56.811289177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kng66,Uid:3d1aa63b-7d23-41a2-abda-93a9dc5d0357,Namespace:kube-system,Attempt:0,}"
Jan 30 15:39:56.869430 systemd-networkd[1377]: vethc3537832: Link UP
Jan 30 15:39:56.874818 kernel: cni0: port 2(vethc3537832) entered blocking state
Jan 30 15:39:56.874917 kernel: cni0: port 2(vethc3537832) entered disabled state
Jan 30 15:39:56.874960 kernel: vethc3537832: entered allmulticast mode
Jan 30 15:39:56.879148 kernel: vethc3537832: entered promiscuous mode
Jan 30 15:39:56.900270 kernel: cni0: port 2(vethc3537832) entered blocking state
Jan 30 15:39:56.900408 kernel: cni0: port 2(vethc3537832) entered forwarding state
Jan 30 15:39:56.900819 systemd-networkd[1377]: vethc3537832: Gained carrier
Jan 30 15:39:56.905018 containerd[1459]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"}
Jan 30 15:39:56.905018 containerd[1459]: delegateAdd: netconf sent to delegate plugin:
Jan 30 15:39:56.926529 containerd[1459]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-30T15:39:56.926232072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:39:56.928267 containerd[1459]: time="2025-01-30T15:39:56.928138035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:39:56.928267 containerd[1459]: time="2025-01-30T15:39:56.928168546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:39:56.928392 containerd[1459]: time="2025-01-30T15:39:56.928329323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:39:56.949344 systemd[1]: run-containerd-runc-k8s.io-2d78176d8c66dbd97778c4eb25f10140177cd7458f71d9c583804bd40c3ba514-runc.zs4VAi.mount: Deactivated successfully.
Jan 30 15:39:56.956251 systemd[1]: Started cri-containerd-2d78176d8c66dbd97778c4eb25f10140177cd7458f71d9c583804bd40c3ba514.scope - libcontainer container 2d78176d8c66dbd97778c4eb25f10140177cd7458f71d9c583804bd40c3ba514.
Jan 30 15:39:56.992356 containerd[1459]: time="2025-01-30T15:39:56.992297776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kng66,Uid:3d1aa63b-7d23-41a2-abda-93a9dc5d0357,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d78176d8c66dbd97778c4eb25f10140177cd7458f71d9c583804bd40c3ba514\""
Jan 30 15:39:56.996899 containerd[1459]: time="2025-01-30T15:39:56.996365888Z" level=info msg="CreateContainer within sandbox \"2d78176d8c66dbd97778c4eb25f10140177cd7458f71d9c583804bd40c3ba514\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 15:39:57.020045 containerd[1459]: time="2025-01-30T15:39:57.019980601Z" level=info msg="CreateContainer within sandbox \"2d78176d8c66dbd97778c4eb25f10140177cd7458f71d9c583804bd40c3ba514\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"47c7afc0fb838aef3601f2c07b950d0d6a812a21aeafd2be9f06644419e6c993\""
Jan 30 15:39:57.020797 containerd[1459]: time="2025-01-30T15:39:57.020760523Z" level=info msg="StartContainer for \"47c7afc0fb838aef3601f2c07b950d0d6a812a21aeafd2be9f06644419e6c993\""
Jan 30 15:39:57.048321 systemd[1]: Started cri-containerd-47c7afc0fb838aef3601f2c07b950d0d6a812a21aeafd2be9f06644419e6c993.scope - libcontainer container 47c7afc0fb838aef3601f2c07b950d0d6a812a21aeafd2be9f06644419e6c993.
Jan 30 15:39:57.079440 containerd[1459]: time="2025-01-30T15:39:57.079158477Z" level=info msg="StartContainer for \"47c7afc0fb838aef3601f2c07b950d0d6a812a21aeafd2be9f06644419e6c993\" returns successfully"
Jan 30 15:39:58.005181 kubelet[2548]: I0130 15:39:58.004841 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-kng66" podStartSLOduration=24.004350793 podStartE2EDuration="24.004350793s" podCreationTimestamp="2025-01-30 15:39:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:39:57.977511883 +0000 UTC m=+28.272477776" watchObservedRunningTime="2025-01-30 15:39:58.004350793 +0000 UTC m=+28.299316666"
Jan 30 15:39:58.327626 systemd-networkd[1377]: vethc3537832: Gained IPv6LL
Jan 30 15:40:36.283643 systemd[1]: Started sshd@7-172.24.4.191:22-172.24.4.1:42674.service - OpenSSH per-connection server daemon (172.24.4.1:42674).
Jan 30 15:40:37.517556 sshd[3614]: Accepted publickey for core from 172.24.4.1 port 42674 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:40:37.520349 sshd[3614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:40:37.529783 systemd-logind[1436]: New session 10 of user core.
Jan 30 15:40:37.542456 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 15:40:38.241017 sshd[3614]: pam_unix(sshd:session): session closed for user core
Jan 30 15:40:38.247341 systemd-logind[1436]: Session 10 logged out. Waiting for processes to exit.
Jan 30 15:40:38.247655 systemd[1]: sshd@7-172.24.4.191:22-172.24.4.1:42674.service: Deactivated successfully.
Jan 30 15:40:38.250821 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 15:40:38.256359 systemd-logind[1436]: Removed session 10.
Jan 30 15:40:43.264629 systemd[1]: Started sshd@8-172.24.4.191:22-172.24.4.1:42690.service - OpenSSH per-connection server daemon (172.24.4.1:42690).
Jan 30 15:40:44.641448 sshd[3650]: Accepted publickey for core from 172.24.4.1 port 42690 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:40:44.644234 sshd[3650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:40:44.654848 systemd-logind[1436]: New session 11 of user core.
Jan 30 15:40:44.661431 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 15:40:45.434988 sshd[3650]: pam_unix(sshd:session): session closed for user core
Jan 30 15:40:45.442533 systemd[1]: sshd@8-172.24.4.191:22-172.24.4.1:42690.service: Deactivated successfully.
Jan 30 15:40:45.446480 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 15:40:45.448833 systemd-logind[1436]: Session 11 logged out. Waiting for processes to exit.
Jan 30 15:40:45.451432 systemd-logind[1436]: Removed session 11.
Jan 30 15:40:50.460720 systemd[1]: Started sshd@9-172.24.4.191:22-172.24.4.1:56784.service - OpenSSH per-connection server daemon (172.24.4.1:56784).
Jan 30 15:40:51.599763 sshd[3705]: Accepted publickey for core from 172.24.4.1 port 56784 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:40:51.602393 sshd[3705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:40:51.613576 systemd-logind[1436]: New session 12 of user core.
Jan 30 15:40:51.620456 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 15:40:52.336089 sshd[3705]: pam_unix(sshd:session): session closed for user core
Jan 30 15:40:52.347630 systemd[1]: sshd@9-172.24.4.191:22-172.24.4.1:56784.service: Deactivated successfully.
Jan 30 15:40:52.351277 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 15:40:52.354958 systemd-logind[1436]: Session 12 logged out. Waiting for processes to exit.
Jan 30 15:40:52.368532 systemd[1]: Started sshd@10-172.24.4.191:22-172.24.4.1:56792.service - OpenSSH per-connection server daemon (172.24.4.1:56792).
Jan 30 15:40:52.373337 systemd-logind[1436]: Removed session 12.
Jan 30 15:40:53.606378 sshd[3719]: Accepted publickey for core from 172.24.4.1 port 56792 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:40:53.609039 sshd[3719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:40:53.620167 systemd-logind[1436]: New session 13 of user core.
Jan 30 15:40:53.625460 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 15:40:54.475564 sshd[3719]: pam_unix(sshd:session): session closed for user core
Jan 30 15:40:54.486477 systemd[1]: sshd@10-172.24.4.191:22-172.24.4.1:56792.service: Deactivated successfully.
Jan 30 15:40:54.489897 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 15:40:54.492056 systemd-logind[1436]: Session 13 logged out. Waiting for processes to exit.
Jan 30 15:40:54.495909 systemd-logind[1436]: Removed session 13.
Jan 30 15:40:54.506571 systemd[1]: Started sshd@11-172.24.4.191:22-172.24.4.1:35542.service - OpenSSH per-connection server daemon (172.24.4.1:35542).
Jan 30 15:40:55.670568 sshd[3751]: Accepted publickey for core from 172.24.4.1 port 35542 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:40:55.674465 sshd[3751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:40:55.685631 systemd-logind[1436]: New session 14 of user core.
Jan 30 15:40:55.701911 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 15:40:56.451911 sshd[3751]: pam_unix(sshd:session): session closed for user core
Jan 30 15:40:56.462324 systemd[1]: sshd@11-172.24.4.191:22-172.24.4.1:35542.service: Deactivated successfully.
Jan 30 15:40:56.466552 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 15:40:56.469813 systemd-logind[1436]: Session 14 logged out. Waiting for processes to exit.
Jan 30 15:40:56.472678 systemd-logind[1436]: Removed session 14.
Jan 30 15:41:01.480960 systemd[1]: Started sshd@12-172.24.4.191:22-172.24.4.1:35554.service - OpenSSH per-connection server daemon (172.24.4.1:35554).
Jan 30 15:41:02.663518 sshd[3785]: Accepted publickey for core from 172.24.4.1 port 35554 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:41:02.666299 sshd[3785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:41:02.678059 systemd-logind[1436]: New session 15 of user core.
Jan 30 15:41:02.690654 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 15:41:03.441751 sshd[3785]: pam_unix(sshd:session): session closed for user core
Jan 30 15:41:03.452041 systemd[1]: sshd@12-172.24.4.191:22-172.24.4.1:35554.service: Deactivated successfully.
Jan 30 15:41:03.456840 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 15:41:03.460255 systemd-logind[1436]: Session 15 logged out. Waiting for processes to exit.
Jan 30 15:41:03.468789 systemd[1]: Started sshd@13-172.24.4.191:22-172.24.4.1:35566.service - OpenSSH per-connection server daemon (172.24.4.1:35566).
Jan 30 15:41:03.472137 systemd-logind[1436]: Removed session 15.
Jan 30 15:41:04.693029 sshd[3797]: Accepted publickey for core from 172.24.4.1 port 35566 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:41:04.696859 sshd[3797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:41:04.708272 systemd-logind[1436]: New session 16 of user core.
Jan 30 15:41:04.718953 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 15:41:05.471300 sshd[3797]: pam_unix(sshd:session): session closed for user core
Jan 30 15:41:05.481297 systemd[1]: sshd@13-172.24.4.191:22-172.24.4.1:35566.service: Deactivated successfully.
Jan 30 15:41:05.484306 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 15:41:05.487490 systemd-logind[1436]: Session 16 logged out. Waiting for processes to exit.
Jan 30 15:41:05.495768 systemd[1]: Started sshd@14-172.24.4.191:22-172.24.4.1:58920.service - OpenSSH per-connection server daemon (172.24.4.1:58920).
Jan 30 15:41:05.500255 systemd-logind[1436]: Removed session 16.
Jan 30 15:41:06.759801 sshd[3829]: Accepted publickey for core from 172.24.4.1 port 58920 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:41:06.763199 sshd[3829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:41:06.772287 systemd-logind[1436]: New session 17 of user core.
Jan 30 15:41:06.779466 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 15:41:09.241351 sshd[3829]: pam_unix(sshd:session): session closed for user core
Jan 30 15:41:09.253942 systemd[1]: sshd@14-172.24.4.191:22-172.24.4.1:58920.service: Deactivated successfully.
Jan 30 15:41:09.258782 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 15:41:09.261022 systemd-logind[1436]: Session 17 logged out. Waiting for processes to exit.
Jan 30 15:41:09.270770 systemd[1]: Started sshd@15-172.24.4.191:22-172.24.4.1:58934.service - OpenSSH per-connection server daemon (172.24.4.1:58934).
Jan 30 15:41:09.274229 systemd-logind[1436]: Removed session 17.
Jan 30 15:41:10.351588 sshd[3870]: Accepted publickey for core from 172.24.4.1 port 58934 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:41:10.355707 sshd[3870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:41:10.366923 systemd-logind[1436]: New session 18 of user core.
Jan 30 15:41:10.374440 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 15:41:11.607451 sshd[3870]: pam_unix(sshd:session): session closed for user core
Jan 30 15:41:11.620328 systemd[1]: sshd@15-172.24.4.191:22-172.24.4.1:58934.service: Deactivated successfully.
Jan 30 15:41:11.625710 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 15:41:11.630277 systemd-logind[1436]: Session 18 logged out. Waiting for processes to exit.
Jan 30 15:41:11.639752 systemd[1]: Started sshd@16-172.24.4.191:22-172.24.4.1:58946.service - OpenSSH per-connection server daemon (172.24.4.1:58946).
Jan 30 15:41:11.642952 systemd-logind[1436]: Removed session 18.
Jan 30 15:41:12.813481 sshd[3881]: Accepted publickey for core from 172.24.4.1 port 58946 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:41:12.816291 sshd[3881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:41:12.826164 systemd-logind[1436]: New session 19 of user core.
Jan 30 15:41:12.833590 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 15:41:13.421663 sshd[3881]: pam_unix(sshd:session): session closed for user core
Jan 30 15:41:13.427669 systemd[1]: sshd@16-172.24.4.191:22-172.24.4.1:58946.service: Deactivated successfully.
Jan 30 15:41:13.432958 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 15:41:13.437993 systemd-logind[1436]: Session 19 logged out. Waiting for processes to exit.
Jan 30 15:41:13.440629 systemd-logind[1436]: Removed session 19.
Jan 30 15:41:18.442723 systemd[1]: Started sshd@17-172.24.4.191:22-172.24.4.1:34316.service - OpenSSH per-connection server daemon (172.24.4.1:34316).
Jan 30 15:41:19.629548 sshd[3918]: Accepted publickey for core from 172.24.4.1 port 34316 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:41:19.632353 sshd[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:41:19.641706 systemd-logind[1436]: New session 20 of user core.
Jan 30 15:41:19.651444 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 15:41:20.365499 sshd[3918]: pam_unix(sshd:session): session closed for user core
Jan 30 15:41:20.371216 systemd[1]: sshd@17-172.24.4.191:22-172.24.4.1:34316.service: Deactivated successfully.
Jan 30 15:41:20.375519 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 15:41:20.379088 systemd-logind[1436]: Session 20 logged out. Waiting for processes to exit.
Jan 30 15:41:20.381309 systemd-logind[1436]: Removed session 20.
Jan 30 15:41:25.386747 systemd[1]: Started sshd@18-172.24.4.191:22-172.24.4.1:60768.service - OpenSSH per-connection server daemon (172.24.4.1:60768).
Jan 30 15:41:26.532427 sshd[3972]: Accepted publickey for core from 172.24.4.1 port 60768 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:41:26.535166 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:41:26.546273 systemd-logind[1436]: New session 21 of user core.
Jan 30 15:41:26.551902 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 15:41:27.266792 sshd[3972]: pam_unix(sshd:session): session closed for user core
Jan 30 15:41:27.272535 systemd[1]: sshd@18-172.24.4.191:22-172.24.4.1:60768.service: Deactivated successfully.
Jan 30 15:41:27.276938 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 15:41:27.281880 systemd-logind[1436]: Session 21 logged out. Waiting for processes to exit.
Jan 30 15:41:27.284739 systemd-logind[1436]: Removed session 21.
Jan 30 15:41:32.286841 systemd[1]: Started sshd@19-172.24.4.191:22-172.24.4.1:60780.service - OpenSSH per-connection server daemon (172.24.4.1:60780).
Jan 30 15:41:33.407545 sshd[4009]: Accepted publickey for core from 172.24.4.1 port 60780 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:41:33.410393 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:41:33.422682 systemd-logind[1436]: New session 22 of user core.
Jan 30 15:41:33.432484 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 15:41:34.161223 sshd[4009]: pam_unix(sshd:session): session closed for user core
Jan 30 15:41:34.169439 systemd-logind[1436]: Session 22 logged out. Waiting for processes to exit.
Jan 30 15:41:34.171427 systemd[1]: sshd@19-172.24.4.191:22-172.24.4.1:60780.service: Deactivated successfully.
Jan 30 15:41:34.180510 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 15:41:34.184586 systemd-logind[1436]: Removed session 22.