Jan 29 12:07:01.099636 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025 Jan 29 12:07:01.099659 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 12:07:01.099669 kernel: BIOS-provided physical RAM map: Jan 29 12:07:01.099676 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 12:07:01.099683 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 12:07:01.099692 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 12:07:01.099701 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 29 12:07:01.099708 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 29 12:07:01.099716 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 12:07:01.099723 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 12:07:01.099730 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 29 12:07:01.099737 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 12:07:01.099744 kernel: NX (Execute Disable) protection: active Jan 29 12:07:01.099753 kernel: APIC: Static calls initialized Jan 29 12:07:01.099762 kernel: SMBIOS 3.0.0 present. Jan 29 12:07:01.099770 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 29 12:07:01.099778 kernel: Hypervisor detected: KVM Jan 29 12:07:01.099785 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 12:07:01.099793 kernel: kvm-clock: using sched offset of 3421843844 cycles Jan 29 12:07:01.099802 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 12:07:01.099810 kernel: tsc: Detected 1996.249 MHz processor Jan 29 12:07:01.099818 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 12:07:01.099827 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 12:07:01.099834 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 29 12:07:01.099843 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 12:07:01.099850 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 12:07:01.099858 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 29 12:07:01.099866 kernel: ACPI: Early table checksum verification disabled Jan 29 12:07:01.099875 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 29 12:07:01.099883 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:07:01.099891 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:07:01.099899 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:07:01.099906 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 29 12:07:01.099914 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:07:01.099922 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:07:01.099930 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 29 12:07:01.099939 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 29 12:07:01.099947 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 29 12:07:01.099954 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 29 12:07:01.099963 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 29 12:07:01.099973 kernel: No NUMA configuration found Jan 29 12:07:01.099981 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 29 12:07:01.099990 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jan 29 12:07:01.099999 kernel: Zone ranges: Jan 29 12:07:01.100007 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 12:07:01.100015 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 12:07:01.100024 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 29 12:07:01.100032 kernel: Movable zone start for each node Jan 29 12:07:01.100040 kernel: Early memory node ranges Jan 29 12:07:01.100048 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 12:07:01.100056 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 29 12:07:01.100065 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 29 12:07:01.100073 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 29 12:07:01.100082 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 12:07:01.100090 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 12:07:01.100098 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 29 12:07:01.100106 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 12:07:01.100133 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 12:07:01.100141 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 12:07:01.100149 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 12:07:01.100160 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 12:07:01.100168 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 12:07:01.100176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 12:07:01.100184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 12:07:01.100192 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 12:07:01.100200 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 12:07:01.100208 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 12:07:01.100216 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 29 12:07:01.100225 kernel: Booting paravirtualized kernel on KVM Jan 29 12:07:01.100235 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 12:07:01.100243 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 12:07:01.100251 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 12:07:01.100259 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 12:07:01.100267 kernel: pcpu-alloc: [0] 0 1 Jan 29 12:07:01.100275 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 29 12:07:01.100285 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 12:07:01.100293 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 12:07:01.100303 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 12:07:01.100312 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 12:07:01.100320 kernel: Fallback order for Node 0: 0 Jan 29 12:07:01.100328 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 29 12:07:01.100336 kernel: Policy zone: Normal Jan 29 12:07:01.100344 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 12:07:01.100352 kernel: software IO TLB: area num 2. Jan 29 12:07:01.100361 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 227308K reserved, 0K cma-reserved) Jan 29 12:07:01.100369 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 12:07:01.100379 kernel: ftrace: allocating 37923 entries in 149 pages Jan 29 12:07:01.100387 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 12:07:01.100395 kernel: Dynamic Preempt: voluntary Jan 29 12:07:01.100403 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 12:07:01.100412 kernel: rcu: RCU event tracing is enabled. Jan 29 12:07:01.100420 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 12:07:01.100429 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 12:07:01.100437 kernel: Rude variant of Tasks RCU enabled. Jan 29 12:07:01.100445 kernel: Tracing variant of Tasks RCU enabled. Jan 29 12:07:01.100455 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 12:07:01.100463 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 12:07:01.100471 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 12:07:01.100479 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 12:07:01.100488 kernel: Console: colour VGA+ 80x25 Jan 29 12:07:01.100496 kernel: printk: console [tty0] enabled Jan 29 12:07:01.100504 kernel: printk: console [ttyS0] enabled Jan 29 12:07:01.100512 kernel: ACPI: Core revision 20230628 Jan 29 12:07:01.100520 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 12:07:01.100528 kernel: x2apic enabled Jan 29 12:07:01.100538 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 12:07:01.100546 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 12:07:01.100554 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 12:07:01.100562 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 29 12:07:01.100570 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 29 12:07:01.100578 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 29 12:07:01.100587 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 12:07:01.100595 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 12:07:01.100603 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 12:07:01.100613 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 12:07:01.100621 kernel: Speculative Store Bypass: Vulnerable Jan 29 12:07:01.100629 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 29 12:07:01.100637 kernel: Freeing SMP alternatives memory: 32K Jan 29 12:07:01.100651 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:07:01.100661 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:07:01.100670 kernel: landlock: Up and running. Jan 29 12:07:01.100678 kernel: SELinux: Initializing. Jan 29 12:07:01.100687 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:07:01.100696 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:07:01.100704 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 29 12:07:01.100715 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:07:01.100724 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:07:01.100733 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:07:01.100741 kernel: Performance Events: AMD PMU driver. Jan 29 12:07:01.100750 kernel: ... version: 0 Jan 29 12:07:01.100760 kernel: ... bit width: 48 Jan 29 12:07:01.100769 kernel: ... generic registers: 4 Jan 29 12:07:01.100777 kernel: ... value mask: 0000ffffffffffff Jan 29 12:07:01.100786 kernel: ... max period: 00007fffffffffff Jan 29 12:07:01.100794 kernel: ... fixed-purpose events: 0 Jan 29 12:07:01.100803 kernel: ... event mask: 000000000000000f Jan 29 12:07:01.100811 kernel: signal: max sigframe size: 1440 Jan 29 12:07:01.100820 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:07:01.100828 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:07:01.100838 kernel: smp: Bringing up secondary CPUs ... Jan 29 12:07:01.100847 kernel: smpboot: x86: Booting SMP configuration: Jan 29 12:07:01.100855 kernel: .... 
node #0, CPUs: #1 Jan 29 12:07:01.100864 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 12:07:01.100872 kernel: smpboot: Max logical packages: 2 Jan 29 12:07:01.100881 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 29 12:07:01.100889 kernel: devtmpfs: initialized Jan 29 12:07:01.100898 kernel: x86/mm: Memory block size: 128MB Jan 29 12:07:01.100907 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:07:01.100917 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 12:07:01.100926 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:07:01.100934 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:07:01.100943 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:07:01.100951 kernel: audit: type=2000 audit(1738152420.375:1): state=initialized audit_enabled=0 res=1 Jan 29 12:07:01.100960 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:07:01.100968 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 12:07:01.100977 kernel: cpuidle: using governor menu Jan 29 12:07:01.100985 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:07:01.100995 kernel: dca service started, version 1.12.1 Jan 29 12:07:01.101004 kernel: PCI: Using configuration type 1 for base access Jan 29 12:07:01.101013 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 29 12:07:01.101021 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:07:01.101030 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:07:01.101038 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:07:01.101047 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:07:01.101055 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:07:01.101064 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:07:01.101075 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 12:07:01.101083 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 12:07:01.101092 kernel: ACPI: Interpreter enabled Jan 29 12:07:01.101100 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 12:07:01.102213 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 12:07:01.102227 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 12:07:01.102236 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 12:07:01.102245 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 29 12:07:01.102254 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 12:07:01.102385 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 12:07:01.102487 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 12:07:01.102580 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 12:07:01.102594 kernel: acpiphp: Slot [3] registered Jan 29 12:07:01.102603 kernel: acpiphp: Slot [4] registered Jan 29 12:07:01.102612 kernel: acpiphp: Slot [5] registered Jan 29 12:07:01.102620 kernel: acpiphp: Slot [6] registered Jan 29 12:07:01.102629 kernel: acpiphp: Slot [7] registered Jan 29 12:07:01.102641 kernel: acpiphp: Slot [8] registered Jan 29 12:07:01.102649 kernel: acpiphp: Slot [9] registered Jan 29 12:07:01.102657 kernel: acpiphp: Slot [10] registered Jan 29 12:07:01.102666 
kernel: acpiphp: Slot [11] registered Jan 29 12:07:01.102674 kernel: acpiphp: Slot [12] registered Jan 29 12:07:01.102683 kernel: acpiphp: Slot [13] registered Jan 29 12:07:01.102691 kernel: acpiphp: Slot [14] registered Jan 29 12:07:01.102700 kernel: acpiphp: Slot [15] registered Jan 29 12:07:01.102708 kernel: acpiphp: Slot [16] registered Jan 29 12:07:01.102718 kernel: acpiphp: Slot [17] registered Jan 29 12:07:01.102726 kernel: acpiphp: Slot [18] registered Jan 29 12:07:01.102735 kernel: acpiphp: Slot [19] registered Jan 29 12:07:01.102743 kernel: acpiphp: Slot [20] registered Jan 29 12:07:01.102751 kernel: acpiphp: Slot [21] registered Jan 29 12:07:01.102760 kernel: acpiphp: Slot [22] registered Jan 29 12:07:01.102768 kernel: acpiphp: Slot [23] registered Jan 29 12:07:01.102776 kernel: acpiphp: Slot [24] registered Jan 29 12:07:01.102785 kernel: acpiphp: Slot [25] registered Jan 29 12:07:01.102793 kernel: acpiphp: Slot [26] registered Jan 29 12:07:01.102803 kernel: acpiphp: Slot [27] registered Jan 29 12:07:01.102812 kernel: acpiphp: Slot [28] registered Jan 29 12:07:01.102820 kernel: acpiphp: Slot [29] registered Jan 29 12:07:01.102828 kernel: acpiphp: Slot [30] registered Jan 29 12:07:01.102836 kernel: acpiphp: Slot [31] registered Jan 29 12:07:01.102845 kernel: PCI host bridge to bus 0000:00 Jan 29 12:07:01.102935 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 12:07:01.103018 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 12:07:01.103105 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 12:07:01.103213 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 12:07:01.103293 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 29 12:07:01.103373 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 12:07:01.103477 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 12:07:01.103576 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 29 12:07:01.103680 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 29 12:07:01.103771 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 29 12:07:01.103861 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 29 12:07:01.103950 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 29 12:07:01.104040 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 29 12:07:01.106770 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 29 12:07:01.106880 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 29 12:07:01.106977 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 29 12:07:01.107067 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 29 12:07:01.107216 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 29 12:07:01.107310 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 29 12:07:01.107400 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 29 12:07:01.107490 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 29 12:07:01.107579 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 29 12:07:01.107673 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 12:07:01.107770 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 29 12:07:01.107861 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 29 12:07:01.107951 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 29 12:07:01.108040 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 29 12:07:01.108158 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 29 12:07:01.108352 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 29 12:07:01.108504 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 12:07:01.108612 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 29 12:07:01.108705 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 29 12:07:01.108803 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 29 12:07:01.108926 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 29 12:07:01.109022 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 29 12:07:01.109140 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 12:07:01.109241 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 29 12:07:01.109331 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 29 12:07:01.109421 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 29 12:07:01.109434 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 12:07:01.109443 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 12:07:01.109452 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 12:07:01.109461 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 12:07:01.109473 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 12:07:01.109481 kernel: iommu: Default domain type: Translated Jan 29 12:07:01.109490 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 12:07:01.109498 kernel: PCI: Using ACPI for IRQ routing Jan 29 12:07:01.109507 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 12:07:01.109515 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 12:07:01.109524 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 29 12:07:01.109614 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 29 12:07:01.109704 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 29 12:07:01.109800 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 12:07:01.109814 kernel: vgaarb: loaded Jan 29 12:07:01.109823 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 12:07:01.109831 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 12:07:01.109840 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:07:01.109849 kernel: pnp: PnP ACPI init Jan 29 12:07:01.109954 kernel: pnp 00:03: [dma 2] Jan 29 12:07:01.109969 kernel: pnp: PnP ACPI: found 5 devices Jan 29 12:07:01.109978 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 12:07:01.109991 kernel: NET: Registered PF_INET protocol family Jan 29 12:07:01.109999 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:07:01.110008 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 12:07:01.110017 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:07:01.110026 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 12:07:01.110035 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 29 12:07:01.110043 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 12:07:01.110052 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:07:01.110062 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:07:01.110071 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:07:01.110080 kernel: NET: Registered PF_XDP protocol family Jan 29 12:07:01.110187 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 12:07:01.110273 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 12:07:01.110357 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 12:07:01.110441 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 29 12:07:01.110525 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 29 12:07:01.110624 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 29 12:07:01.110739 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 12:07:01.110754 kernel: PCI: CLS 0 bytes, default 64 Jan 29 12:07:01.110764 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 12:07:01.110773 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 29 12:07:01.110783 kernel: Initialise system trusted keyrings Jan 29 12:07:01.110793 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 12:07:01.110802 kernel: Key type asymmetric registered Jan 29 12:07:01.110811 kernel: Asymmetric key parser 'x509' registered Jan 29 12:07:01.110824 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:07:01.110833 kernel: io scheduler mq-deadline registered Jan 29 12:07:01.110842 kernel: io scheduler kyber registered Jan 29 12:07:01.110852 kernel: io scheduler bfq registered Jan 29 12:07:01.110861 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 12:07:01.110871 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 29 12:07:01.110881 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 29 12:07:01.110890 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 29 12:07:01.110899 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 29 12:07:01.110911 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:07:01.110921 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 12:07:01.110930 kernel: random: crng init done Jan 29 12:07:01.110939 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 12:07:01.110949 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 12:07:01.110958 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 12:07:01.111060 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 12:07:01.111075 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 12:07:01.111189 kernel: rtc_cmos 00:04: registered as rtc0 Jan 29 12:07:01.111281 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T12:07:00 UTC (1738152420) Jan 29 12:07:01.111363 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 29 12:07:01.111376 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 12:07:01.111385 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:07:01.111393 kernel: Segment Routing with IPv6 Jan 29 12:07:01.111402 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:07:01.111411 kernel: NET: Registered PF_PACKET 
protocol family Jan 29 12:07:01.111420 kernel: Key type dns_resolver registered Jan 29 12:07:01.111432 kernel: IPI shorthand broadcast: enabled Jan 29 12:07:01.111440 kernel: sched_clock: Marking stable (996007397, 172646336)->(1200263715, -31609982) Jan 29 12:07:01.111449 kernel: registered taskstats version 1 Jan 29 12:07:01.111458 kernel: Loading compiled-in X.509 certificates Jan 29 12:07:01.111468 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55' Jan 29 12:07:01.111478 kernel: Key type .fscrypt registered Jan 29 12:07:01.111487 kernel: Key type fscrypt-provisioning registered Jan 29 12:07:01.111497 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 12:07:01.111508 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:07:01.111518 kernel: ima: No architecture policies found Jan 29 12:07:01.111527 kernel: clk: Disabling unused clocks Jan 29 12:07:01.111536 kernel: Freeing unused kernel image (initmem) memory: 42972K Jan 29 12:07:01.111545 kernel: Write protecting the kernel read-only data: 36864k Jan 29 12:07:01.111555 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 29 12:07:01.111564 kernel: Run /init as init process Jan 29 12:07:01.111573 kernel: with arguments: Jan 29 12:07:01.111582 kernel: /init Jan 29 12:07:01.111591 kernel: with environment: Jan 29 12:07:01.111602 kernel: HOME=/ Jan 29 12:07:01.111611 kernel: TERM=linux Jan 29 12:07:01.111620 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:07:01.111632 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:07:01.111644 systemd[1]: Detected virtualization kvm. Jan 29 12:07:01.111655 systemd[1]: Detected architecture x86-64. Jan 29 12:07:01.111665 systemd[1]: Running in initrd. Jan 29 12:07:01.111677 systemd[1]: No hostname configured, using default hostname. Jan 29 12:07:01.111687 systemd[1]: Hostname set to . Jan 29 12:07:01.111697 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:07:01.111707 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:07:01.111717 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:07:01.111727 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:07:01.111738 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:07:01.111757 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:07:01.111769 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:07:01.111780 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:07:01.111792 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:07:01.111802 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:07:01.111815 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 29 12:07:01.111825 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:07:01.111836 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:07:01.111846 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:07:01.111856 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:07:01.111867 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:07:01.111877 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:07:01.111887 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:07:01.111897 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:07:01.111910 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:07:01.111920 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:07:01.111930 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:07:01.111941 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:07:01.111951 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:07:01.111961 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:07:01.111971 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:07:01.111981 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:07:01.111992 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:07:01.112004 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:07:01.112014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:07:01.112025 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:07:01.112035 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:07:01.112045 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:07:01.112056 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:07:01.112068 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:07:01.112097 systemd-journald[185]: Collecting audit messages is disabled. Jan 29 12:07:01.112139 systemd-journald[185]: Journal started Jan 29 12:07:01.112162 systemd-journald[185]: Runtime Journal (/run/log/journal/2cd786e37315468285d92610a60215be) is 8.0M, max 78.3M, 70.3M free. Jan 29 12:07:01.108555 systemd-modules-load[186]: Inserted module 'overlay' Jan 29 12:07:01.126144 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:07:01.127006 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:07:01.136289 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:07:01.142611 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:07:01.147791 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 12:07:01.150224 kernel: Bridge firewalling registered Jan 29 12:07:01.148922 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 29 12:07:01.150212 systemd-modules-load[186]: Inserted module 'br_netfilter' Jan 29 12:07:01.162548 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:07:01.163508 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:07:01.165692 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:07:01.166494 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:07:01.168261 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:07:01.173260 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 12:07:01.175243 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:07:01.189638 dracut-cmdline[215]: dracut-dracut-053 Jan 29 12:07:01.191551 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:07:01.197884 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 12:07:01.203539 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:07:01.235721 systemd-resolved[234]: Positive Trust Anchors: Jan 29 12:07:01.236516 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:07:01.237331 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:07:01.240013 systemd-resolved[234]: Defaulting to hostname 'linux'. Jan 29 12:07:01.240843 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:07:01.243084 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:07:01.270143 kernel: SCSI subsystem initialized Jan 29 12:07:01.280169 kernel: Loading iSCSI transport class v2.0-870. Jan 29 12:07:01.292434 kernel: iscsi: registered transport (tcp) Jan 29 12:07:01.314849 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:07:01.314909 kernel: QLogic iSCSI HBA Driver Jan 29 12:07:01.368620 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 12:07:01.374444 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:07:01.423292 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 12:07:01.423397 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:07:01.424676 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:07:01.471225 kernel: raid6: sse2x4 gen() 13145 MB/s Jan 29 12:07:01.489221 kernel: raid6: sse2x2 gen() 14881 MB/s Jan 29 12:07:01.507557 kernel: raid6: sse2x1 gen() 9903 MB/s Jan 29 12:07:01.507623 kernel: raid6: using algorithm sse2x2 gen() 14881 MB/s Jan 29 12:07:01.526680 kernel: raid6: .... xor() 9221 MB/s, rmw enabled Jan 29 12:07:01.526745 kernel: raid6: using ssse3x2 recovery algorithm Jan 29 12:07:01.549624 kernel: xor: measuring software checksum speed Jan 29 12:07:01.549716 kernel: prefetch64-sse : 17279 MB/sec Jan 29 12:07:01.550190 kernel: generic_sse : 13981 MB/sec Jan 29 12:07:01.551415 kernel: xor: using function: prefetch64-sse (17279 MB/sec) Jan 29 12:07:01.757171 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:07:01.771025 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:07:01.777371 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:07:01.790312 systemd-udevd[404]: Using default interface naming scheme 'v255'. Jan 29 12:07:01.794602 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:07:01.805383 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:07:01.828758 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Jan 29 12:07:01.864765 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:07:01.869366 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:07:01.933574 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:07:01.938291 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 12:07:01.957165 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:07:01.960874 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:07:01.961639 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:07:01.963351 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:07:01.969547 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:07:01.986428 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:07:02.002158 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 29 12:07:02.043824 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 29 12:07:02.043943 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 12:07:02.043957 kernel: GPT:17805311 != 20971519 Jan 29 12:07:02.043969 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 12:07:02.043980 kernel: GPT:17805311 != 20971519 Jan 29 12:07:02.043997 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 12:07:02.044007 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:07:02.038888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:07:02.039014 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:07:02.039705 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:07:02.048853 kernel: libata version 3.00 loaded. 
Jan 29 12:07:02.040245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:07:02.040361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:07:02.040875 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:07:02.046385 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:07:02.052727 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 29 12:07:02.065797 kernel: scsi host0: ata_piix Jan 29 12:07:02.066081 kernel: scsi host1: ata_piix Jan 29 12:07:02.066354 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 29 12:07:02.066370 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 29 12:07:02.089137 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (462) Jan 29 12:07:02.096501 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 12:07:02.120908 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (460) Jan 29 12:07:02.121743 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:07:02.131866 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 12:07:02.136418 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 12:07:02.137000 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 12:07:02.143294 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:07:02.153280 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:07:02.155858 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:07:02.167059 disk-uuid[509]: Primary Header is updated. Jan 29 12:07:02.167059 disk-uuid[509]: Secondary Entries is updated. Jan 29 12:07:02.167059 disk-uuid[509]: Secondary Header is updated. Jan 29 12:07:02.177136 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:07:02.186177 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:07:03.198207 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:07:03.200711 disk-uuid[510]: The operation has completed successfully. Jan 29 12:07:03.273369 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:07:03.273590 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:07:03.302242 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:07:03.320374 sh[529]: Success Jan 29 12:07:03.360173 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 29 12:07:03.459987 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:07:03.478387 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:07:03.483514 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 12:07:03.507255 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 Jan 29 12:07:03.507328 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:07:03.507359 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:07:03.509424 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:07:03.511015 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:07:03.531599 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:07:03.534014 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:07:03.546526 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 12:07:03.552414 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:07:03.579238 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 12:07:03.579300 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:07:03.579330 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:07:03.587231 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:07:03.607438 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 12:07:03.612668 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 12:07:03.625017 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:07:03.637313 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:07:03.693710 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:07:03.701332 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:07:03.725408 systemd-networkd[712]: lo: Link UP Jan 29 12:07:03.725418 systemd-networkd[712]: lo: Gained carrier Jan 29 12:07:03.726616 systemd-networkd[712]: Enumeration completed Jan 29 12:07:03.726708 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:07:03.727780 systemd-networkd[712]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:07:03.727784 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:07:03.728769 systemd-networkd[712]: eth0: Link UP Jan 29 12:07:03.728773 systemd-networkd[712]: eth0: Gained carrier Jan 29 12:07:03.728779 systemd-networkd[712]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:07:03.729246 systemd[1]: Reached target network.target - Network. Jan 29 12:07:03.752720 systemd-networkd[712]: eth0: DHCPv4 address 172.24.4.127/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 29 12:07:03.793819 ignition[634]: Ignition 2.20.0 Jan 29 12:07:03.793834 ignition[634]: Stage: fetch-offline Jan 29 12:07:03.793921 ignition[634]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:07:03.795544 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 12:07:03.793931 ignition[634]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:07:03.794031 ignition[634]: parsed url from cmdline: "" Jan 29 12:07:03.794036 ignition[634]: no config URL provided Jan 29 12:07:03.794042 ignition[634]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:07:03.794050 ignition[634]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:07:03.794055 ignition[634]: failed to fetch config: resource requires networking Jan 29 12:07:03.794301 ignition[634]: Ignition finished successfully Jan 29 12:07:03.803280 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 12:07:03.815093 ignition[720]: Ignition 2.20.0 Jan 29 12:07:03.815106 ignition[720]: Stage: fetch Jan 29 12:07:03.815328 ignition[720]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:07:03.815340 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:07:03.815437 ignition[720]: parsed url from cmdline: "" Jan 29 12:07:03.815441 ignition[720]: no config URL provided Jan 29 12:07:03.815447 ignition[720]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:07:03.815455 ignition[720]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:07:03.815536 ignition[720]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 29 12:07:03.815719 ignition[720]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 29 12:07:03.815746 ignition[720]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 29 12:07:03.991076 ignition[720]: GET result: OK Jan 29 12:07:03.991264 ignition[720]: parsing config with SHA512: 3f4a437febea6be3eceb432a3dcd3170875f32a26fe71d8632ff145a5c8652d26e187d46d22679ce6829b907b92affcf83f92c5fcf0364b060bdb5dc21ef759b Jan 29 12:07:04.000253 unknown[720]: fetched base config from "system" Jan 29 12:07:04.000274 unknown[720]: fetched base config from "system" Jan 29 12:07:04.001060 ignition[720]: fetch: fetch complete Jan 29 12:07:04.000286 unknown[720]: fetched user config from "openstack" Jan 29 12:07:04.001070 ignition[720]: fetch: fetch passed Jan 29 12:07:04.004520 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 12:07:04.001474 ignition[720]: Ignition finished successfully Jan 29 12:07:04.013427 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 12:07:04.057702 ignition[726]: Ignition 2.20.0 Jan 29 12:07:04.057738 ignition[726]: Stage: kargs Jan 29 12:07:04.058261 ignition[726]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:07:04.058291 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:07:04.063178 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:07:04.060784 ignition[726]: kargs: kargs passed Jan 29 12:07:04.060889 ignition[726]: Ignition finished successfully Jan 29 12:07:04.075440 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 12:07:04.108405 ignition[733]: Ignition 2.20.0 Jan 29 12:07:04.110158 ignition[733]: Stage: disks Jan 29 12:07:04.110560 ignition[733]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:07:04.110585 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:07:04.117320 ignition[733]: disks: disks passed Jan 29 12:07:04.118745 ignition[733]: Ignition finished successfully Jan 29 12:07:04.121835 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 29 12:07:04.125548 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:07:04.127036 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:07:04.130093 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:07:04.133321 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:07:04.136014 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:07:04.149406 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:07:04.180537 systemd-fsck[741]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 12:07:04.192960 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 12:07:04.201399 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:07:04.378188 kernel: EXT4-fs (vda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none. Jan 29 12:07:04.378512 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:07:04.379555 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 12:07:04.390184 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:07:04.393534 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:07:04.397050 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 12:07:04.401340 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 29 12:07:04.404396 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:07:04.427075 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (749) Jan 29 12:07:04.427171 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 12:07:04.427206 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:07:04.427235 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:07:04.427275 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:07:04.404429 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:07:04.407529 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 12:07:04.432440 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 12:07:04.447220 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:07:04.556507 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:07:04.568399 initrd-setup-root[784]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:07:04.576379 initrd-setup-root[791]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:07:04.580873 initrd-setup-root[799]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:07:04.673891 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:07:04.677207 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:07:04.680241 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 12:07:04.688051 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 29 12:07:04.690716 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 12:07:04.713745 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 12:07:04.716101 ignition[867]: INFO : Ignition 2.20.0 Jan 29 12:07:04.716101 ignition[867]: INFO : Stage: mount Jan 29 12:07:04.717345 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:07:04.717345 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:07:04.717345 ignition[867]: INFO : mount: mount passed Jan 29 12:07:04.719908 ignition[867]: INFO : Ignition finished successfully Jan 29 12:07:04.718408 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:07:05.580672 systemd-networkd[712]: eth0: Gained IPv6LL Jan 29 12:07:11.659679 coreos-metadata[751]: Jan 29 12:07:11.659 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:07:11.701005 coreos-metadata[751]: Jan 29 12:07:11.700 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 12:07:11.716081 coreos-metadata[751]: Jan 29 12:07:11.715 INFO Fetch successful Jan 29 12:07:11.716081 coreos-metadata[751]: Jan 29 12:07:11.716 INFO wrote hostname ci-4152-2-0-f-33fdf09c6c.novalocal to /sysroot/etc/hostname Jan 29 12:07:11.719790 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 29 12:07:11.719974 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 29 12:07:11.732329 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:07:11.756755 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:07:11.773163 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (884) Jan 29 12:07:11.783165 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 12:07:11.783242 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:07:11.785785 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:07:11.800192 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:07:11.806590 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 12:07:11.849971 ignition[902]: INFO : Ignition 2.20.0 Jan 29 12:07:11.849971 ignition[902]: INFO : Stage: files Jan 29 12:07:11.852821 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:07:11.852821 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:07:11.852821 ignition[902]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:07:11.858510 ignition[902]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:07:11.858510 ignition[902]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:07:11.862543 ignition[902]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:07:11.862543 ignition[902]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:07:11.862543 ignition[902]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:07:11.862100 unknown[902]: wrote ssh authorized keys file for user: core Jan 29 12:07:11.869941 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 12:07:11.869941 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 12:07:11.869941 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:07:11.869941 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 12:07:11.930669 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 12:07:12.220051 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:07:12.220051 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:07:12.220051 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:07:12.220051 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:07:12.230353 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:07:12.230353 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:07:12.230353 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:07:12.230353 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:07:12.230353 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:07:12.230353 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:07:12.230353 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:07:12.230353 ignition[902]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:07:12.230353 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:07:12.230353 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:07:12.230353 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 12:07:12.763517 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 12:07:14.377884 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:07:14.377884 ignition[902]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 29 12:07:14.383105 ignition[902]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 12:07:14.383105 ignition[902]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 12:07:14.383105 ignition[902]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 29 12:07:14.383105 ignition[902]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 29 12:07:14.383105 ignition[902]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:07:14.383105 ignition[902]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:07:14.383105 ignition[902]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 29 12:07:14.383105 ignition[902]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 29 12:07:14.383105 ignition[902]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 12:07:14.383105 ignition[902]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:07:14.383105 ignition[902]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:07:14.383105 ignition[902]: INFO : files: files passed Jan 29 12:07:14.383105 ignition[902]: INFO : Ignition finished successfully Jan 29 12:07:14.381516 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 12:07:14.391448 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 12:07:14.395298 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:07:14.404831 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 29 12:07:14.420421 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:07:14.420421 initrd-setup-root-after-ignition[930]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:07:14.404946 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:07:14.424882 initrd-setup-root-after-ignition[934]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:07:14.428531 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:07:14.434142 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:07:14.440426 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 12:07:14.479812 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:07:14.479917 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:07:14.480739 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:07:14.482529 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 12:07:14.484971 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:07:14.490271 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:07:14.507982 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:07:14.514295 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:07:14.528412 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:07:14.529089 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:07:14.531401 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:07:14.533548 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:07:14.533664 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:07:14.536072 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:07:14.537174 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:07:14.539381 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:07:14.541164 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:07:14.542989 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:07:14.545093 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:07:14.547252 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:07:14.549427 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:07:14.551401 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:07:14.553210 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:07:14.554523 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:07:14.554674 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:07:14.555837 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:07:14.556582 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:07:14.557562 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 29 12:07:14.559259 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:07:14.560157 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:07:14.560272 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:07:14.561636 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:07:14.561781 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:07:14.562528 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:07:14.562676 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:07:14.572540 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:07:14.575342 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:07:14.575883 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:07:14.576049 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:07:14.577886 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:07:14.578040 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:07:14.586078 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:07:14.586183 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:07:14.592569 ignition[954]: INFO : Ignition 2.20.0 Jan 29 12:07:14.592569 ignition[954]: INFO : Stage: umount Jan 29 12:07:14.595216 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:07:14.595216 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:07:14.595216 ignition[954]: INFO : umount: umount passed Jan 29 12:07:14.595216 ignition[954]: INFO : Ignition finished successfully Jan 29 12:07:14.596832 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:07:14.597479 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:07:14.598968 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:07:14.599038 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:07:14.601310 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:07:14.601350 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:07:14.602530 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 12:07:14.602568 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 12:07:14.603653 systemd[1]: Stopped target network.target - Network. Jan 29 12:07:14.605437 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:07:14.605501 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:07:14.607159 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:07:14.609602 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 12:07:14.611216 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:07:14.612024 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:07:14.613266 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:07:14.614264 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:07:14.614299 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:07:14.615226 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jan 29 12:07:14.615259 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:07:14.616173 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:07:14.616215 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:07:14.617143 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:07:14.617183 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:07:14.618214 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:07:14.619331 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:07:14.621231 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:07:14.621704 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:07:14.621785 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:07:14.622820 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:07:14.622886 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:07:14.623204 systemd-networkd[712]: eth0: DHCPv6 lease lost Jan 29 12:07:14.624425 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:07:14.624513 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:07:14.625869 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:07:14.625914 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:07:14.633281 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:07:14.636835 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:07:14.636889 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:07:14.638161 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:07:14.640202 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:07:14.640283 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:07:14.646440 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:07:14.646588 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:07:14.648994 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:07:14.649069 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:07:14.651015 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:07:14.651067 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:07:14.652516 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:07:14.652547 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:07:14.653663 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:07:14.653704 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:07:14.655287 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:07:14.655327 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:07:14.656297 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:07:14.656337 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:07:14.668242 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jan 29 12:07:14.669450 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:07:14.669500 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:07:14.670023 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:07:14.670062 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:07:14.670588 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:07:14.670626 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:07:14.673022 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 12:07:14.673075 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:07:14.673733 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:07:14.673772 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:07:14.674942 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:07:14.674980 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:07:14.676256 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:07:14.676293 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:07:14.677736 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:07:14.677815 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:07:14.678841 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:07:14.686702 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:07:14.693038 systemd[1]: Switching root. Jan 29 12:07:14.718697 systemd-journald[185]: Journal stopped Jan 29 12:07:16.337038 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Jan 29 12:07:16.337091 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:07:16.337202 kernel: SELinux: policy capability open_perms=1 Jan 29 12:07:16.337219 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:07:16.337231 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:07:16.337242 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:07:16.337258 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:07:16.337269 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:07:16.337284 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:07:16.337295 kernel: audit: type=1403 audit(1738152435.433:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:07:16.337308 systemd[1]: Successfully loaded SELinux policy in 72.776ms. Jan 29 12:07:16.337328 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.457ms. Jan 29 12:07:16.337341 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:07:16.337354 systemd[1]: Detected virtualization kvm. Jan 29 12:07:16.337369 systemd[1]: Detected architecture x86-64. Jan 29 12:07:16.337381 systemd[1]: Detected first boot. Jan 29 12:07:16.337393 systemd[1]: Hostname set to . 
Jan 29 12:07:16.337405 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:07:16.337416 zram_generator::config[1014]: No configuration found. Jan 29 12:07:16.337429 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:07:16.337441 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:07:16.337453 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 12:07:16.337468 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:07:16.337481 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 12:07:16.337493 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:07:16.337504 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:07:16.337517 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:07:16.337529 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:07:16.337541 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:07:16.337553 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:07:16.337565 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:07:16.337579 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:07:16.337591 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:07:16.337603 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:07:16.337616 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:07:16.337628 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:07:16.337640 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 12:07:16.337652 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:07:16.340163 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:07:16.340186 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:07:16.340204 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:07:16.340217 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:07:16.340230 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:07:16.340243 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:07:16.340258 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:07:16.340270 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:07:16.340285 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:07:16.340297 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:07:16.340309 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:07:16.340321 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:07:16.340337 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:07:16.340349 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 29 12:07:16.340362 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:07:16.340374 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:07:16.340386 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:07:16.340400 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:07:16.340412 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:07:16.340425 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:07:16.340436 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:07:16.340449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:07:16.340461 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:07:16.340473 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:07:16.340487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:07:16.340500 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:07:16.340517 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:07:16.340529 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:07:16.340541 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:07:16.340553 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:07:16.340565 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 29 12:07:16.340578 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 29 12:07:16.340589 kernel: fuse: init (API version 7.39) Jan 29 12:07:16.340601 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:07:16.340614 kernel: loop: module loaded Jan 29 12:07:16.340626 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:07:16.340638 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:07:16.340650 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:07:16.340662 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:07:16.340674 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:07:16.340686 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:07:16.340723 systemd-journald[1125]: Collecting audit messages is disabled. Jan 29 12:07:16.340753 kernel: ACPI: bus type drm_connector registered Jan 29 12:07:16.340766 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:07:16.340779 systemd-journald[1125]: Journal started Jan 29 12:07:16.340805 systemd-journald[1125]: Runtime Journal (/run/log/journal/2cd786e37315468285d92610a60215be) is 8.0M, max 78.3M, 70.3M free. Jan 29 12:07:16.345281 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 29 12:07:16.346410 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:07:16.347206 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:07:16.347838 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:07:16.348493 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:07:16.349328 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:07:16.350305 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:07:16.351207 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:07:16.351435 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:07:16.352666 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:07:16.352883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:07:16.353694 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:07:16.353829 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:07:16.354773 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:07:16.354983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:07:16.355953 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:07:16.356090 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:07:16.356911 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:07:16.357251 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:07:16.358082 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:07:16.358938 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:07:16.360148 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 12:07:16.367956 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:07:16.376215 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:07:16.379200 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:07:16.379870 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:07:16.388531 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:07:16.392038 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:07:16.394005 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:07:16.400294 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:07:16.401389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:07:16.413085 systemd-journald[1125]: Time spent on flushing to /var/log/journal/2cd786e37315468285d92610a60215be is 29.106ms for 929 entries. Jan 29 12:07:16.413085 systemd-journald[1125]: System Journal (/var/log/journal/2cd786e37315468285d92610a60215be) is 8.0M, max 584.8M, 576.8M free. Jan 29 12:07:16.461645 systemd-journald[1125]: Received client request to flush runtime journal. 
Jan 29 12:07:16.415123 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:07:16.416964 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:07:16.421416 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:07:16.422676 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:07:16.432467 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:07:16.433764 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:07:16.455847 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:07:16.467450 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:07:16.470457 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:07:16.480304 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:07:16.481351 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jan 29 12:07:16.481366 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jan 29 12:07:16.488269 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:07:16.492372 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:07:16.505434 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 12:07:16.548425 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:07:16.558266 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:07:16.570806 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 29 12:07:16.570828 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 29 12:07:16.574589 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:07:17.133790 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 12:07:17.141352 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:07:17.164633 systemd-udevd[1198]: Using default interface naming scheme 'v255'. Jan 29 12:07:17.196752 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:07:17.211468 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:07:17.252264 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 29 12:07:17.265427 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:07:17.305155 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1212) Jan 29 12:07:17.324854 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:07:17.405163 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 12:07:17.408163 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 29 12:07:17.423310 kernel: ACPI: button: Power Button [PWRF] Jan 29 12:07:17.426775 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 29 12:07:17.430942 systemd-networkd[1206]: lo: Link UP Jan 29 12:07:17.430952 systemd-networkd[1206]: lo: Gained carrier Jan 29 12:07:17.432539 systemd-networkd[1206]: Enumeration completed Jan 29 12:07:17.432785 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:07:17.434993 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:07:17.435002 systemd-networkd[1206]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:07:17.435711 systemd-networkd[1206]: eth0: Link UP Jan 29 12:07:17.435715 systemd-networkd[1206]: eth0: Gained carrier Jan 29 12:07:17.435729 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:07:17.443242 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:07:17.451150 systemd-networkd[1206]: eth0: DHCPv4 address 172.24.4.127/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 29 12:07:17.456198 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 12:07:17.482154 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 12:07:17.488392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:07:17.497754 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 29 12:07:17.497805 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 29 12:07:17.504130 kernel: Console: switching to colour dummy device 80x25 Jan 29 12:07:17.504184 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 12:07:17.504220 kernel: [drm] features: -context_init Jan 29 12:07:17.504234 kernel: [drm] number of scanouts: 1 Jan 29 12:07:17.505306 kernel: [drm] number of cap sets: 0 Jan 29 12:07:17.509212 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 29 12:07:17.513241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:07:17.513474 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:07:17.517171 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 12:07:17.517209 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 12:07:17.524332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:07:17.530650 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 12:07:17.540691 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:07:17.540903 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:07:17.543900 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:07:17.544318 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:07:17.552252 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:07:17.569704 lvm[1246]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:07:17.598214 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:07:17.600419 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:07:17.607403 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Jan 29 12:07:17.613086 lvm[1251]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:07:17.629157 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:07:17.645423 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:07:17.645603 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:07:17.645692 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:07:17.645712 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:07:17.645787 systemd[1]: Reached target machines.target - Containers. Jan 29 12:07:17.647521 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:07:17.654450 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:07:17.660397 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:07:17.662425 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:07:17.665319 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:07:17.670257 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:07:17.685471 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:07:17.694221 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:07:17.698461 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:07:17.709153 kernel: loop0: detected capacity change from 0 to 210664 Jan 29 12:07:17.761604 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:07:17.767675 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:07:17.786761 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:07:17.820577 kernel: loop1: detected capacity change from 0 to 8 Jan 29 12:07:17.857498 kernel: loop2: detected capacity change from 0 to 140992 Jan 29 12:07:17.923244 kernel: loop3: detected capacity change from 0 to 138184 Jan 29 12:07:17.990512 kernel: loop4: detected capacity change from 0 to 210664 Jan 29 12:07:18.037478 kernel: loop5: detected capacity change from 0 to 8 Jan 29 12:07:18.047738 kernel: loop6: detected capacity change from 0 to 140992 Jan 29 12:07:18.132224 kernel: loop7: detected capacity change from 0 to 138184 Jan 29 12:07:18.190409 (sd-merge)[1276]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 29 12:07:18.191613 (sd-merge)[1276]: Merged extensions into '/usr'. Jan 29 12:07:18.200379 systemd[1]: Reloading requested from client PID 1263 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:07:18.200413 systemd[1]: Reloading... Jan 29 12:07:18.281885 zram_generator::config[1304]: No configuration found. Jan 29 12:07:18.454898 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 29 12:07:18.527341 systemd[1]: Reloading finished in 326 ms. Jan 29 12:07:18.541451 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:07:18.555278 systemd[1]: Starting ensure-sysext.service... Jan 29 12:07:18.562325 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:07:18.569872 systemd[1]: Reloading requested from client PID 1365 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:07:18.569890 systemd[1]: Reloading... Jan 29 12:07:18.596098 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 12:07:18.596473 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:07:18.597347 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:07:18.597652 systemd-tmpfiles[1366]: ACLs are not supported, ignoring. Jan 29 12:07:18.597707 systemd-tmpfiles[1366]: ACLs are not supported, ignoring. Jan 29 12:07:18.601352 systemd-tmpfiles[1366]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:07:18.601437 systemd-tmpfiles[1366]: Skipping /boot Jan 29 12:07:18.622785 systemd-tmpfiles[1366]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:07:18.623136 systemd-tmpfiles[1366]: Skipping /boot Jan 29 12:07:18.660704 ldconfig[1260]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:07:18.667225 zram_generator::config[1393]: No configuration found. Jan 29 12:07:18.810950 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:07:18.875214 systemd[1]: Reloading finished in 304 ms. Jan 29 12:07:18.892772 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:07:18.894122 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:07:18.916341 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 12:07:18.927052 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:07:18.931690 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:07:18.949554 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:07:18.957189 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:07:18.965688 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:07:18.965938 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:07:18.972446 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:07:18.991959 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:07:19.001410 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:07:19.002965 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 29 12:07:19.003097 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:07:19.008490 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:07:19.008688 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:07:19.015294 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:07:19.018825 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:07:19.028761 augenrules[1492]: No rules Jan 29 12:07:19.031346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:07:19.031515 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:07:19.032717 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:07:19.032923 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 12:07:19.038313 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:07:19.038481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:07:19.050921 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:07:19.061399 systemd[1]: Finished ensure-sysext.service. Jan 29 12:07:19.068933 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:07:19.074295 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 12:07:19.077721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:07:19.088317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:07:19.096758 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:07:19.107337 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:07:19.116472 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:07:19.117215 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:07:19.130512 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 12:07:19.139977 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:07:19.141718 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:07:19.147041 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:07:19.148340 systemd-networkd[1206]: eth0: Gained IPv6LL Jan 29 12:07:19.149080 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:07:19.149932 systemd-resolved[1464]: Positive Trust Anchors: Jan 29 12:07:19.149943 systemd-resolved[1464]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:07:19.149990 systemd-resolved[1464]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:07:19.153529 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:07:19.153699 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:07:19.159041 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:07:19.159512 augenrules[1510]: /sbin/augenrules: No change Jan 29 12:07:19.161074 systemd-resolved[1464]: Using system hostname 'ci-4152-2-0-f-33fdf09c6c.novalocal'. Jan 29 12:07:19.162990 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:07:19.164670 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:07:19.168344 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:07:19.168511 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:07:19.175248 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:07:19.184229 systemd[1]: Reached target network.target - Network. Jan 29 12:07:19.191002 augenrules[1544]: No rules Jan 29 12:07:19.187591 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:07:19.188099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:07:19.190813 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:07:19.190872 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:07:19.195423 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:07:19.198688 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:07:19.198908 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 12:07:19.204184 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:07:19.207098 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:07:19.253421 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 12:07:19.255999 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:07:19.259930 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:07:19.260814 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:07:19.262952 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 29 12:07:19.264076 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:07:19.264203 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:07:19.265237 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:07:19.266402 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:07:19.267429 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:07:19.268639 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:07:19.270786 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:07:19.273998 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:07:20.055737 systemd-timesyncd[1522]: Contacted time server 37.187.118.149:123 (0.flatcar.pool.ntp.org). Jan 29 12:07:20.055782 systemd-timesyncd[1522]: Initial clock synchronization to Wed 2025-01-29 12:07:20.055656 UTC. Jan 29 12:07:20.057287 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:07:20.062335 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:07:20.063528 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:07:20.064242 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:07:20.064982 systemd-resolved[1464]: Clock change detected. Flushing caches. Jan 29 12:07:20.065070 systemd[1]: System is tainted: cgroupsv1 Jan 29 12:07:20.065168 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:07:20.065253 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:07:20.066925 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:07:20.077767 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 12:07:20.084955 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:07:20.091217 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:07:20.102567 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:07:20.107453 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:07:20.109637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:07:20.119014 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:07:20.119746 jq[1564]: false Jan 29 12:07:20.132014 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:07:20.145977 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 12:07:20.153062 extend-filesystems[1565]: Found loop4 Jan 29 12:07:20.155795 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 29 12:07:20.161955 extend-filesystems[1565]: Found loop5 Jan 29 12:07:20.161955 extend-filesystems[1565]: Found loop6 Jan 29 12:07:20.161955 extend-filesystems[1565]: Found loop7 Jan 29 12:07:20.161955 extend-filesystems[1565]: Found vda Jan 29 12:07:20.161955 extend-filesystems[1565]: Found vda1 Jan 29 12:07:20.161955 extend-filesystems[1565]: Found vda2 Jan 29 12:07:20.161955 extend-filesystems[1565]: Found vda3 Jan 29 12:07:20.161955 extend-filesystems[1565]: Found usr Jan 29 12:07:20.161955 extend-filesystems[1565]: Found vda4 Jan 29 12:07:20.161955 extend-filesystems[1565]: Found vda6 Jan 29 12:07:20.161955 extend-filesystems[1565]: Found vda7 Jan 29 12:07:20.161955 extend-filesystems[1565]: Found vda9 Jan 29 12:07:20.161955 extend-filesystems[1565]: Checking size of /dev/vda9 Jan 29 12:07:20.198505 dbus-daemon[1561]: [system] SELinux support is enabled Jan 29 12:07:20.170106 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:07:20.188979 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:07:20.192650 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:07:20.209976 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:07:20.222933 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:07:20.230193 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:07:20.236855 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1200) Jan 29 12:07:20.236947 extend-filesystems[1565]: Resized partition /dev/vda9 Jan 29 12:07:20.246528 update_engine[1589]: I20250129 12:07:20.246414 1589 main.cc:92] Flatcar Update Engine starting Jan 29 12:07:20.250134 update_engine[1589]: I20250129 12:07:20.248101 1589 update_check_scheduler.cc:74] Next update check in 6m58s Jan 29 12:07:20.250960 extend-filesystems[1598]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:07:20.253420 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:07:20.253682 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:07:20.256505 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:07:20.256753 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:07:20.271823 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:07:20.281457 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 29 12:07:20.281540 jq[1595]: true Jan 29 12:07:20.298808 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 29 12:07:20.293365 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:07:20.293611 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:07:20.318189 (ntainerd)[1609]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:07:20.344985 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:07:20.348929 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 29 12:07:20.377759 jq[1606]: true Jan 29 12:07:20.348971 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:07:20.349579 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:07:20.349597 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:07:20.353532 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:07:20.360973 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:07:20.369217 systemd-logind[1587]: New seat seat0. Jan 29 12:07:20.377303 systemd-logind[1587]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 12:07:20.394877 extend-filesystems[1598]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 12:07:20.394877 extend-filesystems[1598]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:07:20.394877 extend-filesystems[1598]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 29 12:07:20.377319 systemd-logind[1587]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:07:20.401365 extend-filesystems[1565]: Resized filesystem in /dev/vda9 Jan 29 12:07:20.380402 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:07:20.396696 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:07:20.396963 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:07:20.419342 tar[1605]: linux-amd64/helm Jan 29 12:07:20.462942 bash[1633]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:07:20.463934 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:07:20.482117 systemd[1]: Starting sshkeys.service... Jan 29 12:07:20.522227 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 12:07:20.533607 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 12:07:20.557009 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:07:20.813615 sshd_keygen[1602]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:07:20.828234 containerd[1609]: time="2025-01-29T12:07:20.828157907Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 12:07:20.864373 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:07:20.876021 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:07:20.891297 containerd[1609]: time="2025-01-29T12:07:20.891266287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:07:20.893851 containerd[1609]: time="2025-01-29T12:07:20.893799288Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:07:20.893940 containerd[1609]: time="2025-01-29T12:07:20.893924923Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 29 12:07:20.893999 containerd[1609]: time="2025-01-29T12:07:20.893985868Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:07:20.894205 containerd[1609]: time="2025-01-29T12:07:20.894187706Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:07:20.894282 containerd[1609]: time="2025-01-29T12:07:20.894266925Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:07:20.894401 containerd[1609]: time="2025-01-29T12:07:20.894381520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:07:20.894463 containerd[1609]: time="2025-01-29T12:07:20.894449667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:07:20.894729 containerd[1609]: time="2025-01-29T12:07:20.894708313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:07:20.895626 containerd[1609]: time="2025-01-29T12:07:20.895610444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:07:20.895693 containerd[1609]: time="2025-01-29T12:07:20.895677530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:07:20.895742 containerd[1609]: time="2025-01-29T12:07:20.895730490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:07:20.895894 containerd[1609]: time="2025-01-29T12:07:20.895876554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:07:20.896161 containerd[1609]: time="2025-01-29T12:07:20.896142192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:07:20.896650 containerd[1609]: time="2025-01-29T12:07:20.896468884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:07:20.896650 containerd[1609]: time="2025-01-29T12:07:20.896489002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:07:20.896650 containerd[1609]: time="2025-01-29T12:07:20.896573160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 12:07:20.896650 containerd[1609]: time="2025-01-29T12:07:20.896625057Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:07:20.899099 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:07:20.899329 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:07:20.912206 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
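[Note] The "skip plugin" messages above mean only the overlayfs (and native) snapshotters remain usable on this host; aufs, btrfs, zfs and devmapper were all skipped for the stated reasons. As a quick sketch, the load status of each snapshotter plugin can be confirmed with ctr:

    # list containerd plugins and their load status (ok / skip)
    sudo ctr plugins ls | grep snapshotter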
Jan 29 12:07:20.925897 containerd[1609]: time="2025-01-29T12:07:20.924681678Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:07:20.925897 containerd[1609]: time="2025-01-29T12:07:20.924751669Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:07:20.925897 containerd[1609]: time="2025-01-29T12:07:20.924770705Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:07:20.925897 containerd[1609]: time="2025-01-29T12:07:20.924793117Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:07:20.925897 containerd[1609]: time="2025-01-29T12:07:20.924808816Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:07:20.925897 containerd[1609]: time="2025-01-29T12:07:20.924965560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:07:20.925897 containerd[1609]: time="2025-01-29T12:07:20.925267897Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:07:20.925897 containerd[1609]: time="2025-01-29T12:07:20.925362595Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:07:20.925897 containerd[1609]: time="2025-01-29T12:07:20.925380178Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:07:20.925897 containerd[1609]: time="2025-01-29T12:07:20.925394675Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:07:20.925897 containerd[1609]: time="2025-01-29T12:07:20.925409112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:07:20.925897 containerd[1609]: time="2025-01-29T12:07:20.925423339Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:07:20.925897 containerd[1609]: time="2025-01-29T12:07:20.925436343Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:07:20.925897 containerd[1609]: time="2025-01-29T12:07:20.925449948Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:07:20.926230 containerd[1609]: time="2025-01-29T12:07:20.925464476Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:07:20.926230 containerd[1609]: time="2025-01-29T12:07:20.925478943Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:07:20.926230 containerd[1609]: time="2025-01-29T12:07:20.925492989Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:07:20.926230 containerd[1609]: time="2025-01-29T12:07:20.925505913Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 12:07:20.926230 containerd[1609]: time="2025-01-29T12:07:20.925526442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jan 29 12:07:20.926230 containerd[1609]: time="2025-01-29T12:07:20.925567278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.926230 containerd[1609]: time="2025-01-29T12:07:20.925582667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.926230 containerd[1609]: time="2025-01-29T12:07:20.925597004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.926230 containerd[1609]: time="2025-01-29T12:07:20.925609748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.926230 containerd[1609]: time="2025-01-29T12:07:20.925622963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.926230 containerd[1609]: time="2025-01-29T12:07:20.925635386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.926230 containerd[1609]: time="2025-01-29T12:07:20.925651016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.926230 containerd[1609]: time="2025-01-29T12:07:20.925664912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.926230 containerd[1609]: time="2025-01-29T12:07:20.925680230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.926516 containerd[1609]: time="2025-01-29T12:07:20.925693074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.926516 containerd[1609]: time="2025-01-29T12:07:20.925705969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.926516 containerd[1609]: time="2025-01-29T12:07:20.925718502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.926516 containerd[1609]: time="2025-01-29T12:07:20.925733500Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:07:20.926516 containerd[1609]: time="2025-01-29T12:07:20.925754991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.926516 containerd[1609]: time="2025-01-29T12:07:20.925769999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.926516 containerd[1609]: time="2025-01-29T12:07:20.925782873Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:07:20.926516 containerd[1609]: time="2025-01-29T12:07:20.925817708Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:07:20.927859 containerd[1609]: time="2025-01-29T12:07:20.927800206Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:07:20.927942 containerd[1609]: time="2025-01-29T12:07:20.927927525Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jan 29 12:07:20.928022 containerd[1609]: time="2025-01-29T12:07:20.928006593Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:07:20.928109 containerd[1609]: time="2025-01-29T12:07:20.928094188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.928196 containerd[1609]: time="2025-01-29T12:07:20.928178205Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:07:20.929453 containerd[1609]: time="2025-01-29T12:07:20.928258586Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:07:20.929453 containerd[1609]: time="2025-01-29T12:07:20.928277832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 12:07:20.929984 containerd[1609]: time="2025-01-29T12:07:20.929929189Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:07:20.930304 
containerd[1609]: time="2025-01-29T12:07:20.930156726Z" level=info msg="Connect containerd service" Jan 29 12:07:20.930304 containerd[1609]: time="2025-01-29T12:07:20.930197382Z" level=info msg="using legacy CRI server" Jan 29 12:07:20.930304 containerd[1609]: time="2025-01-29T12:07:20.930226817Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:07:20.930451 containerd[1609]: time="2025-01-29T12:07:20.930435990Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:07:20.932249 containerd[1609]: time="2025-01-29T12:07:20.932211620Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:07:20.932731 containerd[1609]: time="2025-01-29T12:07:20.932308942Z" level=info msg="Start subscribing containerd event" Jan 29 12:07:20.932731 containerd[1609]: time="2025-01-29T12:07:20.932521892Z" level=info msg="Start recovering state" Jan 29 12:07:20.932731 containerd[1609]: time="2025-01-29T12:07:20.932577095Z" level=info msg="Start event monitor" Jan 29 12:07:20.932731 containerd[1609]: time="2025-01-29T12:07:20.932589058Z" level=info msg="Start snapshots syncer" Jan 29 12:07:20.932731 containerd[1609]: time="2025-01-29T12:07:20.932597995Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:07:20.932731 containerd[1609]: time="2025-01-29T12:07:20.932606851Z" level=info msg="Start streaming server" Jan 29 12:07:20.934068 containerd[1609]: time="2025-01-29T12:07:20.934051300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:07:20.934267 containerd[1609]: time="2025-01-29T12:07:20.934195070Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:07:20.942917 containerd[1609]: time="2025-01-29T12:07:20.938638744Z" level=info msg="containerd successfully booted in 0.115324s" Jan 29 12:07:20.938870 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:07:20.942331 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:07:20.952352 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:07:20.960218 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:07:20.962672 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:07:21.121350 tar[1605]: linux-amd64/LICENSE Jan 29 12:07:21.121350 tar[1605]: linux-amd64/README.md Jan 29 12:07:21.131434 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:07:22.164129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
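[Note] The "failed to load cni during init" error above is expected at this stage: /etc/cni/net.d is empty until a network plugin is installed, and the CRI plugin's conf syncer will pick up a config as soon as one appears. Purely as an illustration (the network name and subnet below are hypothetical, not taken from this log), a minimal bridge conflist that would satisfy that check looks like:

    # illustrative only: drop a basic bridge CNI config into the directory containerd watches
    sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }
    EOF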
Jan 29 12:07:22.187597 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:07:23.497619 kubelet[1694]: E0129 12:07:23.497446 1694 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:07:23.501864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:07:23.502261 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:07:26.039777 login[1677]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 29 12:07:26.045221 login[1678]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:07:26.071055 systemd-logind[1587]: New session 1 of user core. Jan 29 12:07:26.074609 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:07:26.085427 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:07:26.116744 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:07:26.131549 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:07:26.135734 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:07:26.265577 systemd[1713]: Queued start job for default target default.target. Jan 29 12:07:26.266158 systemd[1713]: Created slice app.slice - User Application Slice. Jan 29 12:07:26.266260 systemd[1713]: Reached target paths.target - Paths. Jan 29 12:07:26.266343 systemd[1713]: Reached target timers.target - Timers. Jan 29 12:07:26.276928 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:07:26.286471 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:07:26.286642 systemd[1713]: Reached target sockets.target - Sockets. Jan 29 12:07:26.286735 systemd[1713]: Reached target basic.target - Basic System. Jan 29 12:07:26.287314 systemd[1713]: Reached target default.target - Main User Target. Jan 29 12:07:26.287341 systemd[1713]: Startup finished in 145ms. Jan 29 12:07:26.287561 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:07:26.295257 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:07:27.040548 login[1677]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:07:27.050749 systemd-logind[1587]: New session 2 of user core. Jan 29 12:07:27.058479 systemd[1]: Started session-2.scope - Session 2 of User core. 
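[Note] The kubelet exit above is the normal pre-bootstrap failure loop: /var/lib/kubelet/config.yaml does not exist until kubeadm (or another provisioner) writes it, so the unit exits and systemd keeps rescheduling it. For reference, the file it is looking for holds a KubeletConfiguration object; a minimal hand-written placeholder (illustrative only, not what kubeadm later generates on this node) would be:

    # illustrative KubeletConfiguration; kubeadm init normally creates this file itself
    sudo mkdir -p /var/lib/kubelet
    sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    EOF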
Jan 29 12:07:27.164169 coreos-metadata[1559]: Jan 29 12:07:27.164 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:07:27.211649 coreos-metadata[1559]: Jan 29 12:07:27.211 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 29 12:07:27.400970 coreos-metadata[1559]: Jan 29 12:07:27.400 INFO Fetch successful Jan 29 12:07:27.400970 coreos-metadata[1559]: Jan 29 12:07:27.400 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 12:07:27.415782 coreos-metadata[1559]: Jan 29 12:07:27.415 INFO Fetch successful Jan 29 12:07:27.415961 coreos-metadata[1559]: Jan 29 12:07:27.415 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 29 12:07:27.430030 coreos-metadata[1559]: Jan 29 12:07:27.429 INFO Fetch successful Jan 29 12:07:27.430030 coreos-metadata[1559]: Jan 29 12:07:27.429 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 29 12:07:27.442340 coreos-metadata[1559]: Jan 29 12:07:27.442 INFO Fetch successful Jan 29 12:07:27.442340 coreos-metadata[1559]: Jan 29 12:07:27.442 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 29 12:07:27.456292 coreos-metadata[1559]: Jan 29 12:07:27.456 INFO Fetch successful Jan 29 12:07:27.456292 coreos-metadata[1559]: Jan 29 12:07:27.456 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 29 12:07:27.470494 coreos-metadata[1559]: Jan 29 12:07:27.470 INFO Fetch successful Jan 29 12:07:27.529520 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 12:07:27.531374 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 12:07:27.624379 coreos-metadata[1646]: Jan 29 12:07:27.624 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:07:27.665640 coreos-metadata[1646]: Jan 29 12:07:27.665 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 29 12:07:27.681139 coreos-metadata[1646]: Jan 29 12:07:27.681 INFO Fetch successful Jan 29 12:07:27.681139 coreos-metadata[1646]: Jan 29 12:07:27.681 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 12:07:27.696723 coreos-metadata[1646]: Jan 29 12:07:27.696 INFO Fetch successful Jan 29 12:07:27.704078 unknown[1646]: wrote ssh authorized keys file for user: core Jan 29 12:07:27.746401 update-ssh-keys[1757]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:07:27.747505 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 12:07:27.761971 systemd[1]: Finished sshkeys.service. Jan 29 12:07:27.764388 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:07:27.765408 systemd[1]: Startup finished in 15.832s (kernel) + 11.624s (userspace) = 27.456s. Jan 29 12:07:28.767739 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:07:28.780415 systemd[1]: Started sshd@0-172.24.4.127:22-172.24.4.1:51172.service - OpenSSH per-connection server daemon (172.24.4.1:51172). Jan 29 12:07:29.765764 sshd[1764]: Accepted publickey for core from 172.24.4.1 port 51172 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY Jan 29 12:07:29.768155 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:07:29.778358 systemd-logind[1587]: New session 3 of user core. 
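[Note] coreos-metadata falls back from the (absent) config drive to the HTTP metadata service and fetches the handful of EC2-style endpoints seen above. The equivalent manual queries, using the same URLs that appear in this log:

    # OpenStack metadata document (JSON) and EC2-compatible endpoints used by coreos-metadata
    curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json
    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key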
Jan 29 12:07:29.789461 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:07:30.377388 systemd[1]: Started sshd@1-172.24.4.127:22-172.24.4.1:51184.service - OpenSSH per-connection server daemon (172.24.4.1:51184). Jan 29 12:07:32.610878 sshd[1769]: Accepted publickey for core from 172.24.4.1 port 51184 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY Jan 29 12:07:32.613464 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:07:32.623922 systemd-logind[1587]: New session 4 of user core. Jan 29 12:07:32.630432 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:07:33.440786 sshd[1772]: Connection closed by 172.24.4.1 port 51184 Jan 29 12:07:33.441076 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Jan 29 12:07:33.454392 systemd[1]: Started sshd@2-172.24.4.127:22-172.24.4.1:51188.service - OpenSSH per-connection server daemon (172.24.4.1:51188). Jan 29 12:07:33.457424 systemd[1]: sshd@1-172.24.4.127:22-172.24.4.1:51184.service: Deactivated successfully. Jan 29 12:07:33.463506 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:07:33.467158 systemd-logind[1587]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:07:33.470380 systemd-logind[1587]: Removed session 4. Jan 29 12:07:33.534879 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:07:33.551211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:07:33.859254 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:07:33.863894 (kubelet)[1791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:07:33.940721 kubelet[1791]: E0129 12:07:33.940651 1791 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:07:33.947658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:07:33.948088 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:07:34.870760 sshd[1774]: Accepted publickey for core from 172.24.4.1 port 51188 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY Jan 29 12:07:34.873236 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:07:34.883668 systemd-logind[1587]: New session 5 of user core. Jan 29 12:07:34.890405 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:07:35.653870 sshd[1801]: Connection closed by 172.24.4.1 port 51188 Jan 29 12:07:35.654182 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Jan 29 12:07:35.664456 systemd[1]: Started sshd@3-172.24.4.127:22-172.24.4.1:53646.service - OpenSSH per-connection server daemon (172.24.4.1:53646). Jan 29 12:07:35.665508 systemd[1]: sshd@2-172.24.4.127:22-172.24.4.1:51188.service: Deactivated successfully. Jan 29 12:07:35.673105 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:07:35.676345 systemd-logind[1587]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:07:35.681884 systemd-logind[1587]: Removed session 5. 
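[Note] "Scheduled restart job, restart counter is at 1" above means kubelet.service has Restart= configured and systemd is counting the bounces; the counter keeps climbing until the missing config file from the earlier error appears. A sketch for inspecting that state:

    # show the unit's restart policy and how many times systemd has restarted it
    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts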
Jan 29 12:07:37.088605 sshd[1803]: Accepted publickey for core from 172.24.4.1 port 53646 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY Jan 29 12:07:37.091166 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:07:37.102511 systemd-logind[1587]: New session 6 of user core. Jan 29 12:07:37.109405 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:07:37.730878 sshd[1809]: Connection closed by 172.24.4.1 port 53646 Jan 29 12:07:37.731440 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Jan 29 12:07:37.742432 systemd[1]: Started sshd@4-172.24.4.127:22-172.24.4.1:53648.service - OpenSSH per-connection server daemon (172.24.4.1:53648). Jan 29 12:07:37.743658 systemd[1]: sshd@3-172.24.4.127:22-172.24.4.1:53646.service: Deactivated successfully. Jan 29 12:07:37.756168 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:07:37.758581 systemd-logind[1587]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:07:37.763903 systemd-logind[1587]: Removed session 6. Jan 29 12:07:39.185192 sshd[1811]: Accepted publickey for core from 172.24.4.1 port 53648 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY Jan 29 12:07:39.187816 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:07:39.196498 systemd-logind[1587]: New session 7 of user core. Jan 29 12:07:39.207428 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:07:39.625389 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:07:39.626038 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:07:40.214391 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:07:40.214928 (dockerd)[1835]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:07:40.771138 dockerd[1835]: time="2025-01-29T12:07:40.771084220Z" level=info msg="Starting up" Jan 29 12:07:41.188561 dockerd[1835]: time="2025-01-29T12:07:41.188073864Z" level=info msg="Loading containers: start." Jan 29 12:07:41.451143 kernel: Initializing XFRM netlink socket Jan 29 12:07:41.624283 systemd-networkd[1206]: docker0: Link UP Jan 29 12:07:41.670676 dockerd[1835]: time="2025-01-29T12:07:41.670575947Z" level=info msg="Loading containers: done." Jan 29 12:07:41.710990 dockerd[1835]: time="2025-01-29T12:07:41.710900385Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:07:41.711208 dockerd[1835]: time="2025-01-29T12:07:41.711092135Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 29 12:07:41.711361 dockerd[1835]: time="2025-01-29T12:07:41.711307349Z" level=info msg="Daemon has completed initialization" Jan 29 12:07:41.712891 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1319219175-merged.mount: Deactivated successfully. Jan 29 12:07:41.779277 dockerd[1835]: time="2025-01-29T12:07:41.779180525Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:07:41.782394 systemd[1]: Started docker.service - Docker Application Container Engine. 
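[Note] dockerd comes up on overlay2 and only warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; this costs some image-build performance but is otherwise harmless. Confirming the active storage driver:

    # print the storage driver dockerd selected (overlay2 in this boot)
    docker info --format '{{.Driver}}'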
Jan 29 12:07:43.731692 containerd[1609]: time="2025-01-29T12:07:43.731093898Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 12:07:44.034705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 12:07:44.046177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:07:44.447153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:07:44.457204 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:07:44.541326 kubelet[2040]: E0129 12:07:44.541247 2040 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:07:44.543584 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:07:44.544268 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:07:44.864963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount88235552.mount: Deactivated successfully. Jan 29 12:07:46.734608 containerd[1609]: time="2025-01-29T12:07:46.734543156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:46.736202 containerd[1609]: time="2025-01-29T12:07:46.736170248Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020" Jan 29 12:07:46.737771 containerd[1609]: time="2025-01-29T12:07:46.737710607Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:46.742162 containerd[1609]: time="2025-01-29T12:07:46.742095781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:46.744070 containerd[1609]: time="2025-01-29T12:07:46.743244265Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 3.012085696s" Jan 29 12:07:46.744070 containerd[1609]: time="2025-01-29T12:07:46.743275664Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 12:07:46.768425 containerd[1609]: time="2025-01-29T12:07:46.768318852Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 12:07:48.967116 containerd[1609]: time="2025-01-29T12:07:48.966947680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:48.968575 containerd[1609]: time="2025-01-29T12:07:48.968375318Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes 
read=29605753" Jan 29 12:07:48.969743 containerd[1609]: time="2025-01-29T12:07:48.969679794Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:48.973149 containerd[1609]: time="2025-01-29T12:07:48.973087646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:48.974389 containerd[1609]: time="2025-01-29T12:07:48.974262269Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.205904514s" Jan 29 12:07:48.974389 containerd[1609]: time="2025-01-29T12:07:48.974292095Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 12:07:48.997098 containerd[1609]: time="2025-01-29T12:07:48.997022105Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 12:07:50.542847 containerd[1609]: time="2025-01-29T12:07:50.542778731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:50.544672 containerd[1609]: time="2025-01-29T12:07:50.544619403Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072" Jan 29 12:07:50.546220 containerd[1609]: time="2025-01-29T12:07:50.546172787Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:50.549622 containerd[1609]: time="2025-01-29T12:07:50.549579616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:50.550889 containerd[1609]: time="2025-01-29T12:07:50.550851972Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.553770517s" Jan 29 12:07:50.550968 containerd[1609]: time="2025-01-29T12:07:50.550891326Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 12:07:50.573121 containerd[1609]: time="2025-01-29T12:07:50.573018586Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:07:51.999814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3422898192.mount: Deactivated successfully. 
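[Note] The PullImage/ImageCreate entries above are containerd fetching the control-plane images into its k8s.io namespace ahead of cluster bootstrap. The same pulls can be reproduced or inspected from the CLI, for example:

    # list images in the namespace the CRI plugin uses, then pull one explicitly
    sudo ctr -n k8s.io images ls
    sudo ctr -n k8s.io images pull registry.k8s.io/kube-scheduler:v1.30.9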
Jan 29 12:07:52.721404 containerd[1609]: time="2025-01-29T12:07:52.721248130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:52.724235 containerd[1609]: time="2025-01-29T12:07:52.724119114Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 29 12:07:52.726085 containerd[1609]: time="2025-01-29T12:07:52.725956871Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:52.734655 containerd[1609]: time="2025-01-29T12:07:52.734549717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:52.737204 containerd[1609]: time="2025-01-29T12:07:52.736484095Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.163415075s" Jan 29 12:07:52.737204 containerd[1609]: time="2025-01-29T12:07:52.736560117Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 12:07:52.785222 containerd[1609]: time="2025-01-29T12:07:52.785120683Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:07:53.427722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount565623955.mount: Deactivated successfully. 
Jan 29 12:07:54.529233 containerd[1609]: time="2025-01-29T12:07:54.529107674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:54.530864 containerd[1609]: time="2025-01-29T12:07:54.530446287Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 29 12:07:54.532226 containerd[1609]: time="2025-01-29T12:07:54.532163763Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:54.538849 containerd[1609]: time="2025-01-29T12:07:54.537861149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:54.540170 containerd[1609]: time="2025-01-29T12:07:54.540132199Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.754944069s" Jan 29 12:07:54.540212 containerd[1609]: time="2025-01-29T12:07:54.540173726Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 12:07:54.563187 containerd[1609]: time="2025-01-29T12:07:54.563148912Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 12:07:54.784534 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 12:07:54.792171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:07:55.057013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:07:55.061155 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:07:55.327016 kubelet[2197]: E0129 12:07:55.326770 2197 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:07:55.331669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:07:55.332887 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:07:55.496003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount318535284.mount: Deactivated successfully. 
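[Note] There is a small version skew visible here: the CRI plugin's config earlier in this log advertises SandboxImage registry.k8s.io/pause:3.8, while pause:3.9 is what actually gets pulled for this Kubernetes release. If the two should match, the sandbox image lives in containerd's CRI section; the commands below are illustrative only (back up the file first, and the exact config layout varies by distribution):

    # illustrative: point the CRI plugin's sandbox_image at pause:3.9, then restart containerd
    sudo sed -i 's|registry.k8s.io/pause:3.8|registry.k8s.io/pause:3.9|' /etc/containerd/config.toml
    sudo systemctl restart containerd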
Jan 29 12:07:55.504735 containerd[1609]: time="2025-01-29T12:07:55.504615290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:55.506802 containerd[1609]: time="2025-01-29T12:07:55.506678626Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 29 12:07:55.509874 containerd[1609]: time="2025-01-29T12:07:55.509111730Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:55.516213 containerd[1609]: time="2025-01-29T12:07:55.516121305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:55.518476 containerd[1609]: time="2025-01-29T12:07:55.518421167Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 955.227652ms" Jan 29 12:07:55.518688 containerd[1609]: time="2025-01-29T12:07:55.518645320Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 12:07:55.558487 containerd[1609]: time="2025-01-29T12:07:55.558456298Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 12:07:56.261986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3140587909.mount: Deactivated successfully. Jan 29 12:07:59.638031 containerd[1609]: time="2025-01-29T12:07:59.637941998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:59.639773 containerd[1609]: time="2025-01-29T12:07:59.639493117Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 29 12:07:59.641123 containerd[1609]: time="2025-01-29T12:07:59.641094941Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:59.644960 containerd[1609]: time="2025-01-29T12:07:59.644896295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:59.646323 containerd[1609]: time="2025-01-29T12:07:59.646283776Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.087386256s" Jan 29 12:07:59.646380 containerd[1609]: time="2025-01-29T12:07:59.646322039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 12:08:04.134946 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:08:04.147354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:08:04.180316 systemd[1]: Reloading requested from client PID 2323 ('systemctl') (unit session-7.scope)... Jan 29 12:08:04.180352 systemd[1]: Reloading... Jan 29 12:08:04.268894 zram_generator::config[2362]: No configuration found. Jan 29 12:08:04.420803 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:08:04.495018 systemd[1]: Reloading finished in 314 ms. Jan 29 12:08:04.536711 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:08:04.536790 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:08:04.537059 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:08:04.546493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:08:04.656384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:08:04.662344 (kubelet)[2438]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:08:04.858790 kubelet[2438]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:08:04.858790 kubelet[2438]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:08:04.858790 kubelet[2438]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:08:04.858790 kubelet[2438]: I0129 12:08:04.735879 2438 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:08:05.332929 update_engine[1589]: I20250129 12:08:05.332883 1589 update_attempter.cc:509] Updating boot flags... 
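[Note] After the systemctl-triggered reload, kubelet starts for real and logs the deprecated --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir flags, which come from the unit's environment and drop-in files rather than from config.yaml. To see exactly which files inject those flags, and to apply changes to them:

    # print the kubelet unit together with every drop-in that contributes settings
    systemctl cat kubelet.service
    # after editing any drop-in, reload unit definitions and restart the service
    sudo systemctl daemon-reload && sudo systemctl restart kubelet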
Jan 29 12:08:05.397873 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2452) Jan 29 12:08:05.463883 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2454) Jan 29 12:08:05.593342 kubelet[2438]: I0129 12:08:05.593230 2438 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:08:05.593342 kubelet[2438]: I0129 12:08:05.593287 2438 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:08:05.594919 kubelet[2438]: I0129 12:08:05.594889 2438 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:08:05.617345 kubelet[2438]: I0129 12:08:05.617312 2438 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:08:05.621657 kubelet[2438]: E0129 12:08:05.621624 2438 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:05.636650 kubelet[2438]: I0129 12:08:05.636614 2438 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:08:05.637489 kubelet[2438]: I0129 12:08:05.637417 2438 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:08:05.637962 kubelet[2438]: I0129 12:08:05.637488 2438 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-f-33fdf09c6c.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:08:05.639463 kubelet[2438]: I0129 12:08:05.639364 2438 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:08:05.639463 kubelet[2438]: I0129 12:08:05.639416 2438 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:08:05.639700 kubelet[2438]: I0129 12:08:05.639642 
2438 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:08:05.641584 kubelet[2438]: I0129 12:08:05.641522 2438 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:08:05.641584 kubelet[2438]: I0129 12:08:05.641582 2438 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:08:05.641727 kubelet[2438]: I0129 12:08:05.641622 2438 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:08:05.641727 kubelet[2438]: I0129 12:08:05.641654 2438 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:08:05.651686 kubelet[2438]: W0129 12:08:05.651156 2438 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.127:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:05.651686 kubelet[2438]: E0129 12:08:05.651295 2438 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.127:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:05.651686 kubelet[2438]: W0129 12:08:05.651437 2438 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-f-33fdf09c6c.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:05.651686 kubelet[2438]: E0129 12:08:05.651512 2438 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-f-33fdf09c6c.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:05.652392 kubelet[2438]: I0129 12:08:05.652208 2438 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 12:08:05.656426 kubelet[2438]: I0129 12:08:05.656378 2438 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:08:05.656505 kubelet[2438]: W0129 12:08:05.656481 2438 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
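[Note] The reflector "connection refused" errors above share one cause: nothing is listening on 172.24.4.127:6443 yet, because the kube-apiserver static pod (from the /etc/kubernetes/manifests path added above) has not come up. Until it does, the client-certificate bootstrap and the node registration attempts that follow will keep retrying. A quick check from the host:

    # is anything listening on the API server port yet?
    ss -tlnp | grep 6443 || echo "apiserver not listening yet"
    # once it is up, the healthz endpoint should answer (certificate verification skipped here)
    curl -k https://172.24.4.127:6443/healthz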
Jan 29 12:08:05.658357 kubelet[2438]: I0129 12:08:05.658164 2438 server.go:1264] "Started kubelet" Jan 29 12:08:05.659329 kubelet[2438]: I0129 12:08:05.659255 2438 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:08:05.661298 kubelet[2438]: I0129 12:08:05.660804 2438 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:08:05.669457 kubelet[2438]: I0129 12:08:05.668577 2438 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:08:05.669457 kubelet[2438]: I0129 12:08:05.669118 2438 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:08:05.672264 kubelet[2438]: I0129 12:08:05.672220 2438 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:08:05.675887 kubelet[2438]: E0129 12:08:05.673666 2438 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.127:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.127:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-0-f-33fdf09c6c.novalocal.181f287a011d6649 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-f-33fdf09c6c.novalocal,UID:ci-4152-2-0-f-33fdf09c6c.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-f-33fdf09c6c.novalocal,},FirstTimestamp:2025-01-29 12:08:05.658125897 +0000 UTC m=+0.991981155,LastTimestamp:2025-01-29 12:08:05.658125897 +0000 UTC m=+0.991981155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-f-33fdf09c6c.novalocal,}" Jan 29 12:08:05.686054 kubelet[2438]: I0129 12:08:05.686014 2438 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:08:05.689129 kubelet[2438]: I0129 12:08:05.689089 2438 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:08:05.689489 kubelet[2438]: E0129 12:08:05.689442 2438 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-f-33fdf09c6c.novalocal?timeout=10s\": dial tcp 172.24.4.127:6443: connect: connection refused" interval="200ms" Jan 29 12:08:05.689699 kubelet[2438]: I0129 12:08:05.689665 2438 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:08:05.689825 kubelet[2438]: I0129 12:08:05.689737 2438 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:08:05.690141 kubelet[2438]: I0129 12:08:05.690116 2438 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:08:05.691008 kubelet[2438]: W0129 12:08:05.690928 2438 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:05.691223 kubelet[2438]: E0129 12:08:05.691194 2438 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial 
tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:05.692611 kubelet[2438]: E0129 12:08:05.692574 2438 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:08:05.695234 kubelet[2438]: I0129 12:08:05.695202 2438 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:08:05.698334 kubelet[2438]: I0129 12:08:05.698289 2438 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:08:05.699259 kubelet[2438]: I0129 12:08:05.699221 2438 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:08:05.699259 kubelet[2438]: I0129 12:08:05.699249 2438 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:08:05.699259 kubelet[2438]: I0129 12:08:05.699268 2438 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:08:05.699481 kubelet[2438]: E0129 12:08:05.699305 2438 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:08:05.705020 kubelet[2438]: W0129 12:08:05.704967 2438 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:05.705020 kubelet[2438]: E0129 12:08:05.705018 2438 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:05.726560 kubelet[2438]: I0129 12:08:05.726339 2438 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:08:05.726560 kubelet[2438]: I0129 12:08:05.726352 2438 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:08:05.726560 kubelet[2438]: I0129 12:08:05.726366 2438 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:08:05.730731 kubelet[2438]: I0129 12:08:05.730703 2438 policy_none.go:49] "None policy: Start" Jan 29 12:08:05.731281 kubelet[2438]: I0129 12:08:05.731264 2438 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:08:05.731350 kubelet[2438]: I0129 12:08:05.731285 2438 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:08:05.737798 kubelet[2438]: I0129 12:08:05.737773 2438 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:08:05.739134 kubelet[2438]: I0129 12:08:05.737933 2438 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:08:05.739134 kubelet[2438]: I0129 12:08:05.738028 2438 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:08:05.740276 kubelet[2438]: E0129 12:08:05.740249 2438 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-0-f-33fdf09c6c.novalocal\" not found" Jan 29 12:08:05.789403 kubelet[2438]: I0129 12:08:05.788888 2438 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.789403 kubelet[2438]: E0129 12:08:05.789340 2438 kubelet_node_status.go:96] "Unable to register node with API server" 
err="Post \"https://172.24.4.127:6443/api/v1/nodes\": dial tcp 172.24.4.127:6443: connect: connection refused" node="ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.799939 kubelet[2438]: I0129 12:08:05.799878 2438 topology_manager.go:215] "Topology Admit Handler" podUID="34152a5fd5d9800d7b7111f9b2ff99d2" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.802130 kubelet[2438]: I0129 12:08:05.802083 2438 topology_manager.go:215] "Topology Admit Handler" podUID="64f03a0a605046374c28985008559465" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.806782 kubelet[2438]: I0129 12:08:05.806583 2438 topology_manager.go:215] "Topology Admit Handler" podUID="e3a922e66af8968808bb46ec41e3c0d8" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.890827 kubelet[2438]: E0129 12:08:05.890420 2438 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-f-33fdf09c6c.novalocal?timeout=10s\": dial tcp 172.24.4.127:6443: connect: connection refused" interval="400ms" Jan 29 12:08:05.992412 kubelet[2438]: I0129 12:08:05.991526 2438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64f03a0a605046374c28985008559465-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"64f03a0a605046374c28985008559465\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.992412 kubelet[2438]: I0129 12:08:05.991616 2438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34152a5fd5d9800d7b7111f9b2ff99d2-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"34152a5fd5d9800d7b7111f9b2ff99d2\") " pod="kube-system/kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.992412 kubelet[2438]: I0129 12:08:05.991668 2438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34152a5fd5d9800d7b7111f9b2ff99d2-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"34152a5fd5d9800d7b7111f9b2ff99d2\") " pod="kube-system/kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.992412 kubelet[2438]: I0129 12:08:05.991713 2438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64f03a0a605046374c28985008559465-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"64f03a0a605046374c28985008559465\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.993045 kubelet[2438]: I0129 12:08:05.991761 2438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/64f03a0a605046374c28985008559465-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"64f03a0a605046374c28985008559465\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.993045 kubelet[2438]: I0129 12:08:05.991806 2438 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34152a5fd5d9800d7b7111f9b2ff99d2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"34152a5fd5d9800d7b7111f9b2ff99d2\") " pod="kube-system/kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.993045 kubelet[2438]: I0129 12:08:05.991886 2438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64f03a0a605046374c28985008559465-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"64f03a0a605046374c28985008559465\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.993045 kubelet[2438]: I0129 12:08:05.991934 2438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/64f03a0a605046374c28985008559465-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"64f03a0a605046374c28985008559465\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.993476 kubelet[2438]: I0129 12:08:05.991982 2438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3a922e66af8968808bb46ec41e3c0d8-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"e3a922e66af8968808bb46ec41e3c0d8\") " pod="kube-system/kube-scheduler-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.994431 kubelet[2438]: I0129 12:08:05.993944 2438 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:05.994953 kubelet[2438]: E0129 12:08:05.994819 2438 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.127:6443/api/v1/nodes\": dial tcp 172.24.4.127:6443: connect: connection refused" node="ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:06.116866 containerd[1609]: time="2025-01-29T12:08:06.116745546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal,Uid:34152a5fd5d9800d7b7111f9b2ff99d2,Namespace:kube-system,Attempt:0,}" Jan 29 12:08:06.123863 containerd[1609]: time="2025-01-29T12:08:06.123762815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal,Uid:64f03a0a605046374c28985008559465,Namespace:kube-system,Attempt:0,}" Jan 29 12:08:06.125278 containerd[1609]: time="2025-01-29T12:08:06.124950528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-f-33fdf09c6c.novalocal,Uid:e3a922e66af8968808bb46ec41e3c0d8,Namespace:kube-system,Attempt:0,}" Jan 29 12:08:06.291291 kubelet[2438]: E0129 12:08:06.291191 2438 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-f-33fdf09c6c.novalocal?timeout=10s\": dial tcp 172.24.4.127:6443: connect: connection refused" interval="800ms" Jan 29 12:08:06.398377 kubelet[2438]: I0129 12:08:06.398250 2438 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:06.399085 kubelet[2438]: E0129 12:08:06.398827 
2438 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.127:6443/api/v1/nodes\": dial tcp 172.24.4.127:6443: connect: connection refused" node="ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:06.708530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1454391277.mount: Deactivated successfully. Jan 29 12:08:06.719064 containerd[1609]: time="2025-01-29T12:08:06.719002954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:08:06.726192 containerd[1609]: time="2025-01-29T12:08:06.726045160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 12:08:06.727905 containerd[1609]: time="2025-01-29T12:08:06.727531063Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:08:06.729881 containerd[1609]: time="2025-01-29T12:08:06.729710560Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:08:06.736214 containerd[1609]: time="2025-01-29T12:08:06.736012244Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:08:06.737429 containerd[1609]: time="2025-01-29T12:08:06.737298171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:08:06.738171 containerd[1609]: time="2025-01-29T12:08:06.737903909Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:08:06.742531 kubelet[2438]: W0129 12:08:06.742442 2438 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:06.742531 kubelet[2438]: E0129 12:08:06.742528 2438 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:06.749592 containerd[1609]: time="2025-01-29T12:08:06.749454951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:08:06.751826 containerd[1609]: time="2025-01-29T12:08:06.751350544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.370567ms" Jan 29 12:08:06.757676 containerd[1609]: time="2025-01-29T12:08:06.757572488Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 633.570894ms" Jan 29 12:08:06.759234 containerd[1609]: time="2025-01-29T12:08:06.759146447Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.017133ms" Jan 29 12:08:06.932997 kubelet[2438]: W0129 12:08:06.932919 2438 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:06.932997 kubelet[2438]: E0129 12:08:06.932987 2438 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:06.935010 containerd[1609]: time="2025-01-29T12:08:06.928608529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:08:06.935010 containerd[1609]: time="2025-01-29T12:08:06.932101615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:08:06.935010 containerd[1609]: time="2025-01-29T12:08:06.932136872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:08:06.935010 containerd[1609]: time="2025-01-29T12:08:06.932316449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:08:06.940552 containerd[1609]: time="2025-01-29T12:08:06.940080763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:08:06.940552 containerd[1609]: time="2025-01-29T12:08:06.940189928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:08:06.940552 containerd[1609]: time="2025-01-29T12:08:06.940227819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:08:06.940552 containerd[1609]: time="2025-01-29T12:08:06.940396817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:08:06.944444 containerd[1609]: time="2025-01-29T12:08:06.943731876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:08:06.944444 containerd[1609]: time="2025-01-29T12:08:06.943795876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:08:06.944444 containerd[1609]: time="2025-01-29T12:08:06.943814481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:08:06.944444 containerd[1609]: time="2025-01-29T12:08:06.944033943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:08:06.978723 kubelet[2438]: W0129 12:08:06.976921 2438 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.127:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:06.978723 kubelet[2438]: E0129 12:08:06.977899 2438 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.127:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:07.026950 containerd[1609]: time="2025-01-29T12:08:07.026913858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal,Uid:34152a5fd5d9800d7b7111f9b2ff99d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdf6466261affeaabf34a0298f9cc41222a6b46b06133b92e59b839c0f3b89d5\"" Jan 29 12:08:07.034130 containerd[1609]: time="2025-01-29T12:08:07.034085967Z" level=info msg="CreateContainer within sandbox \"cdf6466261affeaabf34a0298f9cc41222a6b46b06133b92e59b839c0f3b89d5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:08:07.044160 kubelet[2438]: W0129 12:08:07.044038 2438 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-f-33fdf09c6c.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:07.044392 kubelet[2438]: E0129 12:08:07.044338 2438 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-f-33fdf09c6c.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jan 29 12:08:07.054768 containerd[1609]: time="2025-01-29T12:08:07.054543093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-f-33fdf09c6c.novalocal,Uid:e3a922e66af8968808bb46ec41e3c0d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b8e5c95e4f52d18b34ac8defee23f74c74f5879b80d33753012cd6926788654\"" Jan 29 12:08:07.059414 containerd[1609]: time="2025-01-29T12:08:07.059360887Z" level=info msg="CreateContainer within sandbox \"9b8e5c95e4f52d18b34ac8defee23f74c74f5879b80d33753012cd6926788654\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:08:07.060860 containerd[1609]: time="2025-01-29T12:08:07.060782820Z" level=info msg="CreateContainer within sandbox \"cdf6466261affeaabf34a0298f9cc41222a6b46b06133b92e59b839c0f3b89d5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"569a4489e5175f7af69f578d3a24dd0dd0e29a5db045519c2c3b94d2c7ada5c6\"" Jan 29 12:08:07.062855 containerd[1609]: time="2025-01-29T12:08:07.061679605Z" level=info msg="StartContainer for \"569a4489e5175f7af69f578d3a24dd0dd0e29a5db045519c2c3b94d2c7ada5c6\"" Jan 29 12:08:07.067000 containerd[1609]: time="2025-01-29T12:08:07.066968575Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal,Uid:64f03a0a605046374c28985008559465,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4964d982198084db456c71aa7735eb7d8db76537039210dd097203e39aefca8\"" Jan 29 12:08:07.070742 containerd[1609]: time="2025-01-29T12:08:07.070716699Z" level=info msg="CreateContainer within sandbox \"b4964d982198084db456c71aa7735eb7d8db76537039210dd097203e39aefca8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:08:07.090090 containerd[1609]: time="2025-01-29T12:08:07.090053039Z" level=info msg="CreateContainer within sandbox \"9b8e5c95e4f52d18b34ac8defee23f74c74f5879b80d33753012cd6926788654\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3f3701962fc5f8417a0c26cf3bcb42e9033ebd80f36e6bfb6cdcd188666e0509\"" Jan 29 12:08:07.091497 containerd[1609]: time="2025-01-29T12:08:07.091077184Z" level=info msg="StartContainer for \"3f3701962fc5f8417a0c26cf3bcb42e9033ebd80f36e6bfb6cdcd188666e0509\"" Jan 29 12:08:07.092797 kubelet[2438]: E0129 12:08:07.092609 2438 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-f-33fdf09c6c.novalocal?timeout=10s\": dial tcp 172.24.4.127:6443: connect: connection refused" interval="1.6s" Jan 29 12:08:07.110716 containerd[1609]: time="2025-01-29T12:08:07.110663343Z" level=info msg="CreateContainer within sandbox \"b4964d982198084db456c71aa7735eb7d8db76537039210dd097203e39aefca8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1578652db09381ff1941adcb2ee22fd99c2fde9e649c967482f9a7a8f572cb5a\"" Jan 29 12:08:07.112150 containerd[1609]: time="2025-01-29T12:08:07.112132043Z" level=info msg="StartContainer for \"1578652db09381ff1941adcb2ee22fd99c2fde9e649c967482f9a7a8f572cb5a\"" Jan 29 12:08:07.163259 containerd[1609]: time="2025-01-29T12:08:07.163108125Z" level=info msg="StartContainer for \"569a4489e5175f7af69f578d3a24dd0dd0e29a5db045519c2c3b94d2c7ada5c6\" returns successfully" Jan 29 12:08:07.203241 kubelet[2438]: I0129 12:08:07.203216 2438 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:07.204990 kubelet[2438]: E0129 12:08:07.204588 2438 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.127:6443/api/v1/nodes\": dial tcp 172.24.4.127:6443: connect: connection refused" node="ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:07.217651 containerd[1609]: time="2025-01-29T12:08:07.217589525Z" level=info msg="StartContainer for \"3f3701962fc5f8417a0c26cf3bcb42e9033ebd80f36e6bfb6cdcd188666e0509\" returns successfully" Jan 29 12:08:07.255032 containerd[1609]: time="2025-01-29T12:08:07.254908180Z" level=info msg="StartContainer for \"1578652db09381ff1941adcb2ee22fd99c2fde9e649c967482f9a7a8f572cb5a\" returns successfully" Jan 29 12:08:08.806821 kubelet[2438]: I0129 12:08:08.806789 2438 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:09.226577 kubelet[2438]: E0129 12:08:09.226465 2438 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-0-f-33fdf09c6c.novalocal\" not found" node="ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:09.299848 kubelet[2438]: E0129 12:08:09.298738 2438 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" 
not found" event="&Event{ObjectMeta:{ci-4152-2-0-f-33fdf09c6c.novalocal.181f287a011d6649 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-f-33fdf09c6c.novalocal,UID:ci-4152-2-0-f-33fdf09c6c.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-f-33fdf09c6c.novalocal,},FirstTimestamp:2025-01-29 12:08:05.658125897 +0000 UTC m=+0.991981155,LastTimestamp:2025-01-29 12:08:05.658125897 +0000 UTC m=+0.991981155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-f-33fdf09c6c.novalocal,}" Jan 29 12:08:09.323077 kubelet[2438]: I0129 12:08:09.323032 2438 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:09.355174 kubelet[2438]: E0129 12:08:09.355058 2438 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152-2-0-f-33fdf09c6c.novalocal.181f287a032aae4c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-f-33fdf09c6c.novalocal,UID:ci-4152-2-0-f-33fdf09c6c.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-f-33fdf09c6c.novalocal,},FirstTimestamp:2025-01-29 12:08:05.692550732 +0000 UTC m=+1.026406010,LastTimestamp:2025-01-29 12:08:05.692550732 +0000 UTC m=+1.026406010,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-f-33fdf09c6c.novalocal,}" Jan 29 12:08:09.653332 kubelet[2438]: I0129 12:08:09.653245 2438 apiserver.go:52] "Watching apiserver" Jan 29 12:08:09.689815 kubelet[2438]: I0129 12:08:09.689771 2438 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:08:09.751267 kubelet[2438]: E0129 12:08:09.751193 2438 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:11.847754 systemd[1]: Reloading requested from client PID 2726 ('systemctl') (unit session-7.scope)... Jan 29 12:08:11.847791 systemd[1]: Reloading... Jan 29 12:08:11.954951 zram_generator::config[2765]: No configuration found. Jan 29 12:08:12.101336 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:08:12.186181 systemd[1]: Reloading finished in 337 ms. Jan 29 12:08:12.220672 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:08:12.231134 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:08:12.231526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:08:12.238116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:08:12.492182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:08:12.508528 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:08:12.646105 kubelet[2839]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:08:12.647298 kubelet[2839]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:08:12.647298 kubelet[2839]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:08:12.647298 kubelet[2839]: I0129 12:08:12.646786 2839 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:08:12.652651 kubelet[2839]: I0129 12:08:12.652622 2839 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:08:12.652651 kubelet[2839]: I0129 12:08:12.652643 2839 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:08:12.652856 kubelet[2839]: I0129 12:08:12.652821 2839 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:08:12.654258 kubelet[2839]: I0129 12:08:12.654231 2839 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:08:12.655490 kubelet[2839]: I0129 12:08:12.655309 2839 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:08:12.663666 kubelet[2839]: I0129 12:08:12.663643 2839 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:08:12.664081 kubelet[2839]: I0129 12:08:12.664053 2839 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:08:12.664249 kubelet[2839]: I0129 12:08:12.664085 2839 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-f-33fdf09c6c.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:08:12.664352 kubelet[2839]: I0129 12:08:12.664260 2839 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:08:12.664352 kubelet[2839]: I0129 12:08:12.664272 2839 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:08:12.664352 kubelet[2839]: I0129 12:08:12.664311 2839 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:08:12.664557 kubelet[2839]: I0129 12:08:12.664393 2839 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:08:12.664557 kubelet[2839]: I0129 12:08:12.664406 2839 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:08:12.664557 kubelet[2839]: I0129 12:08:12.664426 2839 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:08:12.664557 kubelet[2839]: I0129 12:08:12.664437 2839 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:08:12.665949 kubelet[2839]: I0129 12:08:12.665931 2839 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 12:08:12.666107 kubelet[2839]: I0129 12:08:12.666070 2839 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:08:12.666467 kubelet[2839]: I0129 12:08:12.666451 2839 server.go:1264] "Started kubelet" Jan 29 12:08:12.670464 kubelet[2839]: I0129 12:08:12.670444 2839 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:08:12.675333 kubelet[2839]: I0129 12:08:12.675300 2839 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:08:12.676938 kubelet[2839]: I0129 12:08:12.676364 2839 server.go:455] 
"Adding debug handlers to kubelet server" Jan 29 12:08:12.677282 kubelet[2839]: I0129 12:08:12.677242 2839 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:08:12.677435 kubelet[2839]: I0129 12:08:12.677417 2839 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:08:12.679020 kubelet[2839]: I0129 12:08:12.678815 2839 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:08:12.681627 kubelet[2839]: I0129 12:08:12.681591 2839 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:08:12.681720 kubelet[2839]: I0129 12:08:12.681706 2839 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:08:12.683883 kubelet[2839]: I0129 12:08:12.683595 2839 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:08:12.684570 kubelet[2839]: I0129 12:08:12.684556 2839 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:08:12.684653 kubelet[2839]: I0129 12:08:12.684644 2839 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:08:12.684716 kubelet[2839]: I0129 12:08:12.684708 2839 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:08:12.684799 kubelet[2839]: E0129 12:08:12.684784 2839 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:08:12.701516 kubelet[2839]: I0129 12:08:12.701116 2839 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:08:12.701516 kubelet[2839]: I0129 12:08:12.701247 2839 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:08:12.702591 kubelet[2839]: E0129 12:08:12.702542 2839 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:08:12.704185 kubelet[2839]: I0129 12:08:12.704170 2839 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:08:12.755477 kubelet[2839]: I0129 12:08:12.755365 2839 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:08:12.755477 kubelet[2839]: I0129 12:08:12.755384 2839 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:08:12.755477 kubelet[2839]: I0129 12:08:12.755400 2839 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:08:12.756496 kubelet[2839]: I0129 12:08:12.756231 2839 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:08:12.756496 kubelet[2839]: I0129 12:08:12.756249 2839 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:08:12.756496 kubelet[2839]: I0129 12:08:12.756269 2839 policy_none.go:49] "None policy: Start" Jan 29 12:08:12.757566 kubelet[2839]: I0129 12:08:12.757519 2839 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:08:12.757566 kubelet[2839]: I0129 12:08:12.757558 2839 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:08:12.757703 kubelet[2839]: I0129 12:08:12.757688 2839 state_mem.go:75] "Updated machine memory state" Jan 29 12:08:12.760399 kubelet[2839]: I0129 12:08:12.759156 2839 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:08:12.760399 kubelet[2839]: I0129 12:08:12.759330 2839 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:08:12.760399 kubelet[2839]: I0129 12:08:12.759443 2839 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:08:12.782228 kubelet[2839]: I0129 12:08:12.782075 2839 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:12.784986 kubelet[2839]: I0129 12:08:12.784956 2839 topology_manager.go:215] "Topology Admit Handler" podUID="34152a5fd5d9800d7b7111f9b2ff99d2" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:12.785922 kubelet[2839]: I0129 12:08:12.785116 2839 topology_manager.go:215] "Topology Admit Handler" podUID="64f03a0a605046374c28985008559465" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:12.785922 kubelet[2839]: I0129 12:08:12.785918 2839 topology_manager.go:215] "Topology Admit Handler" podUID="e3a922e66af8968808bb46ec41e3c0d8" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:12.792435 kubelet[2839]: W0129 12:08:12.792394 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:08:12.796649 kubelet[2839]: W0129 12:08:12.796627 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:08:12.797175 kubelet[2839]: W0129 12:08:12.796935 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:08:12.799499 kubelet[2839]: I0129 12:08:12.799417 2839 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 
12:08:12.799499 kubelet[2839]: I0129 12:08:12.799504 2839 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:12.982404 kubelet[2839]: I0129 12:08:12.982191 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34152a5fd5d9800d7b7111f9b2ff99d2-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"34152a5fd5d9800d7b7111f9b2ff99d2\") " pod="kube-system/kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:12.982404 kubelet[2839]: I0129 12:08:12.982273 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34152a5fd5d9800d7b7111f9b2ff99d2-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"34152a5fd5d9800d7b7111f9b2ff99d2\") " pod="kube-system/kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:12.982404 kubelet[2839]: I0129 12:08:12.982331 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64f03a0a605046374c28985008559465-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"64f03a0a605046374c28985008559465\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:12.982404 kubelet[2839]: I0129 12:08:12.982384 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/64f03a0a605046374c28985008559465-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"64f03a0a605046374c28985008559465\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:12.982404 kubelet[2839]: I0129 12:08:12.982430 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64f03a0a605046374c28985008559465-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"64f03a0a605046374c28985008559465\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:12.983051 kubelet[2839]: I0129 12:08:12.982474 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3a922e66af8968808bb46ec41e3c0d8-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"e3a922e66af8968808bb46ec41e3c0d8\") " pod="kube-system/kube-scheduler-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:12.983051 kubelet[2839]: I0129 12:08:12.982520 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34152a5fd5d9800d7b7111f9b2ff99d2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"34152a5fd5d9800d7b7111f9b2ff99d2\") " pod="kube-system/kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:12.983051 kubelet[2839]: I0129 12:08:12.982627 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/64f03a0a605046374c28985008559465-kubeconfig\") pod 
\"kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"64f03a0a605046374c28985008559465\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:12.983051 kubelet[2839]: I0129 12:08:12.982673 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64f03a0a605046374c28985008559465-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal\" (UID: \"64f03a0a605046374c28985008559465\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:13.666143 kubelet[2839]: I0129 12:08:13.665215 2839 apiserver.go:52] "Watching apiserver" Jan 29 12:08:13.682646 kubelet[2839]: I0129 12:08:13.682449 2839 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:08:13.757581 kubelet[2839]: W0129 12:08:13.754433 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:08:13.757581 kubelet[2839]: E0129 12:08:13.754563 2839 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal" Jan 29 12:08:13.805864 kubelet[2839]: I0129 12:08:13.805793 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-0-f-33fdf09c6c.novalocal" podStartSLOduration=1.805776206 podStartE2EDuration="1.805776206s" podCreationTimestamp="2025-01-29 12:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:08:13.795283069 +0000 UTC m=+1.277970901" watchObservedRunningTime="2025-01-29 12:08:13.805776206 +0000 UTC m=+1.288463968" Jan 29 12:08:13.818457 kubelet[2839]: I0129 12:08:13.818411 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-0-f-33fdf09c6c.novalocal" podStartSLOduration=1.818375879 podStartE2EDuration="1.818375879s" podCreationTimestamp="2025-01-29 12:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:08:13.806474648 +0000 UTC m=+1.289162400" watchObservedRunningTime="2025-01-29 12:08:13.818375879 +0000 UTC m=+1.301063641" Jan 29 12:08:13.828432 kubelet[2839]: I0129 12:08:13.828313 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-0-f-33fdf09c6c.novalocal" podStartSLOduration=1.8282955699999999 podStartE2EDuration="1.82829557s" podCreationTimestamp="2025-01-29 12:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:08:13.818771763 +0000 UTC m=+1.301459525" watchObservedRunningTime="2025-01-29 12:08:13.82829557 +0000 UTC m=+1.310983322" Jan 29 12:08:14.493319 sudo[1818]: pam_unix(sudo:session): session closed for user root Jan 29 12:08:14.698649 sshd[1817]: Connection closed by 172.24.4.1 port 53648 Jan 29 12:08:14.698868 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Jan 29 12:08:14.705818 systemd[1]: sshd@4-172.24.4.127:22-172.24.4.1:53648.service: Deactivated 
successfully. Jan 29 12:08:14.717291 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:08:14.717638 systemd-logind[1587]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:08:14.721897 systemd-logind[1587]: Removed session 7. Jan 29 12:08:27.296806 kubelet[2839]: I0129 12:08:27.296629 2839 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:08:27.298675 kubelet[2839]: I0129 12:08:27.297906 2839 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:08:27.298798 containerd[1609]: time="2025-01-29T12:08:27.297348447Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:08:28.087102 kubelet[2839]: I0129 12:08:28.086909 2839 topology_manager.go:215] "Topology Admit Handler" podUID="38709162-ddcc-443c-bf72-f94d3681588d" podNamespace="kube-system" podName="kube-proxy-cjmdq" Jan 29 12:08:28.110634 kubelet[2839]: I0129 12:08:28.110542 2839 topology_manager.go:215] "Topology Admit Handler" podUID="9ea0546a-f6ce-46d8-9c76-c48d912dd82e" podNamespace="kube-flannel" podName="kube-flannel-ds-48h8r" Jan 29 12:08:28.284603 kubelet[2839]: I0129 12:08:28.284359 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38709162-ddcc-443c-bf72-f94d3681588d-lib-modules\") pod \"kube-proxy-cjmdq\" (UID: \"38709162-ddcc-443c-bf72-f94d3681588d\") " pod="kube-system/kube-proxy-cjmdq" Jan 29 12:08:28.284603 kubelet[2839]: I0129 12:08:28.284437 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/9ea0546a-f6ce-46d8-9c76-c48d912dd82e-flannel-cfg\") pod \"kube-flannel-ds-48h8r\" (UID: \"9ea0546a-f6ce-46d8-9c76-c48d912dd82e\") " pod="kube-flannel/kube-flannel-ds-48h8r" Jan 29 12:08:28.284603 kubelet[2839]: I0129 12:08:28.284492 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/38709162-ddcc-443c-bf72-f94d3681588d-kube-proxy\") pod \"kube-proxy-cjmdq\" (UID: \"38709162-ddcc-443c-bf72-f94d3681588d\") " pod="kube-system/kube-proxy-cjmdq" Jan 29 12:08:28.285204 kubelet[2839]: I0129 12:08:28.284536 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9ea0546a-f6ce-46d8-9c76-c48d912dd82e-run\") pod \"kube-flannel-ds-48h8r\" (UID: \"9ea0546a-f6ce-46d8-9c76-c48d912dd82e\") " pod="kube-flannel/kube-flannel-ds-48h8r" Jan 29 12:08:28.285805 kubelet[2839]: I0129 12:08:28.285288 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ea0546a-f6ce-46d8-9c76-c48d912dd82e-xtables-lock\") pod \"kube-flannel-ds-48h8r\" (UID: \"9ea0546a-f6ce-46d8-9c76-c48d912dd82e\") " pod="kube-flannel/kube-flannel-ds-48h8r" Jan 29 12:08:28.285805 kubelet[2839]: I0129 12:08:28.285360 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38709162-ddcc-443c-bf72-f94d3681588d-xtables-lock\") pod \"kube-proxy-cjmdq\" (UID: \"38709162-ddcc-443c-bf72-f94d3681588d\") " pod="kube-system/kube-proxy-cjmdq" Jan 29 12:08:28.285805 kubelet[2839]: I0129 12:08:28.285406 2839 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/9ea0546a-f6ce-46d8-9c76-c48d912dd82e-cni\") pod \"kube-flannel-ds-48h8r\" (UID: \"9ea0546a-f6ce-46d8-9c76-c48d912dd82e\") " pod="kube-flannel/kube-flannel-ds-48h8r" Jan 29 12:08:28.285805 kubelet[2839]: I0129 12:08:28.285451 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk6wc\" (UniqueName: \"kubernetes.io/projected/9ea0546a-f6ce-46d8-9c76-c48d912dd82e-kube-api-access-rk6wc\") pod \"kube-flannel-ds-48h8r\" (UID: \"9ea0546a-f6ce-46d8-9c76-c48d912dd82e\") " pod="kube-flannel/kube-flannel-ds-48h8r" Jan 29 12:08:28.285805 kubelet[2839]: I0129 12:08:28.285579 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njwv4\" (UniqueName: \"kubernetes.io/projected/38709162-ddcc-443c-bf72-f94d3681588d-kube-api-access-njwv4\") pod \"kube-proxy-cjmdq\" (UID: \"38709162-ddcc-443c-bf72-f94d3681588d\") " pod="kube-system/kube-proxy-cjmdq" Jan 29 12:08:28.286189 kubelet[2839]: I0129 12:08:28.285686 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/9ea0546a-f6ce-46d8-9c76-c48d912dd82e-cni-plugin\") pod \"kube-flannel-ds-48h8r\" (UID: \"9ea0546a-f6ce-46d8-9c76-c48d912dd82e\") " pod="kube-flannel/kube-flannel-ds-48h8r" Jan 29 12:08:28.418169 containerd[1609]: time="2025-01-29T12:08:28.416235516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cjmdq,Uid:38709162-ddcc-443c-bf72-f94d3681588d,Namespace:kube-system,Attempt:0,}" Jan 29 12:08:28.481716 containerd[1609]: time="2025-01-29T12:08:28.481602356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:08:28.481716 containerd[1609]: time="2025-01-29T12:08:28.481689650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:08:28.482006 containerd[1609]: time="2025-01-29T12:08:28.481715869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:08:28.482481 containerd[1609]: time="2025-01-29T12:08:28.482432805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:08:28.531903 containerd[1609]: time="2025-01-29T12:08:28.531802857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cjmdq,Uid:38709162-ddcc-443c-bf72-f94d3681588d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fda0363960019cbb7db067823511cd1980edd59006127d502dbb918946f66867\"" Jan 29 12:08:28.536966 containerd[1609]: time="2025-01-29T12:08:28.536911203Z" level=info msg="CreateContainer within sandbox \"fda0363960019cbb7db067823511cd1980edd59006127d502dbb918946f66867\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:08:28.559246 containerd[1609]: time="2025-01-29T12:08:28.559169891Z" level=info msg="CreateContainer within sandbox \"fda0363960019cbb7db067823511cd1980edd59006127d502dbb918946f66867\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"051e2f72df4b42a70dca378f2396b4c0ad67ed7db93d9f00d22858ca3dc129c0\"" Jan 29 12:08:28.561940 containerd[1609]: time="2025-01-29T12:08:28.560650319Z" level=info msg="StartContainer for \"051e2f72df4b42a70dca378f2396b4c0ad67ed7db93d9f00d22858ca3dc129c0\"" Jan 29 12:08:28.647530 containerd[1609]: time="2025-01-29T12:08:28.647114097Z" level=info msg="StartContainer for \"051e2f72df4b42a70dca378f2396b4c0ad67ed7db93d9f00d22858ca3dc129c0\" returns successfully" Jan 29 12:08:28.728227 containerd[1609]: time="2025-01-29T12:08:28.728114995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-48h8r,Uid:9ea0546a-f6ce-46d8-9c76-c48d912dd82e,Namespace:kube-flannel,Attempt:0,}" Jan 29 12:08:28.764924 containerd[1609]: time="2025-01-29T12:08:28.764400865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:08:28.765481 containerd[1609]: time="2025-01-29T12:08:28.765070021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:08:28.765481 containerd[1609]: time="2025-01-29T12:08:28.765097012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:08:28.765481 containerd[1609]: time="2025-01-29T12:08:28.765196298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:08:28.836111 containerd[1609]: time="2025-01-29T12:08:28.836048621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-48h8r,Uid:9ea0546a-f6ce-46d8-9c76-c48d912dd82e,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"6f2dd95df20e90a2f93d2e3b830ab6d59ea8313b41edc760ac2fba1d3e65f421\"" Jan 29 12:08:28.839189 containerd[1609]: time="2025-01-29T12:08:28.839136446Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 29 12:08:29.420131 systemd[1]: run-containerd-runc-k8s.io-fda0363960019cbb7db067823511cd1980edd59006127d502dbb918946f66867-runc.l7ue3U.mount: Deactivated successfully. Jan 29 12:08:31.513947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812941576.mount: Deactivated successfully. 
Jan 29 12:08:31.569893 containerd[1609]: time="2025-01-29T12:08:31.569798742Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:31.571438 containerd[1609]: time="2025-01-29T12:08:31.571386831Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937"
Jan 29 12:08:31.572663 containerd[1609]: time="2025-01-29T12:08:31.572611579Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:31.576655 containerd[1609]: time="2025-01-29T12:08:31.576601846Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:31.579256 containerd[1609]: time="2025-01-29T12:08:31.579233373Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.739934963s"
Jan 29 12:08:31.579256 containerd[1609]: time="2025-01-29T12:08:31.579284559Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Jan 29 12:08:31.582520 containerd[1609]: time="2025-01-29T12:08:31.582500253Z" level=info msg="CreateContainer within sandbox \"6f2dd95df20e90a2f93d2e3b830ab6d59ea8313b41edc760ac2fba1d3e65f421\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 29 12:08:31.606220 containerd[1609]: time="2025-01-29T12:08:31.606109601Z" level=info msg="CreateContainer within sandbox \"6f2dd95df20e90a2f93d2e3b830ab6d59ea8313b41edc760ac2fba1d3e65f421\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"d6c741c05e6b73753605a71824da76867f0e21c6f7ab76556694380a9717dc6f\""
Jan 29 12:08:31.608103 containerd[1609]: time="2025-01-29T12:08:31.607433345Z" level=info msg="StartContainer for \"d6c741c05e6b73753605a71824da76867f0e21c6f7ab76556694380a9717dc6f\""
Jan 29 12:08:31.688382 containerd[1609]: time="2025-01-29T12:08:31.688263339Z" level=info msg="StartContainer for \"d6c741c05e6b73753605a71824da76867f0e21c6f7ab76556694380a9717dc6f\" returns successfully"
Jan 29 12:08:31.792890 containerd[1609]: time="2025-01-29T12:08:31.791580554Z" level=info msg="shim disconnected" id=d6c741c05e6b73753605a71824da76867f0e21c6f7ab76556694380a9717dc6f namespace=k8s.io
Jan 29 12:08:31.792890 containerd[1609]: time="2025-01-29T12:08:31.791658149Z" level=warning msg="cleaning up after shim disconnected" id=d6c741c05e6b73753605a71824da76867f0e21c6f7ab76556694380a9717dc6f namespace=k8s.io
Jan 29 12:08:31.792890 containerd[1609]: time="2025-01-29T12:08:31.791669130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:08:31.804980 kubelet[2839]: I0129 12:08:31.804582 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cjmdq" podStartSLOduration=3.8045608189999998 podStartE2EDuration="3.804560819s" podCreationTimestamp="2025-01-29 12:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:08:28.786021676 +0000 UTC m=+16.268709448" watchObservedRunningTime="2025-01-29 12:08:31.804560819 +0000 UTC m=+19.287248571"
Jan 29 12:08:32.372507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6c741c05e6b73753605a71824da76867f0e21c6f7ab76556694380a9717dc6f-rootfs.mount: Deactivated successfully.
Jan 29 12:08:32.791905 containerd[1609]: time="2025-01-29T12:08:32.791170673Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 29 12:08:35.175584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount796128648.mount: Deactivated successfully.
Jan 29 12:08:36.159688 containerd[1609]: time="2025-01-29T12:08:36.157231358Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:36.170633 containerd[1609]: time="2025-01-29T12:08:36.161705452Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866357"
Jan 29 12:08:36.170633 containerd[1609]: time="2025-01-29T12:08:36.168927822Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:36.179613 containerd[1609]: time="2025-01-29T12:08:36.179170588Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:36.182644 containerd[1609]: time="2025-01-29T12:08:36.182568543Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.391345792s"
Jan 29 12:08:36.182644 containerd[1609]: time="2025-01-29T12:08:36.182639737Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Jan 29 12:08:36.189934 containerd[1609]: time="2025-01-29T12:08:36.189815689Z" level=info msg="CreateContainer within sandbox \"6f2dd95df20e90a2f93d2e3b830ab6d59ea8313b41edc760ac2fba1d3e65f421\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 12:08:36.230887 containerd[1609]: time="2025-01-29T12:08:36.227547268Z" level=info msg="CreateContainer within sandbox \"6f2dd95df20e90a2f93d2e3b830ab6d59ea8313b41edc760ac2fba1d3e65f421\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a499c12dca3a884b25061abe25d9f5f32e48ad84fc44d676a10f027acda8c714\""
Jan 29 12:08:36.231371 containerd[1609]: time="2025-01-29T12:08:36.231226430Z" level=info msg="StartContainer for \"a499c12dca3a884b25061abe25d9f5f32e48ad84fc44d676a10f027acda8c714\""
Jan 29 12:08:36.308644 containerd[1609]: time="2025-01-29T12:08:36.308601352Z" level=info msg="StartContainer for \"a499c12dca3a884b25061abe25d9f5f32e48ad84fc44d676a10f027acda8c714\" returns successfully"
Jan 29 12:08:36.332852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a499c12dca3a884b25061abe25d9f5f32e48ad84fc44d676a10f027acda8c714-rootfs.mount: Deactivated successfully.
Jan 29 12:08:36.366867 kubelet[2839]: I0129 12:08:36.366769 2839 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 29 12:08:36.665703 kubelet[2839]: I0129 12:08:36.592542 2839 topology_manager.go:215] "Topology Admit Handler" podUID="0a334140-b150-4d71-8c2d-908b2b66c709" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tc4lh"
Jan 29 12:08:36.665703 kubelet[2839]: I0129 12:08:36.605276 2839 topology_manager.go:215] "Topology Admit Handler" podUID="cfbadf87-a842-4ecc-bd80-3e30827e7355" podNamespace="kube-system" podName="coredns-7db6d8ff4d-72vd5"
Jan 29 12:08:36.665703 kubelet[2839]: I0129 12:08:36.646146 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxgrb\" (UniqueName: \"kubernetes.io/projected/cfbadf87-a842-4ecc-bd80-3e30827e7355-kube-api-access-lxgrb\") pod \"coredns-7db6d8ff4d-72vd5\" (UID: \"cfbadf87-a842-4ecc-bd80-3e30827e7355\") " pod="kube-system/coredns-7db6d8ff4d-72vd5"
Jan 29 12:08:36.665703 kubelet[2839]: I0129 12:08:36.646187 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfbadf87-a842-4ecc-bd80-3e30827e7355-config-volume\") pod \"coredns-7db6d8ff4d-72vd5\" (UID: \"cfbadf87-a842-4ecc-bd80-3e30827e7355\") " pod="kube-system/coredns-7db6d8ff4d-72vd5"
Jan 29 12:08:36.665703 kubelet[2839]: I0129 12:08:36.646212 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jm99\" (UniqueName: \"kubernetes.io/projected/0a334140-b150-4d71-8c2d-908b2b66c709-kube-api-access-9jm99\") pod \"coredns-7db6d8ff4d-tc4lh\" (UID: \"0a334140-b150-4d71-8c2d-908b2b66c709\") " pod="kube-system/coredns-7db6d8ff4d-tc4lh"
Jan 29 12:08:36.665703 kubelet[2839]: I0129 12:08:36.646233 2839 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a334140-b150-4d71-8c2d-908b2b66c709-config-volume\") pod \"coredns-7db6d8ff4d-tc4lh\" (UID: \"0a334140-b150-4d71-8c2d-908b2b66c709\") " pod="kube-system/coredns-7db6d8ff4d-tc4lh"
Jan 29 12:08:36.693876 containerd[1609]: time="2025-01-29T12:08:36.693623679Z" level=info msg="shim disconnected" id=a499c12dca3a884b25061abe25d9f5f32e48ad84fc44d676a10f027acda8c714 namespace=k8s.io
Jan 29 12:08:36.694565 containerd[1609]: time="2025-01-29T12:08:36.693815780Z" level=warning msg="cleaning up after shim disconnected" id=a499c12dca3a884b25061abe25d9f5f32e48ad84fc44d676a10f027acda8c714 namespace=k8s.io
Jan 29 12:08:36.694565 containerd[1609]: time="2025-01-29T12:08:36.694272297Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:08:36.812890 containerd[1609]: time="2025-01-29T12:08:36.810692912Z" level=info msg="CreateContainer within sandbox \"6f2dd95df20e90a2f93d2e3b830ab6d59ea8313b41edc760ac2fba1d3e65f421\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 29 12:08:36.839946 containerd[1609]: time="2025-01-29T12:08:36.839868288Z" level=info msg="CreateContainer within sandbox \"6f2dd95df20e90a2f93d2e3b830ab6d59ea8313b41edc760ac2fba1d3e65f421\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"c36c2f1861aa030c1d47a7229a6a6d9c1575ec8c1ca3afe4c5bac65cb7734bc4\""
Jan 29 12:08:36.841003 containerd[1609]: time="2025-01-29T12:08:36.840881860Z" level=info msg="StartContainer for \"c36c2f1861aa030c1d47a7229a6a6d9c1575ec8c1ca3afe4c5bac65cb7734bc4\""
Jan 29 12:08:36.936738 containerd[1609]: time="2025-01-29T12:08:36.936502572Z" level=info msg="StartContainer for \"c36c2f1861aa030c1d47a7229a6a6d9c1575ec8c1ca3afe4c5bac65cb7734bc4\" returns successfully"
Jan 29 12:08:36.970063 containerd[1609]: time="2025-01-29T12:08:36.969649289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-72vd5,Uid:cfbadf87-a842-4ecc-bd80-3e30827e7355,Namespace:kube-system,Attempt:0,}"
Jan 29 12:08:36.970766 containerd[1609]: time="2025-01-29T12:08:36.970483184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tc4lh,Uid:0a334140-b150-4d71-8c2d-908b2b66c709,Namespace:kube-system,Attempt:0,}"
Jan 29 12:08:37.035616 containerd[1609]: time="2025-01-29T12:08:37.035525478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tc4lh,Uid:0a334140-b150-4d71-8c2d-908b2b66c709,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5578d36689cb141f1e37dd9e9f4f09ba8ffe5901cd6d69b447de89c79fbbf335\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 12:08:37.036091 kubelet[2839]: E0129 12:08:37.036014 2839 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5578d36689cb141f1e37dd9e9f4f09ba8ffe5901cd6d69b447de89c79fbbf335\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 12:08:37.036200 kubelet[2839]: E0129 12:08:37.036146 2839 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5578d36689cb141f1e37dd9e9f4f09ba8ffe5901cd6d69b447de89c79fbbf335\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-tc4lh"
Jan 29 12:08:37.036308 kubelet[2839]: E0129 12:08:37.036193 2839 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5578d36689cb141f1e37dd9e9f4f09ba8ffe5901cd6d69b447de89c79fbbf335\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-tc4lh"
Jan 29 12:08:37.036308 kubelet[2839]: E0129 12:08:37.036279 2839 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tc4lh_kube-system(0a334140-b150-4d71-8c2d-908b2b66c709)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tc4lh_kube-system(0a334140-b150-4d71-8c2d-908b2b66c709)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5578d36689cb141f1e37dd9e9f4f09ba8ffe5901cd6d69b447de89c79fbbf335\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-tc4lh" podUID="0a334140-b150-4d71-8c2d-908b2b66c709"
Jan 29 12:08:37.037141 containerd[1609]: time="2025-01-29T12:08:37.036941955Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-72vd5,Uid:cfbadf87-a842-4ecc-bd80-3e30827e7355,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"96e41fb6285a20a7a684a17c5020aace4f4a030634eec0ca15943b690f276e41\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 12:08:37.037587 kubelet[2839]: E0129 12:08:37.037349 2839 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e41fb6285a20a7a684a17c5020aace4f4a030634eec0ca15943b690f276e41\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 12:08:37.037587 kubelet[2839]: E0129 12:08:37.037423 2839 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e41fb6285a20a7a684a17c5020aace4f4a030634eec0ca15943b690f276e41\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-72vd5"
Jan 29 12:08:37.037587 kubelet[2839]: E0129 12:08:37.037448 2839 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e41fb6285a20a7a684a17c5020aace4f4a030634eec0ca15943b690f276e41\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-72vd5"
Jan 29 12:08:37.037587 kubelet[2839]: E0129 12:08:37.037506 2839 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-72vd5_kube-system(cfbadf87-a842-4ecc-bd80-3e30827e7355)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-72vd5_kube-system(cfbadf87-a842-4ecc-bd80-3e30827e7355)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96e41fb6285a20a7a684a17c5020aace4f4a030634eec0ca15943b690f276e41\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-72vd5" podUID="cfbadf87-a842-4ecc-bd80-3e30827e7355"
Jan 29 12:08:37.836038 kubelet[2839]: I0129 12:08:37.835932 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-48h8r" podStartSLOduration=2.487404036 podStartE2EDuration="9.835504352s" podCreationTimestamp="2025-01-29 12:08:28 +0000 UTC" firstStartedPulling="2025-01-29 12:08:28.837608639 +0000 UTC m=+16.320296391" lastFinishedPulling="2025-01-29 12:08:36.185708905 +0000 UTC m=+23.668396707" observedRunningTime="2025-01-29 12:08:37.834465984 +0000 UTC m=+25.317153807" watchObservedRunningTime="2025-01-29 12:08:37.835504352 +0000 UTC m=+25.318192174"
Jan 29 12:08:38.039119 systemd-networkd[1206]: flannel.1: Link UP
Jan 29 12:08:38.039137 systemd-networkd[1206]: flannel.1: Gained carrier
Jan 29 12:08:39.224204 systemd-networkd[1206]: flannel.1: Gained IPv6LL
Jan 29 12:08:47.687240 containerd[1609]: time="2025-01-29T12:08:47.686896705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-72vd5,Uid:cfbadf87-a842-4ecc-bd80-3e30827e7355,Namespace:kube-system,Attempt:0,}"
Jan 29 12:08:47.736717 systemd-networkd[1206]: cni0: Link UP
Jan 29 12:08:47.736737 systemd-networkd[1206]: cni0: Gained carrier
Jan 29 12:08:47.743635 systemd-networkd[1206]: cni0: Lost carrier
Jan 29 12:08:47.756540 systemd-networkd[1206]: vetha986f6db: Link UP
Jan 29 12:08:47.761293 kernel: cni0: port 1(vetha986f6db) entered blocking state
Jan 29 12:08:47.761481 kernel: cni0: port 1(vetha986f6db) entered disabled state
Jan 29 12:08:47.761979 kernel: vetha986f6db: entered allmulticast mode
Jan 29 12:08:47.768962 kernel: vetha986f6db: entered promiscuous mode
Jan 29 12:08:47.775452 kernel: cni0: port 1(vetha986f6db) entered blocking state
Jan 29 12:08:47.776515 kernel: cni0: port 1(vetha986f6db) entered forwarding state
Jan 29 12:08:47.776552 kernel: cni0: port 1(vetha986f6db) entered disabled state
Jan 29 12:08:47.783251 kernel: cni0: port 1(vetha986f6db) entered blocking state
Jan 29 12:08:47.783344 kernel: cni0: port 1(vetha986f6db) entered forwarding state
Jan 29 12:08:47.783820 systemd-networkd[1206]: vetha986f6db: Gained carrier
Jan 29 12:08:47.786138 systemd-networkd[1206]: cni0: Gained carrier
Jan 29 12:08:47.788861 containerd[1609]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Jan 29 12:08:47.788861 containerd[1609]: delegateAdd: netconf sent to delegate plugin:
Jan 29 12:08:47.811227 containerd[1609]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T12:08:47.811130331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:08:47.811227 containerd[1609]: time="2025-01-29T12:08:47.811195063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:08:47.811483 containerd[1609]: time="2025-01-29T12:08:47.811425816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:08:47.812189 containerd[1609]: time="2025-01-29T12:08:47.812134855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:08:47.873986 containerd[1609]: time="2025-01-29T12:08:47.873943856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-72vd5,Uid:cfbadf87-a842-4ecc-bd80-3e30827e7355,Namespace:kube-system,Attempt:0,} returns sandbox id \"61f10e069e41fbcefbea4cf549ac7780e31db30d3fbc9d853813f05e9f378d88\""
Jan 29 12:08:47.879072 containerd[1609]: time="2025-01-29T12:08:47.879038663Z" level=info msg="CreateContainer within sandbox \"61f10e069e41fbcefbea4cf549ac7780e31db30d3fbc9d853813f05e9f378d88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 12:08:47.902107 containerd[1609]: time="2025-01-29T12:08:47.902046572Z" level=info msg="CreateContainer within sandbox \"61f10e069e41fbcefbea4cf549ac7780e31db30d3fbc9d853813f05e9f378d88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f2699b46d96001917beb57a1b553add94581d4078e22f3ed6eb4c1bbd98ca3dc\""
Jan 29 12:08:47.903878 containerd[1609]: time="2025-01-29T12:08:47.903421751Z" level=info msg="StartContainer for \"f2699b46d96001917beb57a1b553add94581d4078e22f3ed6eb4c1bbd98ca3dc\""
Jan 29 12:08:47.962439 containerd[1609]: time="2025-01-29T12:08:47.962083468Z" level=info msg="StartContainer for \"f2699b46d96001917beb57a1b553add94581d4078e22f3ed6eb4c1bbd98ca3dc\" returns successfully"
Jan 29 12:08:48.952132 systemd-networkd[1206]: vetha986f6db: Gained IPv6LL
Jan 29 12:08:49.208125 systemd-networkd[1206]: cni0: Gained IPv6LL
Jan 29 12:08:49.278888 kubelet[2839]: I0129 12:08:49.275460 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-72vd5" podStartSLOduration=21.275426267 podStartE2EDuration="21.275426267s" podCreationTimestamp="2025-01-29 12:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:08:48.943926955 +0000 UTC m=+36.426614757" watchObservedRunningTime="2025-01-29 12:08:49.275426267 +0000 UTC m=+36.758114079"
Jan 29 12:08:51.689587 containerd[1609]: time="2025-01-29T12:08:51.687112062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tc4lh,Uid:0a334140-b150-4d71-8c2d-908b2b66c709,Namespace:kube-system,Attempt:0,}"
Jan 29 12:08:51.755077 kernel: cni0: port 2(veth64e9e36a) entered blocking state
Jan 29 12:08:51.755219 kernel: cni0: port 2(veth64e9e36a) entered disabled state
Jan 29 12:08:51.751563 systemd-networkd[1206]: veth64e9e36a: Link UP
Jan 29 12:08:51.758898 kernel: veth64e9e36a: entered allmulticast mode
Jan 29 12:08:51.761931 kernel: veth64e9e36a: entered promiscuous mode
Jan 29 12:08:51.774746 kernel: cni0: port 2(veth64e9e36a) entered blocking state
Jan 29 12:08:51.774895 kernel: cni0: port 2(veth64e9e36a) entered forwarding state
Jan 29 12:08:51.774416 systemd-networkd[1206]: veth64e9e36a: Gained carrier
Jan 29 12:08:51.782614 containerd[1609]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009c8e8), "name":"cbr0", "type":"bridge"}
Jan 29 12:08:51.782614 containerd[1609]: delegateAdd: netconf sent to delegate plugin:
Jan 29 12:08:51.802183 containerd[1609]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T12:08:51.802087294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:08:51.802337 containerd[1609]: time="2025-01-29T12:08:51.802221145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:08:51.802337 containerd[1609]: time="2025-01-29T12:08:51.802284454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:08:51.803385 containerd[1609]: time="2025-01-29T12:08:51.802504006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:08:51.825845 systemd[1]: run-containerd-runc-k8s.io-78a054a5654fd475433c61aa517e19d6e0a31838e0165dba6db994c5aa5f44b6-runc.UhNgNf.mount: Deactivated successfully.
Jan 29 12:08:51.864811 containerd[1609]: time="2025-01-29T12:08:51.864760403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tc4lh,Uid:0a334140-b150-4d71-8c2d-908b2b66c709,Namespace:kube-system,Attempt:0,} returns sandbox id \"78a054a5654fd475433c61aa517e19d6e0a31838e0165dba6db994c5aa5f44b6\""
Jan 29 12:08:51.868593 containerd[1609]: time="2025-01-29T12:08:51.868551843Z" level=info msg="CreateContainer within sandbox \"78a054a5654fd475433c61aa517e19d6e0a31838e0165dba6db994c5aa5f44b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 12:08:51.889634 containerd[1609]: time="2025-01-29T12:08:51.889583904Z" level=info msg="CreateContainer within sandbox \"78a054a5654fd475433c61aa517e19d6e0a31838e0165dba6db994c5aa5f44b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e7c3bcf204d38e5b84bc12ec645a7eb6959d50b706bf21f8dc82a89b8b9fb8a0\""
Jan 29 12:08:51.890367 containerd[1609]: time="2025-01-29T12:08:51.890336796Z" level=info msg="StartContainer for \"e7c3bcf204d38e5b84bc12ec645a7eb6959d50b706bf21f8dc82a89b8b9fb8a0\""
Jan 29 12:08:51.947292 containerd[1609]: time="2025-01-29T12:08:51.946398453Z" level=info msg="StartContainer for \"e7c3bcf204d38e5b84bc12ec645a7eb6959d50b706bf21f8dc82a89b8b9fb8a0\" returns successfully"
Jan 29 12:08:52.894901 kubelet[2839]: I0129 12:08:52.894728 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tc4lh" podStartSLOduration=24.894692038 podStartE2EDuration="24.894692038s" podCreationTimestamp="2025-01-29 12:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:08:52.893406517 +0000 UTC m=+40.376094319" watchObservedRunningTime="2025-01-29 12:08:52.894692038 +0000 UTC m=+40.377379840"
Jan 29 12:08:53.240074 systemd-networkd[1206]: veth64e9e36a: Gained IPv6LL
Jan 29 12:09:43.416552 systemd[1]: Started sshd@5-172.24.4.127:22-172.24.4.1:42522.service - OpenSSH per-connection server daemon (172.24.4.1:42522).
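[Editor's note] The two CreatePodSandbox failures above happen because the flannel CNI plugin reads /run/flannel/subnet.env, which the kube-flannel container only writes once it is running; after flannel.1 comes up, the retried sandboxes at 12:08:47 and 12:08:51 succeed, and the plugin expands the file into the bridge config printed in the "delegateAdd: netconf sent to delegate plugin" entries (subnet 192.168.0.0/24, route to 192.168.0.0/17, mtu 1450). The sketch below is illustrative only: the example file contents and the Python code are assumptions based on flannel's usual behavior and the values visible in this log, not something taken from the log itself.

```python
# Illustrative sketch (assumed, not from this log): roughly what the flannel CNI
# plugin's loadFlannelSubnetEnv step does with /run/flannel/subnet.env, and how
# those values map onto the bridge delegate config that containerd logs above.

EXAMPLE_SUBNET_ENV = """\
FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
"""

def parse_subnet_env(text: str) -> dict:
    # The file is plain KEY=VALUE lines written by the kube-flannel container.
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

def bridge_delegate_conf(env: dict) -> dict:
    # Build a netconf shaped like the JSON in the "delegateAdd" log entries:
    # host-local IPAM over the node subnet, a route to the cluster network,
    # and the MTU taken from subnet.env.
    return {
        "cniVersion": "0.3.1",
        "name": "cbr0",
        "type": "bridge",
        "mtu": int(env["FLANNEL_MTU"]),
        "hairpinMode": True,
        "isGateway": True,
        "isDefaultGateway": True,
        "ipMasq": env["FLANNEL_IPMASQ"].lower() == "true",
        "ipam": {
            "type": "host-local",
            "ranges": [[{"subnet": env["FLANNEL_SUBNET"]}]],
            "routes": [{"dst": env["FLANNEL_NETWORK"]}],
        },
    }

if __name__ == "__main__":
    print(bridge_delegate_conf(parse_subnet_env(EXAMPLE_SUBNET_ENV)))
```

Until that file exists, any pod that needs the cluster network fails with the "no such file or directory" error seen above and the kubelet simply retries, which is why the CoreDNS pods report long podStartSLOduration values but still come up cleanly.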
Jan 29 12:09:44.824499 sshd[3940]: Accepted publickey for core from 172.24.4.1 port 42522 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:09:44.827294 sshd-session[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:09:44.841057 systemd-logind[1587]: New session 8 of user core.
Jan 29 12:09:44.848355 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 12:09:45.563513 sshd[3958]: Connection closed by 172.24.4.1 port 42522
Jan 29 12:09:45.564403 sshd-session[3940]: pam_unix(sshd:session): session closed for user core
Jan 29 12:09:45.568821 systemd[1]: sshd@5-172.24.4.127:22-172.24.4.1:42522.service: Deactivated successfully.
Jan 29 12:09:45.574300 systemd-logind[1587]: Session 8 logged out. Waiting for processes to exit.
Jan 29 12:09:45.575438 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 12:09:45.578125 systemd-logind[1587]: Removed session 8.
Jan 29 12:09:50.575896 systemd[1]: Started sshd@6-172.24.4.127:22-172.24.4.1:34570.service - OpenSSH per-connection server daemon (172.24.4.1:34570).
Jan 29 12:09:51.982591 sshd[3991]: Accepted publickey for core from 172.24.4.1 port 34570 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:09:51.984533 sshd-session[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:09:51.995268 systemd-logind[1587]: New session 9 of user core.
Jan 29 12:09:52.012422 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 12:09:52.678109 sshd[3994]: Connection closed by 172.24.4.1 port 34570
Jan 29 12:09:52.678804 sshd-session[3991]: pam_unix(sshd:session): session closed for user core
Jan 29 12:09:52.685465 systemd[1]: sshd@6-172.24.4.127:22-172.24.4.1:34570.service: Deactivated successfully.
Jan 29 12:09:52.693131 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 12:09:52.694111 systemd-logind[1587]: Session 9 logged out. Waiting for processes to exit.
Jan 29 12:09:52.700217 systemd-logind[1587]: Removed session 9.
Jan 29 12:09:57.690317 systemd[1]: Started sshd@7-172.24.4.127:22-172.24.4.1:52712.service - OpenSSH per-connection server daemon (172.24.4.1:52712).
Jan 29 12:09:58.795171 sshd[4027]: Accepted publickey for core from 172.24.4.1 port 52712 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:09:58.798645 sshd-session[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:09:58.818619 systemd-logind[1587]: New session 10 of user core.
Jan 29 12:09:58.829280 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 12:09:59.542309 sshd[4053]: Connection closed by 172.24.4.1 port 52712
Jan 29 12:09:59.546131 sshd-session[4027]: pam_unix(sshd:session): session closed for user core
Jan 29 12:09:59.561092 systemd[1]: Started sshd@8-172.24.4.127:22-172.24.4.1:52720.service - OpenSSH per-connection server daemon (172.24.4.1:52720).
Jan 29 12:09:59.567261 systemd[1]: sshd@7-172.24.4.127:22-172.24.4.1:52712.service: Deactivated successfully.
Jan 29 12:09:59.583218 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 12:09:59.586435 systemd-logind[1587]: Session 10 logged out. Waiting for processes to exit.
Jan 29 12:09:59.590066 systemd-logind[1587]: Removed session 10.
Jan 29 12:10:00.910057 sshd[4061]: Accepted publickey for core from 172.24.4.1 port 52720 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:10:00.912940 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:10:00.925291 systemd-logind[1587]: New session 11 of user core.
Jan 29 12:10:00.931937 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 12:10:01.791825 sshd[4067]: Connection closed by 172.24.4.1 port 52720
Jan 29 12:10:01.791478 sshd-session[4061]: pam_unix(sshd:session): session closed for user core
Jan 29 12:10:01.807219 systemd[1]: Started sshd@9-172.24.4.127:22-172.24.4.1:52722.service - OpenSSH per-connection server daemon (172.24.4.1:52722).
Jan 29 12:10:01.809607 systemd[1]: sshd@8-172.24.4.127:22-172.24.4.1:52720.service: Deactivated successfully.
Jan 29 12:10:01.820146 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 12:10:01.823119 systemd-logind[1587]: Session 11 logged out. Waiting for processes to exit.
Jan 29 12:10:01.828486 systemd-logind[1587]: Removed session 11.
Jan 29 12:10:02.999001 sshd[4073]: Accepted publickey for core from 172.24.4.1 port 52722 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:10:03.002595 sshd-session[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:10:03.012982 systemd-logind[1587]: New session 12 of user core.
Jan 29 12:10:03.022533 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 12:10:03.788425 sshd[4079]: Connection closed by 172.24.4.1 port 52722
Jan 29 12:10:03.789472 sshd-session[4073]: pam_unix(sshd:session): session closed for user core
Jan 29 12:10:03.794729 systemd[1]: sshd@9-172.24.4.127:22-172.24.4.1:52722.service: Deactivated successfully.
Jan 29 12:10:03.802708 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 12:10:03.805929 systemd-logind[1587]: Session 12 logged out. Waiting for processes to exit.
Jan 29 12:10:03.808824 systemd-logind[1587]: Removed session 12.
Jan 29 12:10:08.802512 systemd[1]: Started sshd@10-172.24.4.127:22-172.24.4.1:38172.service - OpenSSH per-connection server daemon (172.24.4.1:38172).
Jan 29 12:10:09.997967 sshd[4117]: Accepted publickey for core from 172.24.4.1 port 38172 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:10:10.000935 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:10:10.011695 systemd-logind[1587]: New session 13 of user core.
Jan 29 12:10:10.019015 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 12:10:10.735318 sshd[4136]: Connection closed by 172.24.4.1 port 38172
Jan 29 12:10:10.737059 sshd-session[4117]: pam_unix(sshd:session): session closed for user core
Jan 29 12:10:10.746629 systemd[1]: Started sshd@11-172.24.4.127:22-172.24.4.1:38188.service - OpenSSH per-connection server daemon (172.24.4.1:38188).
Jan 29 12:10:10.747748 systemd[1]: sshd@10-172.24.4.127:22-172.24.4.1:38172.service: Deactivated successfully.
Jan 29 12:10:10.756370 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 12:10:10.758572 systemd-logind[1587]: Session 13 logged out. Waiting for processes to exit.
Jan 29 12:10:10.763586 systemd-logind[1587]: Removed session 13.
Jan 29 12:10:12.039814 sshd[4143]: Accepted publickey for core from 172.24.4.1 port 38188 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:10:12.041611 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:10:12.050738 systemd-logind[1587]: New session 14 of user core.
Jan 29 12:10:12.058539 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 12:10:12.785488 sshd[4149]: Connection closed by 172.24.4.1 port 38188
Jan 29 12:10:12.786333 sshd-session[4143]: pam_unix(sshd:session): session closed for user core
Jan 29 12:10:12.798441 systemd[1]: Started sshd@12-172.24.4.127:22-172.24.4.1:38204.service - OpenSSH per-connection server daemon (172.24.4.1:38204).
Jan 29 12:10:12.799540 systemd[1]: sshd@11-172.24.4.127:22-172.24.4.1:38188.service: Deactivated successfully.
Jan 29 12:10:12.811987 systemd-logind[1587]: Session 14 logged out. Waiting for processes to exit.
Jan 29 12:10:12.814293 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 12:10:12.819296 systemd-logind[1587]: Removed session 14.
Jan 29 12:10:14.097054 sshd[4157]: Accepted publickey for core from 172.24.4.1 port 38204 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:10:14.100482 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:10:14.115535 systemd-logind[1587]: New session 15 of user core.
Jan 29 12:10:14.123145 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 12:10:16.829929 sshd[4184]: Connection closed by 172.24.4.1 port 38204
Jan 29 12:10:16.830407 sshd-session[4157]: pam_unix(sshd:session): session closed for user core
Jan 29 12:10:16.839345 systemd[1]: Started sshd@13-172.24.4.127:22-172.24.4.1:51888.service - OpenSSH per-connection server daemon (172.24.4.1:51888).
Jan 29 12:10:16.841600 systemd[1]: sshd@12-172.24.4.127:22-172.24.4.1:38204.service: Deactivated successfully.
Jan 29 12:10:16.844673 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 12:10:16.846716 systemd-logind[1587]: Session 15 logged out. Waiting for processes to exit.
Jan 29 12:10:16.852130 systemd-logind[1587]: Removed session 15.
Jan 29 12:10:18.080535 sshd[4198]: Accepted publickey for core from 172.24.4.1 port 51888 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:10:18.082788 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:10:18.096330 systemd-logind[1587]: New session 16 of user core.
Jan 29 12:10:18.103720 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 12:10:19.079885 sshd[4203]: Connection closed by 172.24.4.1 port 51888
Jan 29 12:10:19.079751 sshd-session[4198]: pam_unix(sshd:session): session closed for user core
Jan 29 12:10:19.085673 systemd[1]: sshd@13-172.24.4.127:22-172.24.4.1:51888.service: Deactivated successfully.
Jan 29 12:10:19.091459 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 12:10:19.095675 systemd-logind[1587]: Session 16 logged out. Waiting for processes to exit.
Jan 29 12:10:19.104458 systemd[1]: Started sshd@14-172.24.4.127:22-172.24.4.1:51900.service - OpenSSH per-connection server daemon (172.24.4.1:51900).
Jan 29 12:10:19.106690 systemd-logind[1587]: Removed session 16.
Jan 29 12:10:20.570039 sshd[4233]: Accepted publickey for core from 172.24.4.1 port 51900 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:10:20.572572 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:10:20.586735 systemd-logind[1587]: New session 17 of user core.
Jan 29 12:10:20.596637 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 12:10:21.309909 sshd[4236]: Connection closed by 172.24.4.1 port 51900
Jan 29 12:10:21.310796 sshd-session[4233]: pam_unix(sshd:session): session closed for user core
Jan 29 12:10:21.318126 systemd[1]: sshd@14-172.24.4.127:22-172.24.4.1:51900.service: Deactivated successfully.
Jan 29 12:10:21.319020 systemd-logind[1587]: Session 17 logged out. Waiting for processes to exit.
Jan 29 12:10:21.325317 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 12:10:21.327884 systemd-logind[1587]: Removed session 17.
Jan 29 12:10:26.325611 systemd[1]: Started sshd@15-172.24.4.127:22-172.24.4.1:41204.service - OpenSSH per-connection server daemon (172.24.4.1:41204).
Jan 29 12:10:27.533206 sshd[4271]: Accepted publickey for core from 172.24.4.1 port 41204 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:10:27.535682 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:10:27.541193 systemd-logind[1587]: New session 18 of user core.
Jan 29 12:10:27.547445 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 12:10:28.310117 sshd[4274]: Connection closed by 172.24.4.1 port 41204
Jan 29 12:10:28.310361 sshd-session[4271]: pam_unix(sshd:session): session closed for user core
Jan 29 12:10:28.319453 systemd-logind[1587]: Session 18 logged out. Waiting for processes to exit.
Jan 29 12:10:28.319736 systemd[1]: sshd@15-172.24.4.127:22-172.24.4.1:41204.service: Deactivated successfully.
Jan 29 12:10:28.327781 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 12:10:28.330967 systemd-logind[1587]: Removed session 18.
Jan 29 12:10:33.329322 systemd[1]: Started sshd@16-172.24.4.127:22-172.24.4.1:41210.service - OpenSSH per-connection server daemon (172.24.4.1:41210).
Jan 29 12:10:34.608237 sshd[4307]: Accepted publickey for core from 172.24.4.1 port 41210 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:10:34.610760 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:10:34.619485 systemd-logind[1587]: New session 19 of user core.
Jan 29 12:10:34.628685 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 12:10:35.428189 sshd[4331]: Connection closed by 172.24.4.1 port 41210
Jan 29 12:10:35.429240 sshd-session[4307]: pam_unix(sshd:session): session closed for user core
Jan 29 12:10:35.437008 systemd[1]: sshd@16-172.24.4.127:22-172.24.4.1:41210.service: Deactivated successfully.
Jan 29 12:10:35.443125 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 12:10:35.443636 systemd-logind[1587]: Session 19 logged out. Waiting for processes to exit.
Jan 29 12:10:35.448137 systemd-logind[1587]: Removed session 19.
Jan 29 12:10:40.440470 systemd[1]: Started sshd@17-172.24.4.127:22-172.24.4.1:36102.service - OpenSSH per-connection server daemon (172.24.4.1:36102).
Jan 29 12:10:41.736138 sshd[4363]: Accepted publickey for core from 172.24.4.1 port 36102 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:10:41.738637 sshd-session[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:10:41.749050 systemd-logind[1587]: New session 20 of user core.
Jan 29 12:10:41.760469 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 12:10:42.473537 sshd[4366]: Connection closed by 172.24.4.1 port 36102
Jan 29 12:10:42.474983 sshd-session[4363]: pam_unix(sshd:session): session closed for user core
Jan 29 12:10:42.483737 systemd-logind[1587]: Session 20 logged out. Waiting for processes to exit.
Jan 29 12:10:42.484030 systemd[1]: sshd@17-172.24.4.127:22-172.24.4.1:36102.service: Deactivated successfully.
Jan 29 12:10:42.492966 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 12:10:42.496697 systemd-logind[1587]: Removed session 20.