Dec 13 13:41:39.073353 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024 Dec 13 13:41:39.073378 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:41:39.073391 kernel: BIOS-provided physical RAM map: Dec 13 13:41:39.073399 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 13:41:39.073406 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 13:41:39.073414 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 13:41:39.073422 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Dec 13 13:41:39.073430 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Dec 13 13:41:39.073438 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 13:41:39.073446 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 13:41:39.073456 kernel: NX (Execute Disable) protection: active Dec 13 13:41:39.073464 kernel: APIC: Static calls initialized Dec 13 13:41:39.073472 kernel: SMBIOS 2.8 present. Dec 13 13:41:39.073480 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Dec 13 13:41:39.073489 kernel: Hypervisor detected: KVM Dec 13 13:41:39.073499 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 13:41:39.073508 kernel: kvm-clock: using sched offset of 5255680135 cycles Dec 13 13:41:39.073517 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 13:41:39.073525 kernel: tsc: Detected 1996.249 MHz processor Dec 13 13:41:39.073534 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 13:41:39.073543 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 13:41:39.073551 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Dec 13 13:41:39.073560 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 13:41:39.073568 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 13:41:39.073579 kernel: ACPI: Early table checksum verification disabled Dec 13 13:41:39.073587 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Dec 13 13:41:39.073596 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:41:39.073604 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:41:39.073613 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:41:39.073621 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 13 13:41:39.073629 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:41:39.073638 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:41:39.073646 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Dec 13 13:41:39.073657 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Dec 13 13:41:39.073665 kernel: ACPI: Reserving FACS 
table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 13 13:41:39.073673 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Dec 13 13:41:39.073681 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Dec 13 13:41:39.073690 kernel: No NUMA configuration found Dec 13 13:41:39.073698 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Dec 13 13:41:39.073706 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Dec 13 13:41:39.073718 kernel: Zone ranges: Dec 13 13:41:39.073728 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 13:41:39.073736 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Dec 13 13:41:39.073745 kernel: Normal empty Dec 13 13:41:39.073754 kernel: Movable zone start for each node Dec 13 13:41:39.073762 kernel: Early memory node ranges Dec 13 13:41:39.073771 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 13:41:39.073781 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Dec 13 13:41:39.073790 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Dec 13 13:41:39.073799 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 13:41:39.073807 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 13:41:39.073816 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Dec 13 13:41:39.073825 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 13:41:39.073833 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 13:41:39.073842 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 13:41:39.073851 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 13:41:39.073860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 13:41:39.073870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 13:41:39.073879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 13:41:39.073888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 13:41:39.073897 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 13:41:39.073905 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 13:41:39.073914 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 13:41:39.073923 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Dec 13 13:41:39.073931 kernel: Booting paravirtualized kernel on KVM Dec 13 13:41:39.073940 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 13:41:39.073952 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 13:41:39.073961 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 13:41:39.073970 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 13:41:39.073979 kernel: pcpu-alloc: [0] 0 1 Dec 13 13:41:39.073987 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 13 13:41:39.073997 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:41:39.074007 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will 
be passed to user space. Dec 13 13:41:39.074015 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 13:41:39.074026 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 13:41:39.074035 kernel: Fallback order for Node 0: 0 Dec 13 13:41:39.074044 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Dec 13 13:41:39.074052 kernel: Policy zone: DMA32 Dec 13 13:41:39.074061 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 13:41:39.074070 kernel: Memory: 1969164K/2096620K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 127196K reserved, 0K cma-reserved) Dec 13 13:41:39.074079 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 13:41:39.074088 kernel: ftrace: allocating 37874 entries in 148 pages Dec 13 13:41:39.074098 kernel: ftrace: allocated 148 pages with 3 groups Dec 13 13:41:39.074107 kernel: Dynamic Preempt: voluntary Dec 13 13:41:39.074115 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 13:41:39.074125 kernel: rcu: RCU event tracing is enabled. Dec 13 13:41:39.074134 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 13:41:39.074143 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 13:41:39.074152 kernel: Rude variant of Tasks RCU enabled. Dec 13 13:41:39.074160 kernel: Tracing variant of Tasks RCU enabled. Dec 13 13:41:39.074169 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 13:41:39.074178 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 13:41:39.074189 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 13:41:39.074213 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 13:41:39.074222 kernel: Console: colour VGA+ 80x25 Dec 13 13:41:39.074231 kernel: printk: console [tty0] enabled Dec 13 13:41:39.074255 kernel: printk: console [ttyS0] enabled Dec 13 13:41:39.074264 kernel: ACPI: Core revision 20230628 Dec 13 13:41:39.074273 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 13:41:39.074281 kernel: x2apic enabled Dec 13 13:41:39.074290 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 13:41:39.074302 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 13:41:39.074311 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 13:41:39.074320 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Dec 13 13:41:39.074329 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 13:41:39.074337 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 13:41:39.074346 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 13:41:39.074355 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 13:41:39.074364 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 13:41:39.074373 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 13:41:39.074384 kernel: Speculative Store Bypass: Vulnerable Dec 13 13:41:39.074392 kernel: x86/fpu: x87 FPU will use FXSAVE Dec 13 13:41:39.074401 kernel: Freeing SMP alternatives memory: 32K Dec 13 13:41:39.074410 kernel: pid_max: default: 32768 minimum: 301 Dec 13 13:41:39.074419 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 13:41:39.074427 kernel: landlock: Up and running. Dec 13 13:41:39.074436 kernel: SELinux: Initializing. 
Dec 13 13:41:39.074445 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 13:41:39.074461 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 13:41:39.074470 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Dec 13 13:41:39.074480 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:41:39.074489 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:41:39.074500 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:41:39.074509 kernel: Performance Events: AMD PMU driver. Dec 13 13:41:39.074518 kernel: ... version: 0 Dec 13 13:41:39.074527 kernel: ... bit width: 48 Dec 13 13:41:39.074538 kernel: ... generic registers: 4 Dec 13 13:41:39.074547 kernel: ... value mask: 0000ffffffffffff Dec 13 13:41:39.074557 kernel: ... max period: 00007fffffffffff Dec 13 13:41:39.074566 kernel: ... fixed-purpose events: 0 Dec 13 13:41:39.074575 kernel: ... event mask: 000000000000000f Dec 13 13:41:39.074584 kernel: signal: max sigframe size: 1440 Dec 13 13:41:39.074594 kernel: rcu: Hierarchical SRCU implementation. Dec 13 13:41:39.074603 kernel: rcu: Max phase no-delay instances is 400. Dec 13 13:41:39.074612 kernel: smp: Bringing up secondary CPUs ... Dec 13 13:41:39.074621 kernel: smpboot: x86: Booting SMP configuration: Dec 13 13:41:39.074632 kernel: .... node #0, CPUs: #1 Dec 13 13:41:39.074641 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 13:41:39.074650 kernel: smpboot: Max logical packages: 2 Dec 13 13:41:39.074659 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Dec 13 13:41:39.074669 kernel: devtmpfs: initialized Dec 13 13:41:39.074678 kernel: x86/mm: Memory block size: 128MB Dec 13 13:41:39.074687 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 13:41:39.074696 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 13:41:39.074705 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 13:41:39.074716 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 13:41:39.074725 kernel: audit: initializing netlink subsys (disabled) Dec 13 13:41:39.074735 kernel: audit: type=2000 audit(1734097298.874:1): state=initialized audit_enabled=0 res=1 Dec 13 13:41:39.074744 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 13:41:39.074753 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 13:41:39.074762 kernel: cpuidle: using governor menu Dec 13 13:41:39.074771 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 13:41:39.074780 kernel: dca service started, version 1.12.1 Dec 13 13:41:39.074789 kernel: PCI: Using configuration type 1 for base access Dec 13 13:41:39.074801 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 13:41:39.074810 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 13:41:39.074819 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 13:41:39.074828 kernel: ACPI: Added _OSI(Module Device) Dec 13 13:41:39.074837 kernel: ACPI: Added _OSI(Processor Device) Dec 13 13:41:39.074846 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 13:41:39.074855 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 13:41:39.074864 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 13:41:39.074874 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 13:41:39.074885 kernel: ACPI: Interpreter enabled Dec 13 13:41:39.074894 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 13:41:39.074903 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 13:41:39.074912 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 13:41:39.074921 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 13:41:39.074930 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 13 13:41:39.074939 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 13:41:39.075076 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 13:41:39.075180 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 13:41:39.075303 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 13:41:39.075318 kernel: acpiphp: Slot [3] registered Dec 13 13:41:39.075328 kernel: acpiphp: Slot [4] registered Dec 13 13:41:39.075337 kernel: acpiphp: Slot [5] registered Dec 13 13:41:39.075346 kernel: acpiphp: Slot [6] registered Dec 13 13:41:39.075355 kernel: acpiphp: Slot [7] registered Dec 13 13:41:39.075364 kernel: acpiphp: Slot [8] registered Dec 13 13:41:39.075376 kernel: acpiphp: Slot [9] registered Dec 13 13:41:39.075385 kernel: acpiphp: Slot [10] registered Dec 13 13:41:39.075394 kernel: acpiphp: Slot [11] registered Dec 13 13:41:39.075403 kernel: acpiphp: Slot [12] registered Dec 13 13:41:39.075412 kernel: acpiphp: Slot [13] registered Dec 13 13:41:39.075421 kernel: acpiphp: Slot [14] registered Dec 13 13:41:39.075430 kernel: acpiphp: Slot [15] registered Dec 13 13:41:39.075439 kernel: acpiphp: Slot [16] registered Dec 13 13:41:39.075449 kernel: acpiphp: Slot [17] registered Dec 13 13:41:39.075458 kernel: acpiphp: Slot [18] registered Dec 13 13:41:39.075469 kernel: acpiphp: Slot [19] registered Dec 13 13:41:39.075478 kernel: acpiphp: Slot [20] registered Dec 13 13:41:39.075487 kernel: acpiphp: Slot [21] registered Dec 13 13:41:39.075495 kernel: acpiphp: Slot [22] registered Dec 13 13:41:39.075504 kernel: acpiphp: Slot [23] registered Dec 13 13:41:39.075513 kernel: acpiphp: Slot [24] registered Dec 13 13:41:39.075522 kernel: acpiphp: Slot [25] registered Dec 13 13:41:39.075531 kernel: acpiphp: Slot [26] registered Dec 13 13:41:39.075540 kernel: acpiphp: Slot [27] registered Dec 13 13:41:39.075551 kernel: acpiphp: Slot [28] registered Dec 13 13:41:39.075560 kernel: acpiphp: Slot [29] registered Dec 13 13:41:39.075568 kernel: acpiphp: Slot [30] registered Dec 13 13:41:39.075577 kernel: acpiphp: Slot [31] registered Dec 13 13:41:39.075587 kernel: PCI host bridge to bus 0000:00 Dec 13 13:41:39.075680 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 13:41:39.075763 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff 
window] Dec 13 13:41:39.075845 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 13:41:39.075929 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 13:41:39.076010 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Dec 13 13:41:39.076089 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 13:41:39.078268 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 13:41:39.078396 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 13:41:39.078506 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Dec 13 13:41:39.078616 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Dec 13 13:41:39.078715 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 13:41:39.078812 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 13:41:39.078910 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 13:41:39.079008 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 13:41:39.079115 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 13:41:39.079237 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Dec 13 13:41:39.079349 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Dec 13 13:41:39.079463 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Dec 13 13:41:39.079563 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Dec 13 13:41:39.079662 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Dec 13 13:41:39.079762 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Dec 13 13:41:39.079861 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Dec 13 13:41:39.079959 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 13:41:39.080075 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Dec 13 13:41:39.080175 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Dec 13 13:41:39.080307 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Dec 13 13:41:39.080409 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Dec 13 13:41:39.080523 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Dec 13 13:41:39.080631 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Dec 13 13:41:39.080732 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 13:41:39.080836 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Dec 13 13:41:39.080934 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Dec 13 13:41:39.081042 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Dec 13 13:41:39.081141 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Dec 13 13:41:39.081273 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Dec 13 13:41:39.081392 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 13:41:39.081492 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Dec 13 13:41:39.081593 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Dec 13 13:41:39.081608 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 13:41:39.081619 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 13:41:39.081629 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 13:41:39.081640 kernel: ACPI: PCI: Interrupt link LNKD 
configured for IRQ 11 Dec 13 13:41:39.081650 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 13:41:39.081660 kernel: iommu: Default domain type: Translated Dec 13 13:41:39.081670 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 13:41:39.081684 kernel: PCI: Using ACPI for IRQ routing Dec 13 13:41:39.081694 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 13:41:39.081704 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 13:41:39.081715 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Dec 13 13:41:39.081808 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Dec 13 13:41:39.081905 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Dec 13 13:41:39.082003 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 13:41:39.082017 kernel: vgaarb: loaded Dec 13 13:41:39.082028 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 13:41:39.082043 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 13:41:39.082053 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 13:41:39.082063 kernel: pnp: PnP ACPI init Dec 13 13:41:39.082163 kernel: pnp 00:03: [dma 2] Dec 13 13:41:39.082180 kernel: pnp: PnP ACPI: found 5 devices Dec 13 13:41:39.082191 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 13:41:39.084240 kernel: NET: Registered PF_INET protocol family Dec 13 13:41:39.084251 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 13:41:39.084266 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 13:41:39.084277 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 13:41:39.084288 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 13:41:39.084299 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 13:41:39.084310 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 13:41:39.084320 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 13:41:39.084331 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 13:41:39.084341 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 13:41:39.084352 kernel: NET: Registered PF_XDP protocol family Dec 13 13:41:39.084457 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 13:41:39.084564 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 13:41:39.084652 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 13:41:39.084736 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 13:41:39.084823 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Dec 13 13:41:39.084925 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Dec 13 13:41:39.085026 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 13:41:39.085042 kernel: PCI: CLS 0 bytes, default 64 Dec 13 13:41:39.085057 kernel: Initialise system trusted keyrings Dec 13 13:41:39.085068 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 13:41:39.085078 kernel: Key type asymmetric registered Dec 13 13:41:39.085088 kernel: Asymmetric key parser 'x509' registered Dec 13 13:41:39.085098 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 13:41:39.085109 kernel: io scheduler mq-deadline registered 
Dec 13 13:41:39.085119 kernel: io scheduler kyber registered Dec 13 13:41:39.085129 kernel: io scheduler bfq registered Dec 13 13:41:39.085140 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 13:41:39.085153 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Dec 13 13:41:39.085164 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 13:41:39.085174 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 13:41:39.085185 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 13:41:39.085214 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:41:39.085225 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 13:41:39.085235 kernel: random: crng init done Dec 13 13:41:39.085246 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 13:41:39.085256 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 13:41:39.085270 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 13:41:39.085374 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 13:41:39.085392 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 13:41:39.085482 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 13:41:39.085581 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T13:41:38 UTC (1734097298) Dec 13 13:41:39.085672 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Dec 13 13:41:39.085687 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 13:41:39.085698 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:41:39.085712 kernel: Segment Routing with IPv6 Dec 13 13:41:39.085723 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:41:39.085733 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:41:39.085743 kernel: Key type dns_resolver registered Dec 13 13:41:39.085753 kernel: IPI shorthand broadcast: enabled Dec 13 13:41:39.085764 kernel: sched_clock: Marking stable (1010008440, 138429608)->(1156622515, -8184467) Dec 13 13:41:39.085774 kernel: registered taskstats version 1 Dec 13 13:41:39.085784 kernel: Loading compiled-in X.509 certificates Dec 13 13:41:39.085795 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162' Dec 13 13:41:39.085807 kernel: Key type .fscrypt registered Dec 13 13:41:39.085818 kernel: Key type fscrypt-provisioning registered Dec 13 13:41:39.085828 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 13:41:39.085839 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:41:39.085849 kernel: ima: No architecture policies found Dec 13 13:41:39.085859 kernel: clk: Disabling unused clocks Dec 13 13:41:39.085870 kernel: Freeing unused kernel image (initmem) memory: 43328K Dec 13 13:41:39.085880 kernel: Write protecting the kernel read-only data: 38912k Dec 13 13:41:39.085893 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Dec 13 13:41:39.085904 kernel: Run /init as init process Dec 13 13:41:39.085914 kernel: with arguments: Dec 13 13:41:39.085924 kernel: /init Dec 13 13:41:39.085934 kernel: with environment: Dec 13 13:41:39.085944 kernel: HOME=/ Dec 13 13:41:39.085954 kernel: TERM=linux Dec 13 13:41:39.085964 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:41:39.085978 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:41:39.085993 systemd[1]: Detected virtualization kvm. Dec 13 13:41:39.086005 systemd[1]: Detected architecture x86-64. Dec 13 13:41:39.086016 systemd[1]: Running in initrd. Dec 13 13:41:39.086027 systemd[1]: No hostname configured, using default hostname. Dec 13 13:41:39.086038 systemd[1]: Hostname set to . Dec 13 13:41:39.086050 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:41:39.086061 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:41:39.086074 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:41:39.086086 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:41:39.086098 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:41:39.086109 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:41:39.086120 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:41:39.086132 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:41:39.086145 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:41:39.086158 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:41:39.086170 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:41:39.086181 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:41:39.086209 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:41:39.086237 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:41:39.086251 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:41:39.086266 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:41:39.086277 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:41:39.086289 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:41:39.086300 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:41:39.086312 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 13:41:39.086323 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:41:39.086335 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:41:39.086347 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:41:39.086361 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:41:39.086372 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:41:39.086384 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:41:39.086396 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:41:39.086407 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:41:39.086419 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:41:39.086430 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:41:39.086465 systemd-journald[185]: Collecting audit messages is disabled. Dec 13 13:41:39.086497 systemd-journald[185]: Journal started Dec 13 13:41:39.086527 systemd-journald[185]: Runtime Journal (/run/log/journal/ed4e303f863144a987fa7cc5ce677436) is 4.9M, max 39.3M, 34.4M free. Dec 13 13:41:39.096220 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:41:39.105255 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:41:39.107507 systemd-modules-load[186]: Inserted module 'overlay' Dec 13 13:41:39.111758 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 13:41:39.115115 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:41:39.117489 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:41:39.130693 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:41:39.177355 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:41:39.177383 kernel: Bridge firewalling registered Dec 13 13:41:39.143583 systemd-modules-load[186]: Inserted module 'br_netfilter' Dec 13 13:41:39.186350 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:41:39.187163 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:41:39.189366 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:41:39.191962 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:41:39.198358 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:41:39.205335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:41:39.209515 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:41:39.210982 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:41:39.213511 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:41:39.223824 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:41:39.225230 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:41:39.228711 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Dec 13 13:41:39.232761 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:41:39.255315 dracut-cmdline[221]: dracut-dracut-053 Dec 13 13:41:39.260901 systemd-resolved[214]: Positive Trust Anchors: Dec 13 13:41:39.262424 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:41:39.263150 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:41:39.273493 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:41:39.272089 systemd-resolved[214]: Defaulting to hostname 'linux'. Dec 13 13:41:39.273133 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:41:39.273968 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:41:39.338232 kernel: SCSI subsystem initialized Dec 13 13:41:39.348266 kernel: Loading iSCSI transport class v2.0-870. Dec 13 13:41:39.360322 kernel: iscsi: registered transport (tcp) Dec 13 13:41:39.382557 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:41:39.382631 kernel: QLogic iSCSI HBA Driver Dec 13 13:41:39.442867 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:41:39.450339 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:41:39.513474 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 13:41:39.513584 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:41:39.516574 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:41:39.588320 kernel: raid6: sse2x4 gen() 5207 MB/s Dec 13 13:41:39.606278 kernel: raid6: sse2x2 gen() 6312 MB/s Dec 13 13:41:39.623391 kernel: raid6: sse2x1 gen() 10154 MB/s Dec 13 13:41:39.623462 kernel: raid6: using algorithm sse2x1 gen() 10154 MB/s Dec 13 13:41:39.641591 kernel: raid6: .... xor() 7360 MB/s, rmw enabled Dec 13 13:41:39.641672 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 13:41:39.664520 kernel: xor: measuring software checksum speed Dec 13 13:41:39.664593 kernel: prefetch64-sse : 18502 MB/sec Dec 13 13:41:39.665527 kernel: generic_sse : 16815 MB/sec Dec 13 13:41:39.665602 kernel: xor: using function: prefetch64-sse (18502 MB/sec) Dec 13 13:41:39.838255 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:41:39.856086 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:41:39.865536 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 13 13:41:39.903987 systemd-udevd[404]: Using default interface naming scheme 'v255'. Dec 13 13:41:39.914932 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:41:39.925438 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:41:39.953684 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Dec 13 13:41:39.994844 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:41:40.000395 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:41:40.047909 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:41:40.058514 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:41:40.087190 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:41:40.092516 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:41:40.096172 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:41:40.099853 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:41:40.109557 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:41:40.129254 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:41:40.155224 kernel: libata version 3.00 loaded. Dec 13 13:41:40.159372 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 13:41:40.175566 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Dec 13 13:41:40.184938 kernel: scsi host0: ata_piix Dec 13 13:41:40.185095 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Dec 13 13:41:40.185218 kernel: scsi host1: ata_piix Dec 13 13:41:40.185333 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 13:41:40.185348 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Dec 13 13:41:40.185360 kernel: GPT:17805311 != 41943039 Dec 13 13:41:40.185371 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Dec 13 13:41:40.185383 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 13:41:40.185394 kernel: GPT:17805311 != 41943039 Dec 13 13:41:40.185409 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 13:41:40.185421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:41:40.170154 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:41:40.170299 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:41:40.170889 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:41:40.171514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:41:40.171657 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:41:40.172308 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:41:40.187560 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:41:40.241603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:41:40.248417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:41:40.277294 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 13:41:40.367270 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (453) Dec 13 13:41:40.385263 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (448) Dec 13 13:41:40.406121 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 13:41:40.412584 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 13:41:40.418968 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 13:41:40.424217 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 13:41:40.425722 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 13:41:40.432393 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 13:41:40.443980 disk-uuid[507]: Primary Header is updated. Dec 13 13:41:40.443980 disk-uuid[507]: Secondary Entries is updated. Dec 13 13:41:40.443980 disk-uuid[507]: Secondary Header is updated. Dec 13 13:41:40.453716 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:41:41.483252 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:41:41.485356 disk-uuid[508]: The operation has completed successfully. Dec 13 13:41:41.560977 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:41:41.561122 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:41:41.586382 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:41:41.606083 sh[519]: Success Dec 13 13:41:41.629235 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Dec 13 13:41:41.754087 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:41:41.774381 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:41:41.780371 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 13:41:41.818571 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52 Dec 13 13:41:41.818662 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:41:41.821173 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:41:41.823558 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:41:41.823619 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:41:41.850075 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:41:41.852035 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:41:41.857446 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:41:41.866479 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 13 13:41:41.959368 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:41:41.959500 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:41:41.964256 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:41:41.974236 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:41:41.986803 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:41:41.989449 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:41:42.000105 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:41:42.009461 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:41:42.019960 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:41:42.030650 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:41:42.065337 systemd-networkd[701]: lo: Link UP Dec 13 13:41:42.065348 systemd-networkd[701]: lo: Gained carrier Dec 13 13:41:42.066574 systemd-networkd[701]: Enumeration completed Dec 13 13:41:42.067801 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:41:42.068026 systemd-networkd[701]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:41:42.068030 systemd-networkd[701]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:41:42.070523 systemd-networkd[701]: eth0: Link UP Dec 13 13:41:42.070528 systemd-networkd[701]: eth0: Gained carrier Dec 13 13:41:42.070538 systemd-networkd[701]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:41:42.071164 systemd[1]: Reached target network.target - Network. Dec 13 13:41:42.083408 systemd-networkd[701]: eth0: DHCPv4 address 172.24.4.155/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 13:41:42.144605 ignition[688]: Ignition 2.20.0 Dec 13 13:41:42.144618 ignition[688]: Stage: fetch-offline Dec 13 13:41:42.144658 ignition[688]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:41:42.145993 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:41:42.144669 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:41:42.144777 ignition[688]: parsed url from cmdline: "" Dec 13 13:41:42.144781 ignition[688]: no config URL provided Dec 13 13:41:42.144787 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:41:42.144797 ignition[688]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:41:42.144803 ignition[688]: failed to fetch config: resource requires networking Dec 13 13:41:42.145012 ignition[688]: Ignition finished successfully Dec 13 13:41:42.157563 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 13:41:42.170571 ignition[712]: Ignition 2.20.0 Dec 13 13:41:42.170584 ignition[712]: Stage: fetch Dec 13 13:41:42.170763 ignition[712]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:41:42.170775 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:41:42.170862 ignition[712]: parsed url from cmdline: "" Dec 13 13:41:42.170866 ignition[712]: no config URL provided Dec 13 13:41:42.170871 ignition[712]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:41:42.170880 ignition[712]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:41:42.170961 ignition[712]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 13:41:42.171082 ignition[712]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 13:41:42.171116 ignition[712]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 13:41:42.359259 ignition[712]: GET result: OK Dec 13 13:41:42.359392 ignition[712]: parsing config with SHA512: d5529f4290c291142d6457d7c1d241acfb8df56ba76c9121ad85e1f9ee46540f3f82e0677ae1b67668da90bd9d9e8b1d0e358f3ea50dc08fadfad9a8760e1e3d Dec 13 13:41:42.369403 unknown[712]: fetched base config from "system" Dec 13 13:41:42.370845 ignition[712]: fetch: fetch complete Dec 13 13:41:42.369431 unknown[712]: fetched base config from "system" Dec 13 13:41:42.370870 ignition[712]: fetch: fetch passed Dec 13 13:41:42.369460 unknown[712]: fetched user config from "openstack" Dec 13 13:41:42.373438 ignition[712]: Ignition finished successfully Dec 13 13:41:42.376187 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 13:41:42.393428 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 13:41:42.418728 ignition[718]: Ignition 2.20.0 Dec 13 13:41:42.418754 ignition[718]: Stage: kargs Dec 13 13:41:42.419130 ignition[718]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:41:42.419155 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:41:42.423061 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:41:42.421303 ignition[718]: kargs: kargs passed Dec 13 13:41:42.421393 ignition[718]: Ignition finished successfully Dec 13 13:41:42.433506 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 13:41:42.470278 ignition[724]: Ignition 2.20.0 Dec 13 13:41:42.471384 ignition[724]: Stage: disks Dec 13 13:41:42.471802 ignition[724]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:41:42.471849 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:41:42.474063 ignition[724]: disks: disks passed Dec 13 13:41:42.475818 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:41:42.474152 ignition[724]: Ignition finished successfully Dec 13 13:41:42.478622 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:41:42.480339 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:41:42.482582 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:41:42.485068 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:41:42.487630 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:41:42.499464 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 13 13:41:42.532008 systemd-fsck[733]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 13:41:42.542946 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:41:42.551396 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:41:42.707726 kernel: EXT4-fs (vda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none. Dec 13 13:41:42.708157 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:41:42.709126 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:41:42.716303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:41:42.726696 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:41:42.729156 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 13:41:42.731998 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 13 13:41:42.733553 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:41:42.734457 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:41:42.736378 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:41:42.745388 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 13:41:42.748839 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (741) Dec 13 13:41:42.754550 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:41:42.754599 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:41:42.754614 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:41:42.766358 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:41:42.772983 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:41:42.976724 initrd-setup-root[769]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:41:43.061035 initrd-setup-root[776]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:41:43.078165 initrd-setup-root[783]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:41:43.094595 initrd-setup-root[790]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:41:43.651105 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:41:43.661365 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:41:43.670693 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 13:41:43.689137 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 13:41:43.693006 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:41:43.731902 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 13 13:41:43.739715 ignition[859]: INFO : Ignition 2.20.0 Dec 13 13:41:43.739715 ignition[859]: INFO : Stage: mount Dec 13 13:41:43.743565 ignition[859]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:41:43.743565 ignition[859]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:41:43.743565 ignition[859]: INFO : mount: mount passed Dec 13 13:41:43.743565 ignition[859]: INFO : Ignition finished successfully Dec 13 13:41:43.742838 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:41:43.792387 systemd-networkd[701]: eth0: Gained IPv6LL Dec 13 13:41:50.594701 coreos-metadata[743]: Dec 13 13:41:50.594 WARN failed to locate config-drive, using the metadata service API instead Dec 13 13:41:50.637595 coreos-metadata[743]: Dec 13 13:41:50.637 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 13:41:50.654430 coreos-metadata[743]: Dec 13 13:41:50.654 INFO Fetch successful Dec 13 13:41:50.656635 coreos-metadata[743]: Dec 13 13:41:50.655 INFO wrote hostname ci-4186-0-0-c-ef2c5deb25.novalocal to /sysroot/etc/hostname Dec 13 13:41:50.658893 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 13:41:50.659142 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 13 13:41:50.672368 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:41:50.714572 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:41:50.732268 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (875) Dec 13 13:41:50.739225 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:41:50.739293 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:41:50.743191 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:41:50.754283 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:41:50.759864 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:41:50.805050 ignition[893]: INFO : Ignition 2.20.0 Dec 13 13:41:50.805050 ignition[893]: INFO : Stage: files Dec 13 13:41:50.807091 ignition[893]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:41:50.807091 ignition[893]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:41:50.810014 ignition[893]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:41:50.811620 ignition[893]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:41:50.811620 ignition[893]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:41:50.817122 ignition[893]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:41:50.818734 ignition[893]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:41:50.820562 unknown[893]: wrote ssh authorized keys file for user: core Dec 13 13:41:50.822016 ignition[893]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:41:50.825232 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:41:50.827051 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 13:41:50.889287 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 13:41:51.226852 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:41:51.226852 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:41:51.226852 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:41:51.226852 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:41:51.233446 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:41:51.233446 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:41:51.233446 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:41:51.233446 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:41:51.233446 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:41:51.233446 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:41:51.233446 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:41:51.233446 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:41:51.233446 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:41:51.233446 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:41:51.233446 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 13:41:51.752843 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 13:41:53.393642 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:41:53.393642 ignition[893]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 13:41:53.397502 ignition[893]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:41:53.398789 ignition[893]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:41:53.398789 ignition[893]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 13:41:53.398789 ignition[893]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 13:41:53.398789 ignition[893]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 13:41:53.406601 ignition[893]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:41:53.406601 ignition[893]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:41:53.406601 ignition[893]: INFO : files: files passed Dec 13 13:41:53.406601 ignition[893]: INFO : Ignition finished successfully Dec 13 13:41:53.402844 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:41:53.413738 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:41:53.416338 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:41:53.443335 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:41:53.444170 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 13:41:53.450806 initrd-setup-root-after-ignition[925]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:41:53.452615 initrd-setup-root-after-ignition[921]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:41:53.452615 initrd-setup-root-after-ignition[921]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:41:53.455911 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:41:53.457775 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:41:53.462554 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:41:53.513877 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:41:53.514099 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:41:53.516504 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Dec 13 13:41:53.518356 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:41:53.520588 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:41:53.528524 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:41:53.547490 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:41:53.562938 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:41:53.584653 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:41:53.586484 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:41:53.588775 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:41:53.590659 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:41:53.590856 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:41:53.593023 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:41:53.593991 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:41:53.595998 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:41:53.597655 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:41:53.599237 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:41:53.601625 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:41:53.602268 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:41:53.602938 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:41:53.603778 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:41:53.605764 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:41:53.607512 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:41:53.607637 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:41:53.610535 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:41:53.611526 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:41:53.613431 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:41:53.613956 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:41:53.615158 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:41:53.615310 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:41:53.617495 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:41:53.617637 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:41:53.618634 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:41:53.618794 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:41:53.630307 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:41:53.635375 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:41:53.638580 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:41:53.638747 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:41:53.640914 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 13 13:41:53.641045 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:41:53.649473 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:41:53.649590 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:41:53.653521 ignition[945]: INFO : Ignition 2.20.0 Dec 13 13:41:53.653521 ignition[945]: INFO : Stage: umount Dec 13 13:41:53.655952 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:41:53.655952 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:41:53.655952 ignition[945]: INFO : umount: umount passed Dec 13 13:41:53.658655 ignition[945]: INFO : Ignition finished successfully Dec 13 13:41:53.659423 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:41:53.659526 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:41:53.660426 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:41:53.660533 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:41:53.661142 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:41:53.661190 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:41:53.662245 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 13:41:53.662290 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 13:41:53.663303 systemd[1]: Stopped target network.target - Network. Dec 13 13:41:53.664299 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:41:53.664347 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:41:53.665407 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:41:53.666367 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:41:53.668287 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:41:53.669069 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:41:53.670193 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 13:41:53.671392 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:41:53.671432 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:41:53.672357 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:41:53.672394 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:41:53.673538 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:41:53.673584 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:41:53.674828 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:41:53.674874 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:41:53.675940 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:41:53.677241 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:41:53.680270 systemd-networkd[701]: eth0: DHCPv6 lease lost Dec 13 13:41:53.682310 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:41:53.683274 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:41:53.684720 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:41:53.684764 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 13:41:53.691348 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:41:53.691942 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:41:53.692001 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:41:53.693878 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:41:53.698436 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:41:53.698543 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:41:53.704529 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:41:53.704620 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:41:53.705628 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:41:53.705673 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:41:53.706319 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:41:53.706364 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:41:53.708526 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:41:53.708676 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:41:53.711678 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:41:53.711765 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:41:53.717153 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:41:53.717232 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:41:53.718476 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:41:53.718509 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:41:53.719454 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:41:53.719497 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:41:53.720916 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:41:53.720960 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:41:53.722066 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:41:53.722108 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:41:53.728448 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:41:53.729077 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:41:53.729146 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:41:53.729751 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 13:41:53.729796 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:41:53.732476 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:41:53.732520 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:41:53.735505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:41:53.735552 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:41:53.738363 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Dec 13 13:41:53.738446 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:41:53.807920 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:41:53.884132 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:41:53.884311 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:41:53.886897 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:41:53.887875 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:41:53.887934 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:41:53.902399 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:41:53.998581 systemd[1]: Switching root. Dec 13 13:41:54.099128 systemd-journald[185]: Journal stopped Dec 13 13:41:55.991526 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Dec 13 13:41:55.991610 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 13:41:55.991632 kernel: SELinux: policy capability open_perms=1 Dec 13 13:41:55.991648 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 13:41:55.991660 kernel: SELinux: policy capability always_check_network=0 Dec 13 13:41:55.991678 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 13:41:55.991690 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 13:41:55.991701 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 13:41:55.991716 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 13:41:55.991727 kernel: audit: type=1403 audit(1734097314.931:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 13:41:55.991746 systemd[1]: Successfully loaded SELinux policy in 73.404ms. Dec 13 13:41:55.991772 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.192ms. Dec 13 13:41:55.991787 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:41:55.991800 systemd[1]: Detected virtualization kvm. Dec 13 13:41:55.991813 systemd[1]: Detected architecture x86-64. Dec 13 13:41:55.991831 systemd[1]: Detected first boot. Dec 13 13:41:55.991844 systemd[1]: Hostname set to . Dec 13 13:41:55.991857 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:41:55.991870 zram_generator::config[987]: No configuration found. Dec 13 13:41:55.991884 systemd[1]: Populated /etc with preset unit settings. Dec 13 13:41:55.991897 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 13:41:55.991910 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 13:41:55.991922 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 13:41:55.991938 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 13:41:55.991953 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 13:41:55.991966 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 13:41:55.991979 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 13:41:55.991992 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
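The "Initializing machine ID from VM UUID" message above indicates systemd seeded /etc/machine-id from the hypervisor-provided DMI product UUID on this first boot rather than generating a random ID. A quick way to compare the two on a running KVM guest (a sketch using standard sysfs and systemd paths; reading the DMI UUID usually requires root):

  # UUID exposed by the hypervisor via DMI
  cat /sys/class/dmi/id/product_uuid
  # machine ID systemd recorded on first boot
  cat /etc/machine-id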
Dec 13 13:41:55.992005 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 13:41:55.992018 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 13:41:55.992030 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 13:41:55.992043 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:41:55.992058 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:41:55.992072 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 13:41:55.992085 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 13:41:55.992098 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 13:41:55.992111 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:41:55.992123 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 13:41:55.992136 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:41:55.992149 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 13:41:55.992164 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 13:41:55.992177 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 13:41:55.992190 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 13:41:55.992225 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:41:55.992239 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:41:55.992254 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:41:55.992268 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:41:55.992284 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 13:41:55.992297 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 13:41:55.992310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:41:55.992323 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:41:55.992335 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:41:55.992348 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 13:41:55.992361 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 13:41:55.992373 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 13:41:55.992386 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 13:41:55.992401 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:41:55.992414 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 13:41:55.992427 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 13:41:55.992465 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 13:41:55.992479 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
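boot.automount and proc-sys-fs-binfmt_misc.automount, set up above, defer the actual mounts until something first accesses those paths. The general shape of such an automount/mount unit pair is sketched below with only the essential fields; the real Flatcar units carry more dependencies, and the What= value here is an assumption that depends on the image's partition labels.

  # boot.automount (sketch)
  [Automount]
  Where=/boot

  # boot.mount (sketch; What= is illustrative)
  [Mount]
  Where=/boot
  What=/dev/disk/by-label/EFI-SYSTEM
  Type=vfat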
Dec 13 13:41:55.992494 systemd[1]: Reached target machines.target - Containers. Dec 13 13:41:55.992508 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 13:41:55.992521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:41:55.992539 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:41:55.992553 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 13:41:55.992568 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:41:55.992582 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:41:55.992595 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:41:55.992609 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 13:41:55.992622 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:41:55.992636 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 13:41:55.992649 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 13:41:55.992666 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 13:41:55.992680 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 13:41:55.992694 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 13:41:55.992707 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:41:55.992721 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:41:55.992735 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 13:41:55.992748 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 13:41:55.992761 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:41:55.992775 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 13:41:55.992791 systemd[1]: Stopped verity-setup.service. Dec 13 13:41:55.992805 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:41:55.992819 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 13:41:55.992833 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 13:41:55.992846 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 13:41:55.992861 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 13:41:55.992912 systemd-journald[1080]: Collecting audit messages is disabled. Dec 13 13:41:55.992945 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 13:41:55.992958 kernel: fuse: init (API version 7.39) Dec 13 13:41:55.992972 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 13:41:55.992987 systemd-journald[1080]: Journal started Dec 13 13:41:55.993015 systemd-journald[1080]: Runtime Journal (/run/log/journal/ed4e303f863144a987fa7cc5ce677436) is 4.9M, max 39.3M, 34.4M free. Dec 13 13:41:55.686446 systemd[1]: Queued start job for default target multi-user.target. Dec 13 13:41:55.995369 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 13 13:41:55.705333 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 13:41:55.706095 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 13:41:55.997783 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:41:56.001243 kernel: loop: module loaded Dec 13 13:41:56.002554 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 13:41:56.003189 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 13:41:56.004480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:41:56.005275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:41:56.005979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:41:56.006107 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:41:56.007790 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:41:56.008260 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:41:56.009532 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:41:56.011477 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 13:41:56.019877 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 13:41:56.020298 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 13:41:56.022647 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 13:41:56.031002 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 13:41:56.041996 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 13:41:56.052287 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 13:41:56.052894 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 13:41:56.052935 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:41:56.057577 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 13:41:56.061912 kernel: ACPI: bus type drm_connector registered Dec 13 13:41:56.061995 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 13:41:56.064614 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 13:41:56.065555 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:41:56.073437 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 13:41:56.075343 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 13:41:56.075939 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:41:56.078362 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 13:41:56.078909 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:41:56.082400 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
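The modprobe@configfs/dm_mod/efi_pstore/loop/fuse jobs above are instances of systemd's modprobe@.service template, which simply loads the kernel module named by the instance suffix. To inspect the template and confirm the modules landed, a sketch using module names taken from the log:

  systemctl cat modprobe@dm_mod.service    # show the template the instances were generated from
  lsmod | grep -E '^(dm_mod|loop|fuse)'    # verify the modules are loaded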
Dec 13 13:41:56.085354 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 13:41:56.088349 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:41:56.092832 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 13:41:56.093719 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:41:56.094283 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:41:56.095500 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 13:41:56.096441 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 13:41:56.098373 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 13:41:56.138660 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 13:41:56.139922 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 13:41:56.147600 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 13:41:56.162227 kernel: loop0: detected capacity change from 0 to 138184 Dec 13 13:41:56.166875 systemd-journald[1080]: Time spent on flushing to /var/log/journal/ed4e303f863144a987fa7cc5ce677436 is 50.301ms for 941 entries. Dec 13 13:41:56.166875 systemd-journald[1080]: System Journal (/var/log/journal/ed4e303f863144a987fa7cc5ce677436) is 8.0M, max 584.8M, 576.8M free. Dec 13 13:41:56.230438 systemd-journald[1080]: Received client request to flush runtime journal. Dec 13 13:41:56.230489 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 13:41:56.171680 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:41:56.186404 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 13:41:56.230863 udevadm[1130]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 13:41:56.233758 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 13:41:56.253073 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:41:56.260006 systemd-tmpfiles[1119]: ACLs are not supported, ignoring. Dec 13 13:41:56.260026 systemd-tmpfiles[1119]: ACLs are not supported, ignoring. Dec 13 13:41:56.266059 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:41:56.272361 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 13:41:56.301228 kernel: loop1: detected capacity change from 0 to 141000 Dec 13 13:41:56.577093 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 13:41:56.580192 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 13:41:56.826558 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 13:41:56.833432 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:41:56.860746 kernel: loop2: detected capacity change from 0 to 210664 Dec 13 13:41:56.899190 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Dec 13 13:41:56.899246 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. 
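The journal caps reported above (runtime journal max 39.3M, system journal max 584.8M) are derived automatically from the size of the backing filesystems. They can be pinned explicitly with a journald drop-in; a minimal sketch with illustrative values, not taken from this host:

  # /etc/systemd/journald.conf.d/size.conf (sketch)
  [Journal]
  SystemMaxUse=200M
  RuntimeMaxUse=40M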
Dec 13 13:41:56.905789 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:41:56.937377 kernel: loop3: detected capacity change from 0 to 8 Dec 13 13:41:56.959246 kernel: loop4: detected capacity change from 0 to 138184 Dec 13 13:41:57.063218 kernel: loop5: detected capacity change from 0 to 141000 Dec 13 13:41:57.112245 kernel: loop6: detected capacity change from 0 to 210664 Dec 13 13:41:57.159224 kernel: loop7: detected capacity change from 0 to 8 Dec 13 13:41:57.159522 (sd-merge)[1149]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Dec 13 13:41:57.160818 (sd-merge)[1149]: Merged extensions into '/usr'. Dec 13 13:41:57.168094 systemd[1]: Reloading requested from client PID 1118 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 13:41:57.168362 systemd[1]: Reloading... Dec 13 13:41:57.266309 zram_generator::config[1174]: No configuration found. Dec 13 13:41:57.483172 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:41:57.542142 systemd[1]: Reloading finished in 373 ms. Dec 13 13:41:57.569528 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 13:41:57.570614 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 13:41:57.578327 systemd[1]: Starting ensure-sysext.service... Dec 13 13:41:57.581337 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:41:57.596352 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:41:57.604917 systemd[1]: Reloading requested from client PID 1231 ('systemctl') (unit ensure-sysext.service)... Dec 13 13:41:57.604936 systemd[1]: Reloading... Dec 13 13:41:57.623616 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 13:41:57.625715 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 13:41:57.626583 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 13:41:57.626871 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Dec 13 13:41:57.626929 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Dec 13 13:41:57.634970 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:41:57.635278 systemd-tmpfiles[1232]: Skipping /boot Dec 13 13:41:57.644954 systemd-udevd[1233]: Using default interface naming scheme 'v255'. Dec 13 13:41:57.653145 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:41:57.653327 systemd-tmpfiles[1232]: Skipping /boot Dec 13 13:41:57.669759 ldconfig[1113]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 13:41:57.695262 zram_generator::config[1258]: No configuration found. 
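The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-openstack extension images onto /usr, which is why the service reload that follows can pick up unit files shipped in those images. The kubernetes image is the raw file Ignition wrote under /opt with a symlink in /etc/extensions. After boot the merge state can be inspected with (sketch):

  systemd-sysext status     # which hierarchies currently have extensions merged
  systemd-sysext list       # extension images found in /etc/extensions, /run/extensions, /var/lib/extensions
  systemd-sysext refresh    # re-merge after adding or removing an image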
Dec 13 13:41:57.779232 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1290) Dec 13 13:41:57.805226 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1277) Dec 13 13:41:57.816218 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1277) Dec 13 13:41:57.867231 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 13:41:57.873233 kernel: ACPI: button: Power Button [PWRF] Dec 13 13:41:57.886220 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 13:41:57.914115 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:41:57.936757 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 13:41:57.961125 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 13:41:57.980875 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 13 13:41:57.980953 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 13 13:41:57.984589 kernel: Console: switching to colour dummy device 80x25 Dec 13 13:41:57.985706 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 13:41:57.985741 kernel: [drm] features: -context_init Dec 13 13:41:57.987075 kernel: [drm] number of scanouts: 1 Dec 13 13:41:57.987113 kernel: [drm] number of cap sets: 0 Dec 13 13:41:57.991243 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Dec 13 13:41:57.998356 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 13:41:57.998429 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 13:41:58.000076 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 13:41:58.000828 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 13:41:58.001231 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 13:41:58.003471 systemd[1]: Reloading finished in 398 ms. Dec 13 13:41:58.020057 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:41:58.022773 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:41:58.026525 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:41:58.049601 systemd[1]: Finished ensure-sysext.service. Dec 13 13:41:58.066513 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:41:58.077320 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:41:58.082342 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 13:41:58.082550 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:41:58.085353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:41:58.089456 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:41:58.092028 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:41:58.094316 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 13 13:41:58.094505 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:41:58.098377 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 13:41:58.104732 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 13:41:58.112470 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:41:58.118443 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:41:58.126244 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 13:41:58.128776 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 13:41:58.134405 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:41:58.135702 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:41:58.136470 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 13:41:58.136783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:41:58.136903 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:41:58.137168 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:41:58.139596 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:41:58.142535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:41:58.142670 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:41:58.147357 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:41:58.147512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:41:58.148283 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 13:41:58.171514 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 13:41:58.174299 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:41:58.174535 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:41:58.187717 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 13:41:58.191691 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 13:41:58.208660 lvm[1381]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:41:58.236242 augenrules[1390]: No rules Dec 13 13:41:58.237306 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:41:58.237560 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:41:58.251424 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 13:41:58.254610 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:41:58.259084 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:41:58.269418 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
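augenrules reported "No rules" above, so auditing stays at the kernel defaults. Rules would normally be dropped into /etc/audit/rules.d/ and compiled by augenrules into the loaded rule set; a one-line illustrative example (not present on this host):

  # /etc/audit/rules.d/10-passwd.rules (illustrative)
  -w /etc/passwd -p wa -k passwd_changes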
Dec 13 13:41:58.273710 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 13:41:58.294342 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 13:41:58.313515 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:41:58.324559 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 13:41:58.348553 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 13:41:58.351724 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 13:41:58.362264 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:41:58.371447 systemd-networkd[1357]: lo: Link UP Dec 13 13:41:58.371454 systemd-networkd[1357]: lo: Gained carrier Dec 13 13:41:58.373857 systemd-networkd[1357]: Enumeration completed Dec 13 13:41:58.375324 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:41:58.379499 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:41:58.379574 systemd-networkd[1357]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:41:58.381453 systemd-networkd[1357]: eth0: Link UP Dec 13 13:41:58.381656 systemd-networkd[1357]: eth0: Gained carrier Dec 13 13:41:58.381739 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:41:58.385881 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:41:58.388189 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:41:58.395258 systemd-networkd[1357]: eth0: DHCPv4 address 172.24.4.155/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 13:41:58.397786 systemd-resolved[1361]: Positive Trust Anchors: Dec 13 13:41:58.397799 systemd-resolved[1361]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:41:58.397841 systemd-resolved[1361]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:41:58.405863 systemd-resolved[1361]: Using system hostname 'ci-4186-0-0-c-ef2c5deb25.novalocal'. Dec 13 13:41:58.406288 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 13:41:58.409078 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:41:58.409552 systemd[1]: Reached target network.target - Network. Dec 13 13:41:58.409947 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:41:58.411785 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:41:58.413397 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
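eth0 was matched by the stock zz-default.network and obtained 172.24.4.155/24 over DHCP from 172.24.4.1. A trimmed-down .network unit of the same kind is sketched below; the shipped default matches interfaces more broadly, so treat this only as an illustration of the mechanism:

  # /etc/systemd/network/10-eth0.network (sketch)
  [Match]
  Name=eth0

  [Network]
  DHCP=yes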
Dec 13 13:41:58.415176 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:41:58.416618 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 13:41:58.417992 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:41:58.418098 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:41:58.419432 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:41:58.420910 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:41:58.422309 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:41:58.423646 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:41:58.426994 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:41:58.431514 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:41:58.437655 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:41:58.438889 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:41:58.441392 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:41:58.441837 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:41:58.444129 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:41:58.444169 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:41:58.448279 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:41:58.451721 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 13:41:59.738247 systemd-resolved[1361]: Clock change detected. Flushing caches. Dec 13 13:41:59.738410 systemd-timesyncd[1363]: Contacted time server 37.59.63.125:123 (0.flatcar.pool.ntp.org). Dec 13 13:41:59.738619 systemd-timesyncd[1363]: Initial clock synchronization to Fri 2024-12-13 13:41:59.738197 UTC. Dec 13 13:41:59.740756 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:41:59.745225 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:41:59.753792 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:41:59.755100 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:41:59.759721 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:41:59.763954 jq[1421]: false Dec 13 13:41:59.769615 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 13:41:59.776341 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:41:59.784697 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:41:59.797659 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:41:59.798698 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:41:59.799220 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
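prepare-helm.service, starting above, is the unit Ignition installed and enabled earlier; its description says it unpacks the helm tarball written to /opt into /opt/bin. The actual unit body is not recorded in the log; a plausible oneshot sketch, with paths taken from the log and the tar invocation being an assumption:

  # prepare-helm.service (sketch; not the actual unit body)
  [Unit]
  Description=Unpack helm to /opt/bin
  ConditionPathExists=/opt/helm-v3.13.2-linux-amd64.tar.gz

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/usr/bin/tar -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz -C /opt/bin --strip-components=1 linux-amd64/helm

  [Install]
  WantedBy=multi-user.target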
Dec 13 13:41:59.801669 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:41:59.802191 dbus-daemon[1419]: [system] SELinux support is enabled Dec 13 13:41:59.806816 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 13:41:59.807934 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:41:59.816789 extend-filesystems[1423]: Found loop4 Dec 13 13:41:59.825990 extend-filesystems[1423]: Found loop5 Dec 13 13:41:59.825990 extend-filesystems[1423]: Found loop6 Dec 13 13:41:59.825990 extend-filesystems[1423]: Found loop7 Dec 13 13:41:59.825990 extend-filesystems[1423]: Found vda Dec 13 13:41:59.825990 extend-filesystems[1423]: Found vda1 Dec 13 13:41:59.825990 extend-filesystems[1423]: Found vda2 Dec 13 13:41:59.825990 extend-filesystems[1423]: Found vda3 Dec 13 13:41:59.825990 extend-filesystems[1423]: Found usr Dec 13 13:41:59.825990 extend-filesystems[1423]: Found vda4 Dec 13 13:41:59.825990 extend-filesystems[1423]: Found vda6 Dec 13 13:41:59.825990 extend-filesystems[1423]: Found vda7 Dec 13 13:41:59.825990 extend-filesystems[1423]: Found vda9 Dec 13 13:41:59.825990 extend-filesystems[1423]: Checking size of /dev/vda9 Dec 13 13:41:59.819884 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:41:59.860712 jq[1438]: true Dec 13 13:41:59.820063 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 13:41:59.820320 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:41:59.820469 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:41:59.840894 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:41:59.841041 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:41:59.863924 extend-filesystems[1423]: Resized partition /dev/vda9 Dec 13 13:41:59.882045 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024) Dec 13 13:41:59.886162 update_engine[1437]: I20241213 13:41:59.875388 1437 main.cc:92] Flatcar Update Engine starting Dec 13 13:41:59.886162 update_engine[1437]: I20241213 13:41:59.881667 1437 update_check_scheduler.cc:74] Next update check in 5m5s Dec 13 13:41:59.890752 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Dec 13 13:41:59.867200 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:41:59.867246 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 13:41:59.887936 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:41:59.887960 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 13:41:59.890900 (ntainerd)[1447]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:41:59.899215 tar[1443]: linux-amd64/helm Dec 13 13:41:59.898625 systemd[1]: Started update-engine.service - Update Engine. Dec 13 13:41:59.899742 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
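The root filesystem grow logged above is an online ext4 resize: from 1617920 4 KiB blocks (about 6.2 GiB) to 4635643 blocks (about 17.7 GiB), filling the enlarged /dev/vda9 partition. extend-filesystems.service drives this automatically on Flatcar; the equivalent manual step on an already-enlarged partition with a mounted ext4 filesystem is sketched here:

  # grow a mounted ext4 filesystem to fill its partition
  resize2fs /dev/vda9
  df -h /        # confirm the new size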
Dec 13 13:41:59.907468 jq[1445]: true Dec 13 13:41:59.908669 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:41:59.967115 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1284) Dec 13 13:41:59.986197 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Dec 13 13:42:00.008785 systemd-logind[1435]: New seat seat0. Dec 13 13:42:00.087927 systemd-logind[1435]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 13:42:00.088029 systemd-logind[1435]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 13:42:00.088344 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 13:42:00.097458 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 13:42:00.097458 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 3 Dec 13 13:42:00.097458 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Dec 13 13:42:00.115367 extend-filesystems[1423]: Resized filesystem in /dev/vda9 Dec 13 13:42:00.098949 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:42:00.099556 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:42:00.125913 bash[1476]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:42:00.127202 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 13:42:00.140781 systemd[1]: Starting sshkeys.service... Dec 13 13:42:00.163300 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:42:00.179044 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 13:42:00.190005 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 13:42:00.377540 containerd[1447]: time="2024-12-13T13:42:00.376247681Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:42:00.378349 sshd_keygen[1440]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:42:00.417364 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:42:00.423391 containerd[1447]: time="2024-12-13T13:42:00.423198116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:42:00.428544 containerd[1447]: time="2024-12-13T13:42:00.425615821Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:42:00.428544 containerd[1447]: time="2024-12-13T13:42:00.425648933Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:42:00.428544 containerd[1447]: time="2024-12-13T13:42:00.425667388Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:42:00.428544 containerd[1447]: time="2024-12-13T13:42:00.425835333Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:42:00.428544 containerd[1447]: time="2024-12-13T13:42:00.425864878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Dec 13 13:42:00.428544 containerd[1447]: time="2024-12-13T13:42:00.425930571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:42:00.428544 containerd[1447]: time="2024-12-13T13:42:00.425945439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:42:00.428544 containerd[1447]: time="2024-12-13T13:42:00.426096462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:42:00.428544 containerd[1447]: time="2024-12-13T13:42:00.426113164Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:42:00.428544 containerd[1447]: time="2024-12-13T13:42:00.426127330Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:42:00.428544 containerd[1447]: time="2024-12-13T13:42:00.426137519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:42:00.428953 containerd[1447]: time="2024-12-13T13:42:00.426222509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:42:00.428953 containerd[1447]: time="2024-12-13T13:42:00.426427042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:42:00.428953 containerd[1447]: time="2024-12-13T13:42:00.426542419Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:42:00.428953 containerd[1447]: time="2024-12-13T13:42:00.426558489Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:42:00.428953 containerd[1447]: time="2024-12-13T13:42:00.426633760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:42:00.428953 containerd[1447]: time="2024-12-13T13:42:00.426684525Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:42:00.429913 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:42:00.436435 systemd[1]: Started sshd@0-172.24.4.155:22-172.24.4.1:47118.service - OpenSSH per-connection server daemon (172.24.4.1:47118). Dec 13 13:42:00.443037 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:42:00.443227 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:42:00.448547 containerd[1447]: time="2024-12-13T13:42:00.447918870Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:42:00.449786 containerd[1447]: time="2024-12-13T13:42:00.448741313Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:42:00.449786 containerd[1447]: time="2024-12-13T13:42:00.448766620Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Dec 13 13:42:00.449786 containerd[1447]: time="2024-12-13T13:42:00.449553797Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:42:00.449786 containerd[1447]: time="2024-12-13T13:42:00.449578984Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:42:00.449786 containerd[1447]: time="2024-12-13T13:42:00.449717564Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:42:00.452316 containerd[1447]: time="2024-12-13T13:42:00.451943288Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:42:00.452316 containerd[1447]: time="2024-12-13T13:42:00.452077059Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:42:00.452316 containerd[1447]: time="2024-12-13T13:42:00.452095023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:42:00.452316 containerd[1447]: time="2024-12-13T13:42:00.452112906Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:42:00.452316 containerd[1447]: time="2024-12-13T13:42:00.452128004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:42:00.452316 containerd[1447]: time="2024-12-13T13:42:00.452142071Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:42:00.452316 containerd[1447]: time="2024-12-13T13:42:00.452155957Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:42:00.452316 containerd[1447]: time="2024-12-13T13:42:00.452170364Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:42:00.452316 containerd[1447]: time="2024-12-13T13:42:00.452201292Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:42:00.452316 containerd[1447]: time="2024-12-13T13:42:00.452217262Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:42:00.452316 containerd[1447]: time="2024-12-13T13:42:00.452232210Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:42:00.452316 containerd[1447]: time="2024-12-13T13:42:00.452246687Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:42:00.452316 containerd[1447]: time="2024-12-13T13:42:00.452268147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.452316 containerd[1447]: time="2024-12-13T13:42:00.452282214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.453007 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:42:00.459541 containerd[1447]: time="2024-12-13T13:42:00.452298985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Dec 13 13:42:00.459541 containerd[1447]: time="2024-12-13T13:42:00.459424509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.459541 containerd[1447]: time="2024-12-13T13:42:00.459452632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.459541 containerd[1447]: time="2024-12-13T13:42:00.459469343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.459541 containerd[1447]: time="2024-12-13T13:42:00.459483089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.459541 containerd[1447]: time="2024-12-13T13:42:00.459498257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.459541 containerd[1447]: time="2024-12-13T13:42:00.459532792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.459727 containerd[1447]: time="2024-12-13T13:42:00.459553751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.459727 containerd[1447]: time="2024-12-13T13:42:00.459567347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.459727 containerd[1447]: time="2024-12-13T13:42:00.459582736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.459727 containerd[1447]: time="2024-12-13T13:42:00.459596462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.459727 containerd[1447]: time="2024-12-13T13:42:00.459612572Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:42:00.459727 containerd[1447]: time="2024-12-13T13:42:00.459641115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.459727 containerd[1447]: time="2024-12-13T13:42:00.459655212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.459727 containerd[1447]: time="2024-12-13T13:42:00.459666533Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:42:00.459886 containerd[1447]: time="2024-12-13T13:42:00.459781318Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:42:00.459886 containerd[1447]: time="2024-12-13T13:42:00.459805724Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:42:00.459886 containerd[1447]: time="2024-12-13T13:42:00.459817356Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:42:00.459886 containerd[1447]: time="2024-12-13T13:42:00.459829859Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:42:00.459886 containerd[1447]: time="2024-12-13T13:42:00.459840179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Dec 13 13:42:00.459886 containerd[1447]: time="2024-12-13T13:42:00.459853423Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:42:00.459886 containerd[1447]: time="2024-12-13T13:42:00.459864975Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:42:00.459886 containerd[1447]: time="2024-12-13T13:42:00.459877439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 13:42:00.461551 containerd[1447]: time="2024-12-13T13:42:00.460176319Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:42:00.461551 containerd[1447]: time="2024-12-13T13:42:00.460237293Z" level=info msg="Connect containerd service" Dec 13 13:42:00.461551 containerd[1447]: time="2024-12-13T13:42:00.460275826Z" level=info msg="using legacy CRI server" Dec 13 13:42:00.461551 containerd[1447]: time="2024-12-13T13:42:00.460283540Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:42:00.461551 containerd[1447]: 
time="2024-12-13T13:42:00.460383277Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:42:00.461551 containerd[1447]: time="2024-12-13T13:42:00.461127854Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:42:00.461551 containerd[1447]: time="2024-12-13T13:42:00.461257437Z" level=info msg="Start subscribing containerd event" Dec 13 13:42:00.461551 containerd[1447]: time="2024-12-13T13:42:00.461298373Z" level=info msg="Start recovering state" Dec 13 13:42:00.461551 containerd[1447]: time="2024-12-13T13:42:00.461350261Z" level=info msg="Start event monitor" Dec 13 13:42:00.461551 containerd[1447]: time="2024-12-13T13:42:00.461360861Z" level=info msg="Start snapshots syncer" Dec 13 13:42:00.461551 containerd[1447]: time="2024-12-13T13:42:00.461368735Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:42:00.461551 containerd[1447]: time="2024-12-13T13:42:00.461376771Z" level=info msg="Start streaming server" Dec 13 13:42:00.461979 containerd[1447]: time="2024-12-13T13:42:00.461812127Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:42:00.461979 containerd[1447]: time="2024-12-13T13:42:00.461859776Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:42:00.463153 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:42:00.466872 containerd[1447]: time="2024-12-13T13:42:00.465673269Z" level=info msg="containerd successfully booted in 0.090345s" Dec 13 13:42:00.488880 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:42:00.501063 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:42:00.511020 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 13:42:00.513266 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:42:00.708853 tar[1443]: linux-amd64/LICENSE Dec 13 13:42:00.709031 tar[1443]: linux-amd64/README.md Dec 13 13:42:00.728151 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 13:42:01.518021 systemd-networkd[1357]: eth0: Gained IPv6LL Dec 13 13:42:01.525457 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:42:01.529453 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:42:01.543058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:42:01.551705 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:42:01.605946 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:42:01.626816 sshd[1504]: Accepted publickey for core from 172.24.4.1 port 47118 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:42:01.629773 sshd-session[1504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:42:01.656487 systemd-logind[1435]: New session 1 of user core. Dec 13 13:42:01.657345 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:42:01.668390 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:42:01.682862 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Dec 13 13:42:01.694463 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:42:01.698160 (systemd)[1531]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:42:01.814742 systemd[1531]: Queued start job for default target default.target. Dec 13 13:42:01.819823 systemd[1531]: Created slice app.slice - User Application Slice. Dec 13 13:42:01.819849 systemd[1531]: Reached target paths.target - Paths. Dec 13 13:42:01.819865 systemd[1531]: Reached target timers.target - Timers. Dec 13 13:42:01.821168 systemd[1531]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:42:01.842957 systemd[1531]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:42:01.843819 systemd[1531]: Reached target sockets.target - Sockets. Dec 13 13:42:01.843836 systemd[1531]: Reached target basic.target - Basic System. Dec 13 13:42:01.843877 systemd[1531]: Reached target default.target - Main User Target. Dec 13 13:42:01.843902 systemd[1531]: Startup finished in 138ms. Dec 13 13:42:01.844328 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:42:01.852931 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 13:42:02.255165 systemd[1]: Started sshd@1-172.24.4.155:22-172.24.4.1:47126.service - OpenSSH per-connection server daemon (172.24.4.1:47126). Dec 13 13:42:03.662390 sshd[1542]: Accepted publickey for core from 172.24.4.1 port 47126 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:42:03.665652 sshd-session[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:42:03.679774 systemd-logind[1435]: New session 2 of user core. Dec 13 13:42:03.690670 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:42:03.943843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:42:03.944180 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:42:04.532409 sshd[1546]: Connection closed by 172.24.4.1 port 47126 Dec 13 13:42:04.534922 sshd-session[1542]: pam_unix(sshd:session): session closed for user core Dec 13 13:42:04.544788 systemd[1]: sshd@1-172.24.4.155:22-172.24.4.1:47126.service: Deactivated successfully. Dec 13 13:42:04.547184 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:42:04.548885 systemd-logind[1435]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:42:04.557640 systemd[1]: Started sshd@2-172.24.4.155:22-172.24.4.1:58074.service - OpenSSH per-connection server daemon (172.24.4.1:58074). Dec 13 13:42:04.565024 systemd-logind[1435]: Removed session 2. Dec 13 13:42:05.537207 agetty[1510]: failed to open credentials directory Dec 13 13:42:05.541274 agetty[1512]: failed to open credentials directory Dec 13 13:42:05.554660 login[1510]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 13:42:05.563465 systemd-logind[1435]: New session 3 of user core. Dec 13 13:42:05.571699 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 13:42:05.582489 login[1512]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 13:42:05.592972 systemd-logind[1435]: New session 4 of user core. Dec 13 13:42:05.600402 systemd[1]: Started session-4.scope - Session 4 of User core. 
Dec 13 13:42:05.743015 kubelet[1552]: E1213 13:42:05.742869 1552 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:42:05.745834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:42:05.746082 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:42:05.746510 systemd[1]: kubelet.service: Consumed 1.930s CPU time. Dec 13 13:42:06.123300 sshd[1561]: Accepted publickey for core from 172.24.4.1 port 58074 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:42:06.125985 sshd-session[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:42:06.136680 systemd-logind[1435]: New session 5 of user core. Dec 13 13:42:06.144919 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 13:42:06.764642 sshd[1590]: Connection closed by 172.24.4.1 port 58074 Dec 13 13:42:06.765796 sshd-session[1561]: pam_unix(sshd:session): session closed for user core Dec 13 13:42:06.771761 systemd[1]: sshd@2-172.24.4.155:22-172.24.4.1:58074.service: Deactivated successfully. Dec 13 13:42:06.775429 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:42:06.779117 systemd-logind[1435]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:42:06.781575 systemd-logind[1435]: Removed session 5. Dec 13 13:42:06.820022 coreos-metadata[1418]: Dec 13 13:42:06.819 WARN failed to locate config-drive, using the metadata service API instead Dec 13 13:42:06.869542 coreos-metadata[1418]: Dec 13 13:42:06.869 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 13 13:42:07.081842 coreos-metadata[1418]: Dec 13 13:42:07.081 INFO Fetch successful Dec 13 13:42:07.081985 coreos-metadata[1418]: Dec 13 13:42:07.081 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 13:42:07.093186 coreos-metadata[1418]: Dec 13 13:42:07.093 INFO Fetch successful Dec 13 13:42:07.093186 coreos-metadata[1418]: Dec 13 13:42:07.093 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 13:42:07.109813 coreos-metadata[1418]: Dec 13 13:42:07.109 INFO Fetch successful Dec 13 13:42:07.110034 coreos-metadata[1418]: Dec 13 13:42:07.109 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 13:42:07.125649 coreos-metadata[1418]: Dec 13 13:42:07.125 INFO Fetch successful Dec 13 13:42:07.125649 coreos-metadata[1418]: Dec 13 13:42:07.125 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 13:42:07.140315 coreos-metadata[1418]: Dec 13 13:42:07.140 INFO Fetch successful Dec 13 13:42:07.140650 coreos-metadata[1418]: Dec 13 13:42:07.140 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 13:42:07.156779 coreos-metadata[1418]: Dec 13 13:42:07.156 INFO Fetch successful Dec 13 13:42:07.203956 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 13:42:07.205703 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
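The coreos-metadata entries above show the agent failing to find a config-drive and falling back to the OpenStack metadata service, fetching meta_data.json plus the hostname, instance-id, instance-type, local-ipv4 and public-ipv4 endpoints. A minimal Go sketch of those same HTTP requests follows; the URLs are taken verbatim from the log, everything else is illustrative and is not the actual coreos-metadata implementation.

```go
// metadata_probe.go - hedged illustration of the HTTP fetches logged by
// coreos-metadata above; not the real agent's code.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func fetch(client *http.Client, url string) (string, error) {
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Endpoints taken verbatim from the log entries above.
	urls := []string{
		"http://169.254.169.254/openstack/2012-08-10/meta_data.json",
		"http://169.254.169.254/latest/meta-data/hostname",
		"http://169.254.169.254/latest/meta-data/instance-id",
		"http://169.254.169.254/latest/meta-data/instance-type",
		"http://169.254.169.254/latest/meta-data/local-ipv4",
		"http://169.254.169.254/latest/meta-data/public-ipv4",
	}
	for _, u := range urls {
		body, err := fetch(client, u)
		if err != nil {
			fmt.Printf("fetch %s: %v\n", u, err)
			continue
		}
		fmt.Printf("%s -> %s\n", u, body)
	}
}
```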
Dec 13 13:42:07.257003 coreos-metadata[1489]: Dec 13 13:42:07.256 WARN failed to locate config-drive, using the metadata service API instead Dec 13 13:42:07.300380 coreos-metadata[1489]: Dec 13 13:42:07.300 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 13:42:07.316566 coreos-metadata[1489]: Dec 13 13:42:07.316 INFO Fetch successful Dec 13 13:42:07.316566 coreos-metadata[1489]: Dec 13 13:42:07.316 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 13:42:07.330900 coreos-metadata[1489]: Dec 13 13:42:07.330 INFO Fetch successful Dec 13 13:42:07.540572 unknown[1489]: wrote ssh authorized keys file for user: core Dec 13 13:42:07.580609 update-ssh-keys[1604]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:42:07.581763 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 13:42:07.586125 systemd[1]: Finished sshkeys.service. Dec 13 13:42:07.591901 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:42:07.592471 systemd[1]: Startup finished in 1.244s (kernel) + 16.101s (initrd) + 11.452s (userspace) = 28.798s. Dec 13 13:42:15.997796 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:42:16.008982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:42:16.183253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:42:16.187988 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:42:16.671145 kubelet[1614]: E1213 13:42:16.670997 1614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:42:16.678625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:42:16.678960 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:42:16.790111 systemd[1]: Started sshd@3-172.24.4.155:22-172.24.4.1:60676.service - OpenSSH per-connection server daemon (172.24.4.1:60676). Dec 13 13:42:17.941792 sshd[1624]: Accepted publickey for core from 172.24.4.1 port 60676 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:42:17.944665 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:42:17.956037 systemd-logind[1435]: New session 6 of user core. Dec 13 13:42:17.962822 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 13:42:18.587221 sshd[1626]: Connection closed by 172.24.4.1 port 60676 Dec 13 13:42:18.584724 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Dec 13 13:42:18.602186 systemd[1]: sshd@3-172.24.4.155:22-172.24.4.1:60676.service: Deactivated successfully. Dec 13 13:42:18.605212 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:42:18.608083 systemd-logind[1435]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:42:18.615355 systemd[1]: Started sshd@4-172.24.4.155:22-172.24.4.1:60684.service - OpenSSH per-connection server daemon (172.24.4.1:60684). Dec 13 13:42:18.618680 systemd-logind[1435]: Removed session 6. 
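The coreos-metadata-sshkeys entries above fetch /latest/meta-data/public-keys/0/openssh-key and then report "wrote ssh authorized keys file for user: core", after which update-ssh-keys records the new /home/core/.ssh/authorized_keys. A hedged Go sketch of that fetch-and-write step; the URL and file path come from the log, the permissions and error handling are assumptions.

```go
// sshkeys_sketch.go - illustration of the public-key fetch and
// authorized_keys update logged above; not the real coreos-metadata code.
package main

import (
	"io"
	"log"
	"net/http"
	"os"
	"path/filepath"
)

func main() {
	const keyURL = "http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key"
	resp, err := http.Get(keyURL)
	if err != nil {
		log.Fatalf("fetch %s: %v", keyURL, err)
	}
	defer resp.Body.Close()
	key, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("read key: %v", err)
	}

	// Path taken from the "Updated /home/core/.ssh/authorized_keys" entries.
	authKeys := "/home/core/.ssh/authorized_keys"
	if err := os.MkdirAll(filepath.Dir(authKeys), 0o700); err != nil {
		log.Fatalf("mkdir: %v", err)
	}
	if err := os.WriteFile(authKeys, append(key, '\n'), 0o600); err != nil {
		log.Fatalf("write: %v", err)
	}
	log.Printf("wrote ssh authorized keys file for user: core")
}
```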
Dec 13 13:42:19.819803 sshd[1631]: Accepted publickey for core from 172.24.4.1 port 60684 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:42:19.822648 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:42:19.832142 systemd-logind[1435]: New session 7 of user core. Dec 13 13:42:19.844883 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:42:20.928834 sshd[1633]: Connection closed by 172.24.4.1 port 60684 Dec 13 13:42:20.930182 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Dec 13 13:42:20.943248 systemd[1]: sshd@4-172.24.4.155:22-172.24.4.1:60684.service: Deactivated successfully. Dec 13 13:42:20.946463 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:42:20.948147 systemd-logind[1435]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:42:20.958179 systemd[1]: Started sshd@5-172.24.4.155:22-172.24.4.1:60700.service - OpenSSH per-connection server daemon (172.24.4.1:60700). Dec 13 13:42:20.961341 systemd-logind[1435]: Removed session 7. Dec 13 13:42:22.367619 sshd[1638]: Accepted publickey for core from 172.24.4.1 port 60700 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:42:22.370189 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:42:22.378501 systemd-logind[1435]: New session 8 of user core. Dec 13 13:42:22.397954 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 13:42:23.011612 sshd[1640]: Connection closed by 172.24.4.1 port 60700 Dec 13 13:42:23.012282 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Dec 13 13:42:23.025967 systemd[1]: sshd@5-172.24.4.155:22-172.24.4.1:60700.service: Deactivated successfully. Dec 13 13:42:23.029026 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:42:23.030665 systemd-logind[1435]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:42:23.038120 systemd[1]: Started sshd@6-172.24.4.155:22-172.24.4.1:60714.service - OpenSSH per-connection server daemon (172.24.4.1:60714). Dec 13 13:42:23.040560 systemd-logind[1435]: Removed session 8. Dec 13 13:42:24.418781 sshd[1645]: Accepted publickey for core from 172.24.4.1 port 60714 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:42:24.421503 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:42:24.431093 systemd-logind[1435]: New session 9 of user core. Dec 13 13:42:24.438838 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:42:24.858211 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:42:24.858967 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:42:25.577838 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 13:42:25.578443 (dockerd)[1666]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 13:42:26.129772 dockerd[1666]: time="2024-12-13T13:42:26.128614889Z" level=info msg="Starting up" Dec 13 13:42:26.356542 dockerd[1666]: time="2024-12-13T13:42:26.355425112Z" level=info msg="Loading containers: start." Dec 13 13:42:26.648310 kernel: Initializing XFRM netlink socket Dec 13 13:42:26.738436 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Dec 13 13:42:26.748835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:42:26.945586 systemd-networkd[1357]: docker0: Link UP Dec 13 13:42:27.308710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:42:27.318850 (kubelet)[1816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:42:27.585036 kubelet[1816]: E1213 13:42:27.583283 1816 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:42:27.590078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:42:27.590686 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:42:27.592867 dockerd[1666]: time="2024-12-13T13:42:27.592707975Z" level=info msg="Loading containers: done." Dec 13 13:42:27.653844 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3983110801-merged.mount: Deactivated successfully. Dec 13 13:42:27.699824 dockerd[1666]: time="2024-12-13T13:42:27.699652740Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 13:42:27.700131 dockerd[1666]: time="2024-12-13T13:42:27.699865429Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 13:42:27.700131 dockerd[1666]: time="2024-12-13T13:42:27.700098065Z" level=info msg="Daemon has completed initialization" Dec 13 13:42:27.766086 dockerd[1666]: time="2024-12-13T13:42:27.764855037Z" level=info msg="API listen on /run/docker.sock" Dec 13 13:42:27.765075 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 13:42:29.653672 containerd[1447]: time="2024-12-13T13:42:29.653545285Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 13:42:30.418060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4271442666.mount: Deactivated successfully. 
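After loading containers, dockerd above reports "Daemon has completed initialization" and "API listen on /run/docker.sock" (version 27.3.1, overlay2 storage driver). A minimal sketch of talking to that socket, assuming the standard Docker Go SDK (github.com/docker/docker/client); the socket path and version come from the log, the rest is illustrative.

```go
// docker_ping.go - hedged sketch: query the daemon that logged
// "API listen on /run/docker.sock" above, using the Docker Go SDK.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"),
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatalf("new client: %v", err)
	}
	defer cli.Close()

	ver, err := cli.ServerVersion(context.Background())
	if err != nil {
		log.Fatalf("server version: %v", err)
	}
	// The log above reports version=27.3.1 storage-driver=overlay2.
	fmt.Printf("docker %s (API %s)\n", ver.Version, ver.APIVersion)
}
```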
Dec 13 13:42:32.576790 containerd[1447]: time="2024-12-13T13:42:32.576713850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:32.578258 containerd[1447]: time="2024-12-13T13:42:32.578118431Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675650" Dec 13 13:42:32.579312 containerd[1447]: time="2024-12-13T13:42:32.579241223Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:32.582739 containerd[1447]: time="2024-12-13T13:42:32.582665010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:32.584148 containerd[1447]: time="2024-12-13T13:42:32.583938394Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.93034575s" Dec 13 13:42:32.584148 containerd[1447]: time="2024-12-13T13:42:32.583980765Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 13:42:32.611480 containerd[1447]: time="2024-12-13T13:42:32.611209925Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 13:42:35.503350 containerd[1447]: time="2024-12-13T13:42:35.503212127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:35.504770 containerd[1447]: time="2024-12-13T13:42:35.504702308Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606417" Dec 13 13:42:35.505570 containerd[1447]: time="2024-12-13T13:42:35.505506810Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:35.508794 containerd[1447]: time="2024-12-13T13:42:35.508740698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:35.511023 containerd[1447]: time="2024-12-13T13:42:35.510205211Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.898952837s" Dec 13 13:42:35.511023 containerd[1447]: time="2024-12-13T13:42:35.510240878Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 13:42:35.536414 
containerd[1447]: time="2024-12-13T13:42:35.536345030Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 13:42:37.728228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 13:42:37.737080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:42:38.066756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:42:38.084230 (kubelet)[1956]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:42:38.172960 kubelet[1956]: E1213 13:42:38.172862 1956 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:42:38.175695 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:42:38.176023 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:42:38.535256 containerd[1447]: time="2024-12-13T13:42:38.534854878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:38.537893 containerd[1447]: time="2024-12-13T13:42:38.537747602Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783043" Dec 13 13:42:38.538893 containerd[1447]: time="2024-12-13T13:42:38.538794660Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:38.547625 containerd[1447]: time="2024-12-13T13:42:38.547429924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:38.551098 containerd[1447]: time="2024-12-13T13:42:38.550793003Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 3.014386697s" Dec 13 13:42:38.551098 containerd[1447]: time="2024-12-13T13:42:38.550874736Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 13:42:38.606744 containerd[1447]: time="2024-12-13T13:42:38.606640589Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 13:42:40.004962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1957038883.mount: Deactivated successfully. 
Dec 13 13:42:41.070505 containerd[1447]: time="2024-12-13T13:42:41.069722259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:41.072773 containerd[1447]: time="2024-12-13T13:42:41.072563916Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478" Dec 13 13:42:41.076248 containerd[1447]: time="2024-12-13T13:42:41.075326445Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:41.081580 containerd[1447]: time="2024-12-13T13:42:41.081370206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:41.083887 containerd[1447]: time="2024-12-13T13:42:41.083407293Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.476681584s" Dec 13 13:42:41.083887 containerd[1447]: time="2024-12-13T13:42:41.083502191Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 13:42:41.139729 containerd[1447]: time="2024-12-13T13:42:41.139598121Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 13:42:41.729804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323986237.mount: Deactivated successfully. 
Dec 13 13:42:42.959845 containerd[1447]: time="2024-12-13T13:42:42.959795205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:42.961190 containerd[1447]: time="2024-12-13T13:42:42.961031045Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Dec 13 13:42:42.962137 containerd[1447]: time="2024-12-13T13:42:42.962070137Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:42.965566 containerd[1447]: time="2024-12-13T13:42:42.965471676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:42.966903 containerd[1447]: time="2024-12-13T13:42:42.966755717Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.827083807s" Dec 13 13:42:42.966903 containerd[1447]: time="2024-12-13T13:42:42.966794349Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 13:42:42.991256 containerd[1447]: time="2024-12-13T13:42:42.991175228Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 13:42:43.547716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount854675404.mount: Deactivated successfully. 
Dec 13 13:42:43.555843 containerd[1447]: time="2024-12-13T13:42:43.555778229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:43.558129 containerd[1447]: time="2024-12-13T13:42:43.558057579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Dec 13 13:42:43.558639 containerd[1447]: time="2024-12-13T13:42:43.558484370Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:43.564493 containerd[1447]: time="2024-12-13T13:42:43.564314839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:43.567103 containerd[1447]: time="2024-12-13T13:42:43.566835483Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 575.578521ms" Dec 13 13:42:43.567103 containerd[1447]: time="2024-12-13T13:42:43.566914040Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 13:42:43.622735 containerd[1447]: time="2024-12-13T13:42:43.622670083Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 13:42:44.242371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704947770.mount: Deactivated successfully. Dec 13 13:42:44.815561 update_engine[1437]: I20241213 13:42:44.814249 1437 update_attempter.cc:509] Updating boot flags... 
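Each containerd PullImage entry above records the repo tag, digest, unpacked size and elapsed time (for example pause:3.9 in roughly 576 ms, and the etcd:3.5.12-0 pull that completes further down). A minimal sketch of issuing the same kind of pull with the containerd 1.7 Go client (github.com/containerd/containerd) against the socket address logged earlier; the image reference and the CRI plugin's "k8s.io" namespace are taken from the log, and this is not the kubelet/CRI code path itself.

```go
// pull_sketch.go - hedged illustration of a containerd image pull like the
// PullImage entries above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Socket address from the "serving..." entries earlier in the log.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// The CRI plugin stores Kubernetes images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Image reference taken from the pull logged above.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull: %v", err)
	}
	fmt.Printf("pulled %s (%s)\n", img.Name(), img.Target().Digest)
}
```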
Dec 13 13:42:45.143627 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2079) Dec 13 13:42:45.306826 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2080) Dec 13 13:42:47.472939 containerd[1447]: time="2024-12-13T13:42:47.472860330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:47.474357 containerd[1447]: time="2024-12-13T13:42:47.474301515Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Dec 13 13:42:47.475475 containerd[1447]: time="2024-12-13T13:42:47.475412070Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:47.478888 containerd[1447]: time="2024-12-13T13:42:47.478820890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:42:47.480315 containerd[1447]: time="2024-12-13T13:42:47.480097065Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.857359967s" Dec 13 13:42:47.480315 containerd[1447]: time="2024-12-13T13:42:47.480127252Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 13:42:48.227897 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 13:42:48.240071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:42:48.723618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:42:48.733804 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:42:48.910040 kubelet[2142]: E1213 13:42:48.909961 2142 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:42:48.914791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:42:48.915054 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:42:53.025673 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:42:53.046385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:42:53.105234 systemd[1]: Reloading requested from client PID 2181 ('systemctl') (unit session-9.scope)... Dec 13 13:42:53.105782 systemd[1]: Reloading... Dec 13 13:42:53.258556 zram_generator::config[2216]: No configuration found. Dec 13 13:42:53.434492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 13:42:53.522825 systemd[1]: Reloading finished in 416 ms. Dec 13 13:42:53.587262 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 13:42:53.587454 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 13:42:53.588218 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:42:53.597295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:42:53.927853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:42:53.943105 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:42:54.097290 kubelet[2288]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:42:54.098367 kubelet[2288]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:42:54.098367 kubelet[2288]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:42:54.114721 kubelet[2288]: I1213 13:42:54.114620 2288 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:42:54.691920 kubelet[2288]: I1213 13:42:54.691852 2288 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:42:54.691920 kubelet[2288]: I1213 13:42:54.691910 2288 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:42:54.692700 kubelet[2288]: I1213 13:42:54.692670 2288 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:42:54.728433 kubelet[2288]: I1213 13:42:54.728393 2288 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:42:54.728924 kubelet[2288]: E1213 13:42:54.728907 2288 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.155:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:54.746244 kubelet[2288]: I1213 13:42:54.746222 2288 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:42:54.746941 kubelet[2288]: I1213 13:42:54.746626 2288 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:42:54.746941 kubelet[2288]: I1213 13:42:54.746660 2288 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-0-0-c-ef2c5deb25.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:42:54.748119 kubelet[2288]: I1213 13:42:54.747844 2288 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:42:54.748119 kubelet[2288]: I1213 13:42:54.747866 2288 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:42:54.748119 kubelet[2288]: I1213 13:42:54.747997 2288 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:42:54.749090 kubelet[2288]: I1213 13:42:54.749044 2288 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:42:54.749090 kubelet[2288]: I1213 13:42:54.749062 2288 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:42:54.749789 kubelet[2288]: W1213 13:42:54.749638 2288 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-0-0-c-ef2c5deb25.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:54.749789 kubelet[2288]: E1213 13:42:54.749721 2288 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-0-0-c-ef2c5deb25.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:54.750044 kubelet[2288]: I1213 13:42:54.749852 2288 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:42:54.750044 kubelet[2288]: I1213 13:42:54.749876 2288 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:42:54.757934 kubelet[2288]: W1213 13:42:54.757793 2288 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:54.757934 kubelet[2288]: E1213 13:42:54.757849 2288 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:54.759566 kubelet[2288]: I1213 13:42:54.758223 2288 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:42:54.761533 kubelet[2288]: I1213 13:42:54.760664 2288 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:42:54.761533 kubelet[2288]: W1213 13:42:54.760719 2288 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:42:54.761533 kubelet[2288]: I1213 13:42:54.761363 2288 server.go:1264] "Started kubelet" Dec 13 13:42:54.764500 kubelet[2288]: I1213 13:42:54.763920 2288 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:42:54.765991 kubelet[2288]: I1213 13:42:54.765947 2288 server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:42:54.772730 kubelet[2288]: I1213 13:42:54.772667 2288 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:42:54.773083 kubelet[2288]: I1213 13:42:54.773068 2288 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:42:54.773653 kubelet[2288]: E1213 13:42:54.773531 2288 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.155:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.155:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-0-0-c-ef2c5deb25.novalocal.1810c05fec9648c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-0-0-c-ef2c5deb25.novalocal,UID:ci-4186-0-0-c-ef2c5deb25.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-0-0-c-ef2c5deb25.novalocal,},FirstTimestamp:2024-12-13 13:42:54.761339072 +0000 UTC m=+0.809547229,LastTimestamp:2024-12-13 13:42:54.761339072 +0000 UTC m=+0.809547229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-0-0-c-ef2c5deb25.novalocal,}" Dec 13 13:42:54.775799 kubelet[2288]: I1213 13:42:54.775783 2288 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:42:54.783000 kubelet[2288]: E1213 13:42:54.782973 2288 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186-0-0-c-ef2c5deb25.novalocal\" not found" Dec 13 13:42:54.783562 kubelet[2288]: I1213 13:42:54.783188 2288 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:42:54.783562 kubelet[2288]: I1213 13:42:54.783341 2288 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 13:42:54.783562 kubelet[2288]: I1213 13:42:54.783411 2288 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:42:54.784070 kubelet[2288]: W1213 13:42:54.784028 
2288 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:54.784163 kubelet[2288]: E1213 13:42:54.784146 2288 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:54.784499 kubelet[2288]: E1213 13:42:54.784473 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-0-0-c-ef2c5deb25.novalocal?timeout=10s\": dial tcp 172.24.4.155:6443: connect: connection refused" interval="200ms" Dec 13 13:42:54.790053 kubelet[2288]: I1213 13:42:54.789638 2288 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:42:54.790053 kubelet[2288]: I1213 13:42:54.789737 2288 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:42:54.796546 kubelet[2288]: I1213 13:42:54.794306 2288 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:42:54.823174 kubelet[2288]: E1213 13:42:54.822956 2288 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.155:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.155:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-0-0-c-ef2c5deb25.novalocal.1810c05fec9648c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-0-0-c-ef2c5deb25.novalocal,UID:ci-4186-0-0-c-ef2c5deb25.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-0-0-c-ef2c5deb25.novalocal,},FirstTimestamp:2024-12-13 13:42:54.761339072 +0000 UTC m=+0.809547229,LastTimestamp:2024-12-13 13:42:54.761339072 +0000 UTC m=+0.809547229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-0-0-c-ef2c5deb25.novalocal,}" Dec 13 13:42:54.823401 kubelet[2288]: I1213 13:42:54.823195 2288 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:42:54.827144 kubelet[2288]: I1213 13:42:54.827116 2288 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:42:54.827626 kubelet[2288]: I1213 13:42:54.827612 2288 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:42:54.827725 kubelet[2288]: I1213 13:42:54.827714 2288 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:42:54.827871 kubelet[2288]: E1213 13:42:54.827849 2288 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:42:54.832391 kubelet[2288]: W1213 13:42:54.832328 2288 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:54.833224 kubelet[2288]: E1213 13:42:54.833205 2288 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:54.845807 kubelet[2288]: I1213 13:42:54.845772 2288 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:42:54.845807 kubelet[2288]: I1213 13:42:54.845807 2288 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:42:54.845973 kubelet[2288]: I1213 13:42:54.845837 2288 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:42:54.856014 kubelet[2288]: I1213 13:42:54.855966 2288 policy_none.go:49] "None policy: Start" Dec 13 13:42:54.857148 kubelet[2288]: I1213 13:42:54.857117 2288 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:42:54.857204 kubelet[2288]: I1213 13:42:54.857159 2288 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:42:54.878448 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:42:54.892891 kubelet[2288]: I1213 13:42:54.892406 2288 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:54.893007 kubelet[2288]: E1213 13:42:54.892977 2288 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.155:6443/api/v1/nodes\": dial tcp 172.24.4.155:6443: connect: connection refused" node="ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:54.896250 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:42:54.900825 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
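Note: the repeated reflector.go failures above are the kubelet's client-go informers trying to LIST Services, CSIDrivers and RuntimeClasses from the API server at 172.24.4.155:6443 before that server (itself one of the static pods this kubelet is about to start) is listening, hence "connection refused". A minimal, purely illustrative client-go sketch of the same LIST call; the kubeconfig path below is an assumption for the example, not something taken from this log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for illustration; the kubelet itself authenticates
	// with its own client certificates rather than this file.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same LIST the reflector issues: GET /api/v1/services?limit=500
	svcs, err := cs.CoreV1().Services("").List(context.TODO(), metav1.ListOptions{Limit: 500})
	if err != nil {
		// While the API server is not yet up, this fails with
		// "connect: connection refused", exactly as in the log above.
		fmt.Println("list services:", err)
		return
	}
	fmt.Println("services:", len(svcs.Items))
}
```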
Dec 13 13:42:54.912851 kubelet[2288]: I1213 13:42:54.912813 2288 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:42:54.913411 kubelet[2288]: I1213 13:42:54.913130 2288 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:42:54.913411 kubelet[2288]: I1213 13:42:54.913388 2288 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:42:54.918140 kubelet[2288]: E1213 13:42:54.918112 2288 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-0-0-c-ef2c5deb25.novalocal\" not found" Dec 13 13:42:54.928676 kubelet[2288]: I1213 13:42:54.928610 2288 topology_manager.go:215] "Topology Admit Handler" podUID="ec0dc4a3a8ad73b8f5e9935089f660a7" podNamespace="kube-system" podName="kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:54.931125 kubelet[2288]: I1213 13:42:54.931082 2288 topology_manager.go:215] "Topology Admit Handler" podUID="d58c5cf39f56dbaa6178591c6d896010" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:54.933530 kubelet[2288]: I1213 13:42:54.933301 2288 topology_manager.go:215] "Topology Admit Handler" podUID="94966141fbfeebec5767f580797291bc" podNamespace="kube-system" podName="kube-scheduler-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:54.945649 systemd[1]: Created slice kubepods-burstable-podec0dc4a3a8ad73b8f5e9935089f660a7.slice - libcontainer container kubepods-burstable-podec0dc4a3a8ad73b8f5e9935089f660a7.slice. Dec 13 13:42:54.970382 systemd[1]: Created slice kubepods-burstable-podd58c5cf39f56dbaa6178591c6d896010.slice - libcontainer container kubepods-burstable-podd58c5cf39f56dbaa6178591c6d896010.slice. Dec 13 13:42:54.976789 systemd[1]: Created slice kubepods-burstable-pod94966141fbfeebec5767f580797291bc.slice - libcontainer container kubepods-burstable-pod94966141fbfeebec5767f580797291bc.slice. 
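Note: the three "Topology Admit Handler" pods above (kube-apiserver, kube-controller-manager, kube-scheduler) are static pods the kubelet reads from its manifest directory rather than from the API server; the directory itself is logged later as "Adding static pod path" path="/etc/kubernetes/manifests". As a sketch only (the manifest filename follows the usual kubeadm layout and is an assumption), decoding such a manifest with the client-go scheme looks roughly like:

```go
package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes/scheme"
)

func main() {
	// Assumed filename in the static-pod directory the kubelet watches.
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-apiserver.yaml")
	if err != nil {
		panic(err)
	}
	// Decode the YAML/JSON manifest into a typed v1 Pod.
	obj, _, err := scheme.Codecs.UniversalDeserializer().Decode(data, nil, nil)
	if err != nil {
		panic(err)
	}
	pod, ok := obj.(*corev1.Pod)
	if !ok {
		panic("manifest does not contain a v1 Pod")
	}
	fmt.Println("static pod:", pod.Name, "image:", pod.Spec.Containers[0].Image)
}
```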
Dec 13 13:42:54.985047 kubelet[2288]: E1213 13:42:54.984990 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-0-0-c-ef2c5deb25.novalocal?timeout=10s\": dial tcp 172.24.4.155:6443: connect: connection refused" interval="400ms" Dec 13 13:42:54.991271 kubelet[2288]: I1213 13:42:54.991108 2288 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec0dc4a3a8ad73b8f5e9935089f660a7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"ec0dc4a3a8ad73b8f5e9935089f660a7\") " pod="kube-system/kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:54.991271 kubelet[2288]: I1213 13:42:54.991146 2288 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d58c5cf39f56dbaa6178591c6d896010-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"d58c5cf39f56dbaa6178591c6d896010\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:54.991271 kubelet[2288]: I1213 13:42:54.991169 2288 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94966141fbfeebec5767f580797291bc-kubeconfig\") pod \"kube-scheduler-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"94966141fbfeebec5767f580797291bc\") " pod="kube-system/kube-scheduler-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:54.991271 kubelet[2288]: I1213 13:42:54.991192 2288 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec0dc4a3a8ad73b8f5e9935089f660a7-k8s-certs\") pod \"kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"ec0dc4a3a8ad73b8f5e9935089f660a7\") " pod="kube-system/kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:54.991452 kubelet[2288]: I1213 13:42:54.991214 2288 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d58c5cf39f56dbaa6178591c6d896010-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"d58c5cf39f56dbaa6178591c6d896010\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:54.991452 kubelet[2288]: I1213 13:42:54.991345 2288 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d58c5cf39f56dbaa6178591c6d896010-k8s-certs\") pod \"kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"d58c5cf39f56dbaa6178591c6d896010\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:54.991452 kubelet[2288]: I1213 13:42:54.991434 2288 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d58c5cf39f56dbaa6178591c6d896010-kubeconfig\") pod \"kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"d58c5cf39f56dbaa6178591c6d896010\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:54.991569 
kubelet[2288]: I1213 13:42:54.991482 2288 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec0dc4a3a8ad73b8f5e9935089f660a7-ca-certs\") pod \"kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"ec0dc4a3a8ad73b8f5e9935089f660a7\") " pod="kube-system/kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:54.991664 kubelet[2288]: I1213 13:42:54.991597 2288 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d58c5cf39f56dbaa6178591c6d896010-ca-certs\") pod \"kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"d58c5cf39f56dbaa6178591c6d896010\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:55.096444 kubelet[2288]: I1213 13:42:55.096289 2288 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:55.096882 kubelet[2288]: E1213 13:42:55.096856 2288 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.155:6443/api/v1/nodes\": dial tcp 172.24.4.155:6443: connect: connection refused" node="ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:55.265055 containerd[1447]: time="2024-12-13T13:42:55.264874746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal,Uid:ec0dc4a3a8ad73b8f5e9935089f660a7,Namespace:kube-system,Attempt:0,}" Dec 13 13:42:55.275221 containerd[1447]: time="2024-12-13T13:42:55.275119571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal,Uid:d58c5cf39f56dbaa6178591c6d896010,Namespace:kube-system,Attempt:0,}" Dec 13 13:42:55.280655 containerd[1447]: time="2024-12-13T13:42:55.280166202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-0-0-c-ef2c5deb25.novalocal,Uid:94966141fbfeebec5767f580797291bc,Namespace:kube-system,Attempt:0,}" Dec 13 13:42:55.387186 kubelet[2288]: E1213 13:42:55.387027 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-0-0-c-ef2c5deb25.novalocal?timeout=10s\": dial tcp 172.24.4.155:6443: connect: connection refused" interval="800ms" Dec 13 13:42:55.500590 kubelet[2288]: I1213 13:42:55.500491 2288 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:55.501364 kubelet[2288]: E1213 13:42:55.501177 2288 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.155:6443/api/v1/nodes\": dial tcp 172.24.4.155:6443: connect: connection refused" node="ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:55.647480 kubelet[2288]: W1213 13:42:55.647010 2288 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:55.647480 kubelet[2288]: E1213 13:42:55.647096 2288 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 
13:42:55.806105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2607012233.mount: Deactivated successfully. Dec 13 13:42:55.820992 containerd[1447]: time="2024-12-13T13:42:55.820694926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:42:55.823886 containerd[1447]: time="2024-12-13T13:42:55.823809280Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:42:55.826840 containerd[1447]: time="2024-12-13T13:42:55.826690206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 13:42:55.828082 containerd[1447]: time="2024-12-13T13:42:55.827977493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:42:55.831851 containerd[1447]: time="2024-12-13T13:42:55.831620649Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:42:55.834183 containerd[1447]: time="2024-12-13T13:42:55.833945512Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:42:55.834969 containerd[1447]: time="2024-12-13T13:42:55.834842365Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:42:55.843961 containerd[1447]: time="2024-12-13T13:42:55.843866680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:42:55.846727 containerd[1447]: time="2024-12-13T13:42:55.846039628Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.75625ms" Dec 13 13:42:55.853245 containerd[1447]: time="2024-12-13T13:42:55.851829162Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 586.724356ms" Dec 13 13:42:55.853245 containerd[1447]: time="2024-12-13T13:42:55.853101630Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 572.748487ms" Dec 13 13:42:56.025897 kubelet[2288]: W1213 13:42:56.025657 2288 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://172.24.4.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:56.025897 kubelet[2288]: E1213 13:42:56.025823 2288 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:56.087025 containerd[1447]: time="2024-12-13T13:42:56.086857430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:42:56.087025 containerd[1447]: time="2024-12-13T13:42:56.086923634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:42:56.087025 containerd[1447]: time="2024-12-13T13:42:56.086938853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:42:56.087675 containerd[1447]: time="2024-12-13T13:42:56.087040574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:42:56.088375 containerd[1447]: time="2024-12-13T13:42:56.088294286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:42:56.088507 containerd[1447]: time="2024-12-13T13:42:56.088346684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:42:56.088507 containerd[1447]: time="2024-12-13T13:42:56.088366241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:42:56.088762 containerd[1447]: time="2024-12-13T13:42:56.088538975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:42:56.122572 systemd[1]: Started cri-containerd-f03db6c43a9fecc3a6dc2ae72a297d39fd36bc9ff109dff9341efe060790327b.scope - libcontainer container f03db6c43a9fecc3a6dc2ae72a297d39fd36bc9ff109dff9341efe060790327b. Dec 13 13:42:56.128010 systemd[1]: Started cri-containerd-7be444f6a176945bd78c1319f3ce3b57d99a2f76b6ba2248e4e6838719377140.scope - libcontainer container 7be444f6a176945bd78c1319f3ce3b57d99a2f76b6ba2248e4e6838719377140. Dec 13 13:42:56.132155 containerd[1447]: time="2024-12-13T13:42:56.131869366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:42:56.132155 containerd[1447]: time="2024-12-13T13:42:56.132011663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:42:56.132155 containerd[1447]: time="2024-12-13T13:42:56.132035828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:42:56.137174 containerd[1447]: time="2024-12-13T13:42:56.132356300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:42:56.179761 systemd[1]: Started cri-containerd-f62044236d33d167f2755fa0e4147e21e52d71d97418cfad966b0c70b865153e.scope - libcontainer container f62044236d33d167f2755fa0e4147e21e52d71d97418cfad966b0c70b865153e. Dec 13 13:42:56.188425 kubelet[2288]: E1213 13:42:56.188133 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-0-0-c-ef2c5deb25.novalocal?timeout=10s\": dial tcp 172.24.4.155:6443: connect: connection refused" interval="1.6s" Dec 13 13:42:56.206574 containerd[1447]: time="2024-12-13T13:42:56.205761598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal,Uid:ec0dc4a3a8ad73b8f5e9935089f660a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7be444f6a176945bd78c1319f3ce3b57d99a2f76b6ba2248e4e6838719377140\"" Dec 13 13:42:56.215961 containerd[1447]: time="2024-12-13T13:42:56.215299195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-0-0-c-ef2c5deb25.novalocal,Uid:94966141fbfeebec5767f580797291bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f03db6c43a9fecc3a6dc2ae72a297d39fd36bc9ff109dff9341efe060790327b\"" Dec 13 13:42:56.217427 containerd[1447]: time="2024-12-13T13:42:56.217208027Z" level=info msg="CreateContainer within sandbox \"7be444f6a176945bd78c1319f3ce3b57d99a2f76b6ba2248e4e6838719377140\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:42:56.229553 containerd[1447]: time="2024-12-13T13:42:56.228955411Z" level=info msg="CreateContainer within sandbox \"f03db6c43a9fecc3a6dc2ae72a297d39fd36bc9ff109dff9341efe060790327b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:42:56.242467 kubelet[2288]: W1213 13:42:56.242361 2288 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-0-0-c-ef2c5deb25.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:56.242467 kubelet[2288]: E1213 13:42:56.242452 2288 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-0-0-c-ef2c5deb25.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:56.251600 containerd[1447]: time="2024-12-13T13:42:56.251557956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal,Uid:d58c5cf39f56dbaa6178591c6d896010,Namespace:kube-system,Attempt:0,} returns sandbox id \"f62044236d33d167f2755fa0e4147e21e52d71d97418cfad966b0c70b865153e\"" Dec 13 13:42:56.255409 containerd[1447]: time="2024-12-13T13:42:56.255358478Z" level=info msg="CreateContainer within sandbox \"f62044236d33d167f2755fa0e4147e21e52d71d97418cfad966b0c70b865153e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:42:56.277420 containerd[1447]: time="2024-12-13T13:42:56.277277278Z" level=info msg="CreateContainer within sandbox \"7be444f6a176945bd78c1319f3ce3b57d99a2f76b6ba2248e4e6838719377140\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"76c186aa07c50d178fe6580d51eed54c56b853907eddaeb68e32cccda88f392f\"" Dec 13 13:42:56.278742 
containerd[1447]: time="2024-12-13T13:42:56.278700389Z" level=info msg="StartContainer for \"76c186aa07c50d178fe6580d51eed54c56b853907eddaeb68e32cccda88f392f\"" Dec 13 13:42:56.291187 containerd[1447]: time="2024-12-13T13:42:56.291135294Z" level=info msg="CreateContainer within sandbox \"f03db6c43a9fecc3a6dc2ae72a297d39fd36bc9ff109dff9341efe060790327b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"be7ffbbc34243aa87ebe7e6896c03bd9fb7671e1cb83cd2bfe2afe7b2dd8b241\"" Dec 13 13:42:56.292259 containerd[1447]: time="2024-12-13T13:42:56.292228916Z" level=info msg="StartContainer for \"be7ffbbc34243aa87ebe7e6896c03bd9fb7671e1cb83cd2bfe2afe7b2dd8b241\"" Dec 13 13:42:56.304453 kubelet[2288]: I1213 13:42:56.304419 2288 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:56.305318 kubelet[2288]: E1213 13:42:56.305270 2288 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.155:6443/api/v1/nodes\": dial tcp 172.24.4.155:6443: connect: connection refused" node="ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:56.306689 systemd[1]: Started cri-containerd-76c186aa07c50d178fe6580d51eed54c56b853907eddaeb68e32cccda88f392f.scope - libcontainer container 76c186aa07c50d178fe6580d51eed54c56b853907eddaeb68e32cccda88f392f. Dec 13 13:42:56.316560 containerd[1447]: time="2024-12-13T13:42:56.316399884Z" level=info msg="CreateContainer within sandbox \"f62044236d33d167f2755fa0e4147e21e52d71d97418cfad966b0c70b865153e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c5086de13ce58c9e2dcc746b174f17dd07211a1868b04d3df281de0288d0260c\"" Dec 13 13:42:56.317785 containerd[1447]: time="2024-12-13T13:42:56.317680587Z" level=info msg="StartContainer for \"c5086de13ce58c9e2dcc746b174f17dd07211a1868b04d3df281de0288d0260c\"" Dec 13 13:42:56.342660 systemd[1]: Started cri-containerd-be7ffbbc34243aa87ebe7e6896c03bd9fb7671e1cb83cd2bfe2afe7b2dd8b241.scope - libcontainer container be7ffbbc34243aa87ebe7e6896c03bd9fb7671e1cb83cd2bfe2afe7b2dd8b241. Dec 13 13:42:56.346962 kubelet[2288]: W1213 13:42:56.346726 2288 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:56.346962 kubelet[2288]: E1213 13:42:56.346848 2288 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Dec 13 13:42:56.360058 systemd[1]: Started cri-containerd-c5086de13ce58c9e2dcc746b174f17dd07211a1868b04d3df281de0288d0260c.scope - libcontainer container c5086de13ce58c9e2dcc746b174f17dd07211a1868b04d3df281de0288d0260c. 
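Note: the recurring "Failed to ensure lease exists, will retry" entries show the node-lease controller backing off while the API server comes up, with the retry interval doubling from 200ms to 400ms, 800ms and 1.6s. A rough sketch of that doubling-backoff pattern using apimachinery's wait helpers; ensureLease is a hypothetical stand-in, the real controller POSTs/GETs coordination.k8s.io/v1 leases in kube-node-lease:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// ensureLease is a hypothetical stub; returning false means "not done, retry".
func ensureLease() bool { return false }

func main() {
	backoff := wait.Backoff{
		Duration: 200 * time.Millisecond, // first retry interval, as in the log
		Factor:   2.0,                    // 200ms -> 400ms -> 800ms -> 1.6s
		Steps:    5,
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d: ensuring lease...\n", attempt)
		return ensureLease(), nil // false => retry after the next, doubled interval
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}
```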
Dec 13 13:42:56.396915 containerd[1447]: time="2024-12-13T13:42:56.396869577Z" level=info msg="StartContainer for \"76c186aa07c50d178fe6580d51eed54c56b853907eddaeb68e32cccda88f392f\" returns successfully" Dec 13 13:42:56.436580 containerd[1447]: time="2024-12-13T13:42:56.436126544Z" level=info msg="StartContainer for \"be7ffbbc34243aa87ebe7e6896c03bd9fb7671e1cb83cd2bfe2afe7b2dd8b241\" returns successfully" Dec 13 13:42:56.436580 containerd[1447]: time="2024-12-13T13:42:56.436189291Z" level=info msg="StartContainer for \"c5086de13ce58c9e2dcc746b174f17dd07211a1868b04d3df281de0288d0260c\" returns successfully" Dec 13 13:42:57.910223 kubelet[2288]: I1213 13:42:57.909310 2288 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:58.848546 kubelet[2288]: E1213 13:42:58.848478 2288 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-0-0-c-ef2c5deb25.novalocal\" not found" node="ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:58.998066 kubelet[2288]: I1213 13:42:58.997805 2288 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:42:59.764393 kubelet[2288]: I1213 13:42:59.762184 2288 apiserver.go:52] "Watching apiserver" Dec 13 13:42:59.783979 kubelet[2288]: I1213 13:42:59.783869 2288 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:43:01.331804 systemd[1]: Reloading requested from client PID 2561 ('systemctl') (unit session-9.scope)... Dec 13 13:43:01.331836 systemd[1]: Reloading... Dec 13 13:43:01.486565 zram_generator::config[2600]: No configuration found. Dec 13 13:43:01.647988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:43:01.760166 systemd[1]: Reloading finished in 427 ms. Dec 13 13:43:01.805548 kubelet[2288]: I1213 13:43:01.805321 2288 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:43:01.805486 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:43:01.819237 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:43:01.819545 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:43:01.819612 systemd[1]: kubelet.service: Consumed 1.309s CPU time, 112.9M memory peak, 0B memory swap peak. Dec 13 13:43:01.829838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:43:02.494886 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:43:02.505238 (kubelet)[2665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:43:02.627452 kubelet[2665]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:43:02.627452 kubelet[2665]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 13:43:02.627452 kubelet[2665]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:43:02.627452 kubelet[2665]: I1213 13:43:02.627094 2665 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:43:02.635422 kubelet[2665]: I1213 13:43:02.632732 2665 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:43:02.635422 kubelet[2665]: I1213 13:43:02.632761 2665 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:43:02.635422 kubelet[2665]: I1213 13:43:02.634047 2665 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:43:02.642384 kubelet[2665]: I1213 13:43:02.642344 2665 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 13:43:02.647234 kubelet[2665]: I1213 13:43:02.647176 2665 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:43:02.656675 kubelet[2665]: I1213 13:43:02.656415 2665 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:43:02.660182 kubelet[2665]: I1213 13:43:02.660123 2665 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:43:02.660550 kubelet[2665]: I1213 13:43:02.660277 2665 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-0-0-c-ef2c5deb25.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:43:02.660758 kubelet[2665]: I1213 13:43:02.660744 2665 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:43:02.660825 kubelet[2665]: I1213 13:43:02.660817 2665 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:43:02.660936 kubelet[2665]: I1213 
13:43:02.660926 2665 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:43:02.661176 kubelet[2665]: I1213 13:43:02.661145 2665 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:43:02.661274 kubelet[2665]: I1213 13:43:02.661263 2665 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:43:02.661380 kubelet[2665]: I1213 13:43:02.661370 2665 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:43:02.661477 kubelet[2665]: I1213 13:43:02.661467 2665 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:43:02.666618 kubelet[2665]: I1213 13:43:02.666596 2665 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:43:02.674821 kubelet[2665]: I1213 13:43:02.674791 2665 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:43:02.675378 kubelet[2665]: I1213 13:43:02.675364 2665 server.go:1264] "Started kubelet" Dec 13 13:43:02.679569 kubelet[2665]: I1213 13:43:02.677507 2665 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:43:02.692220 kubelet[2665]: I1213 13:43:02.692177 2665 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:43:02.693371 kubelet[2665]: I1213 13:43:02.693357 2665 server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:43:02.696648 kubelet[2665]: I1213 13:43:02.696596 2665 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:43:02.699434 kubelet[2665]: I1213 13:43:02.696926 2665 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:43:02.715929 kubelet[2665]: I1213 13:43:02.704833 2665 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:43:02.716059 kubelet[2665]: I1213 13:43:02.704853 2665 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 13:43:02.716241 kubelet[2665]: I1213 13:43:02.716215 2665 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:43:02.725802 kubelet[2665]: I1213 13:43:02.725772 2665 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:43:02.727151 kubelet[2665]: I1213 13:43:02.726066 2665 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:43:02.730198 kubelet[2665]: I1213 13:43:02.730161 2665 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:43:02.730394 kubelet[2665]: I1213 13:43:02.730359 2665 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:43:02.734983 kubelet[2665]: E1213 13:43:02.734947 2665 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:43:02.735297 kubelet[2665]: I1213 13:43:02.735271 2665 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:43:02.735398 kubelet[2665]: I1213 13:43:02.735388 2665 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:43:02.735476 kubelet[2665]: I1213 13:43:02.735466 2665 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:43:02.735868 kubelet[2665]: E1213 13:43:02.735591 2665 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:43:02.791474 kubelet[2665]: I1213 13:43:02.791368 2665 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:43:02.791474 kubelet[2665]: I1213 13:43:02.791386 2665 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:43:02.791474 kubelet[2665]: I1213 13:43:02.791410 2665 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:43:02.791700 kubelet[2665]: I1213 13:43:02.791617 2665 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:43:02.791700 kubelet[2665]: I1213 13:43:02.791629 2665 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:43:02.791700 kubelet[2665]: I1213 13:43:02.791653 2665 policy_none.go:49] "None policy: Start" Dec 13 13:43:02.792805 kubelet[2665]: I1213 13:43:02.792482 2665 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:43:02.792805 kubelet[2665]: I1213 13:43:02.792581 2665 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:43:02.792805 kubelet[2665]: I1213 13:43:02.792743 2665 state_mem.go:75] "Updated machine memory state" Dec 13 13:43:02.801174 kubelet[2665]: I1213 13:43:02.799876 2665 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:43:02.801174 kubelet[2665]: I1213 13:43:02.801110 2665 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:43:02.801624 kubelet[2665]: I1213 13:43:02.801477 2665 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:43:02.826216 kubelet[2665]: I1213 13:43:02.826187 2665 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:02.837775 kubelet[2665]: I1213 13:43:02.836765 2665 topology_manager.go:215] "Topology Admit Handler" podUID="ec0dc4a3a8ad73b8f5e9935089f660a7" podNamespace="kube-system" podName="kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:02.837775 kubelet[2665]: I1213 13:43:02.836886 2665 topology_manager.go:215] "Topology Admit Handler" podUID="d58c5cf39f56dbaa6178591c6d896010" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:02.837775 kubelet[2665]: I1213 13:43:02.836945 2665 topology_manager.go:215] "Topology Admit Handler" podUID="94966141fbfeebec5767f580797291bc" podNamespace="kube-system" podName="kube-scheduler-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:02.855943 kubelet[2665]: W1213 13:43:02.855459 2665 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 13:43:02.861134 kubelet[2665]: W1213 13:43:02.859376 2665 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 13:43:02.861134 kubelet[2665]: W1213 13:43:02.860729 2665 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can 
result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 13:43:02.861264 kubelet[2665]: I1213 13:43:02.861172 2665 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:02.861264 kubelet[2665]: I1213 13:43:02.861235 2665 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:02.923685 kubelet[2665]: I1213 13:43:02.923342 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec0dc4a3a8ad73b8f5e9935089f660a7-ca-certs\") pod \"kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"ec0dc4a3a8ad73b8f5e9935089f660a7\") " pod="kube-system/kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:02.923685 kubelet[2665]: I1213 13:43:02.923384 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d58c5cf39f56dbaa6178591c6d896010-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"d58c5cf39f56dbaa6178591c6d896010\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:02.923685 kubelet[2665]: I1213 13:43:02.923410 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d58c5cf39f56dbaa6178591c6d896010-kubeconfig\") pod \"kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"d58c5cf39f56dbaa6178591c6d896010\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:02.923685 kubelet[2665]: I1213 13:43:02.923433 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d58c5cf39f56dbaa6178591c6d896010-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"d58c5cf39f56dbaa6178591c6d896010\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:02.923908 kubelet[2665]: I1213 13:43:02.923467 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec0dc4a3a8ad73b8f5e9935089f660a7-k8s-certs\") pod \"kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"ec0dc4a3a8ad73b8f5e9935089f660a7\") " pod="kube-system/kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:02.923908 kubelet[2665]: I1213 13:43:02.923487 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec0dc4a3a8ad73b8f5e9935089f660a7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"ec0dc4a3a8ad73b8f5e9935089f660a7\") " pod="kube-system/kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:02.923908 kubelet[2665]: I1213 13:43:02.923522 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d58c5cf39f56dbaa6178591c6d896010-ca-certs\") pod \"kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"d58c5cf39f56dbaa6178591c6d896010\") " 
pod="kube-system/kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:02.923908 kubelet[2665]: I1213 13:43:02.923544 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d58c5cf39f56dbaa6178591c6d896010-k8s-certs\") pod \"kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"d58c5cf39f56dbaa6178591c6d896010\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:02.923908 kubelet[2665]: I1213 13:43:02.923562 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94966141fbfeebec5767f580797291bc-kubeconfig\") pod \"kube-scheduler-ci-4186-0-0-c-ef2c5deb25.novalocal\" (UID: \"94966141fbfeebec5767f580797291bc\") " pod="kube-system/kube-scheduler-ci-4186-0-0-c-ef2c5deb25.novalocal" Dec 13 13:43:03.662555 kubelet[2665]: I1213 13:43:03.662490 2665 apiserver.go:52] "Watching apiserver" Dec 13 13:43:03.716844 kubelet[2665]: I1213 13:43:03.716808 2665 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:43:03.801325 kubelet[2665]: I1213 13:43:03.801042 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-0-0-c-ef2c5deb25.novalocal" podStartSLOduration=1.801011202 podStartE2EDuration="1.801011202s" podCreationTimestamp="2024-12-13 13:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:43:03.801004269 +0000 UTC m=+1.273913813" watchObservedRunningTime="2024-12-13 13:43:03.801011202 +0000 UTC m=+1.273920776" Dec 13 13:43:03.843436 kubelet[2665]: I1213 13:43:03.842978 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-0-0-c-ef2c5deb25.novalocal" podStartSLOduration=1.84295828 podStartE2EDuration="1.84295828s" podCreationTimestamp="2024-12-13 13:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:43:03.841544438 +0000 UTC m=+1.314453992" watchObservedRunningTime="2024-12-13 13:43:03.84295828 +0000 UTC m=+1.315867824" Dec 13 13:43:03.843436 kubelet[2665]: I1213 13:43:03.843137 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-0-0-c-ef2c5deb25.novalocal" podStartSLOduration=1.843129902 podStartE2EDuration="1.843129902s" podCreationTimestamp="2024-12-13 13:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:43:03.830487734 +0000 UTC m=+1.303397278" watchObservedRunningTime="2024-12-13 13:43:03.843129902 +0000 UTC m=+1.316039466" Dec 13 13:43:04.022255 sudo[1648]: pam_unix(sudo:session): session closed for user root Dec 13 13:43:04.233435 sshd[1647]: Connection closed by 172.24.4.1 port 60714 Dec 13 13:43:04.235991 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Dec 13 13:43:04.243373 systemd[1]: sshd@6-172.24.4.155:22-172.24.4.1:60714.service: Deactivated successfully. Dec 13 13:43:04.248677 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:43:04.249267 systemd[1]: session-9.scope: Consumed 7.449s CPU time, 186.9M memory peak, 0B memory swap peak. 
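Note: the podStartSLOduration figures above are essentially the gap between each static pod's creation timestamp and the moment it was first observed running (no image pulls are counted here, since the pull timestamps are zero). Reconstructing the kube-controller-manager number from the values copied out of the log; small drift against the reported 1.801011202s is expected because the tracker samples its clock separately:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2024-12-13T13:43:02Z")
	running, _ := time.Parse(time.RFC3339Nano, "2024-12-13T13:43:03.801004269Z")
	// Prints roughly 1.801s, matching the logged podStartSLOduration.
	fmt.Println("kube-controller-manager start SLO duration ~", running.Sub(created))
}
```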
Dec 13 13:43:04.253040 systemd-logind[1435]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:43:04.255662 systemd-logind[1435]: Removed session 9. Dec 13 13:43:15.085258 kubelet[2665]: I1213 13:43:15.085210 2665 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:43:15.086209 containerd[1447]: time="2024-12-13T13:43:15.086051518Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 13:43:15.086445 kubelet[2665]: I1213 13:43:15.086309 2665 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:43:15.855943 kubelet[2665]: I1213 13:43:15.855897 2665 topology_manager.go:215] "Topology Admit Handler" podUID="e3bc7edf-4811-4023-94b6-5538f1c5de34" podNamespace="kube-flannel" podName="kube-flannel-ds-jc9ff" Dec 13 13:43:15.860538 kubelet[2665]: I1213 13:43:15.858992 2665 topology_manager.go:215] "Topology Admit Handler" podUID="a3f32d86-56ec-4fd4-a92d-26049879c38a" podNamespace="kube-system" podName="kube-proxy-bc69v" Dec 13 13:43:15.867117 systemd[1]: Created slice kubepods-burstable-pode3bc7edf_4811_4023_94b6_5538f1c5de34.slice - libcontainer container kubepods-burstable-pode3bc7edf_4811_4023_94b6_5538f1c5de34.slice. Dec 13 13:43:15.878239 systemd[1]: Created slice kubepods-besteffort-poda3f32d86_56ec_4fd4_a92d_26049879c38a.slice - libcontainer container kubepods-besteffort-poda3f32d86_56ec_4fd4_a92d_26049879c38a.slice. Dec 13 13:43:15.882050 kubelet[2665]: W1213 13:43:15.881915 2665 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186-0-0-c-ef2c5deb25.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-0-0-c-ef2c5deb25.novalocal' and this object Dec 13 13:43:15.882050 kubelet[2665]: E1213 13:43:15.881954 2665 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186-0-0-c-ef2c5deb25.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-0-0-c-ef2c5deb25.novalocal' and this object Dec 13 13:43:15.913563 kubelet[2665]: I1213 13:43:15.913506 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/e3bc7edf-4811-4023-94b6-5538f1c5de34-cni\") pod \"kube-flannel-ds-jc9ff\" (UID: \"e3bc7edf-4811-4023-94b6-5538f1c5de34\") " pod="kube-flannel/kube-flannel-ds-jc9ff" Dec 13 13:43:15.913563 kubelet[2665]: I1213 13:43:15.913563 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/e3bc7edf-4811-4023-94b6-5538f1c5de34-flannel-cfg\") pod \"kube-flannel-ds-jc9ff\" (UID: \"e3bc7edf-4811-4023-94b6-5538f1c5de34\") " pod="kube-flannel/kube-flannel-ds-jc9ff" Dec 13 13:43:15.913789 kubelet[2665]: I1213 13:43:15.913585 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a3f32d86-56ec-4fd4-a92d-26049879c38a-kube-proxy\") pod \"kube-proxy-bc69v\" (UID: \"a3f32d86-56ec-4fd4-a92d-26049879c38a\") " pod="kube-system/kube-proxy-bc69v" Dec 13 13:43:15.913789 kubelet[2665]: I1213 
13:43:15.913607 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3f32d86-56ec-4fd4-a92d-26049879c38a-xtables-lock\") pod \"kube-proxy-bc69v\" (UID: \"a3f32d86-56ec-4fd4-a92d-26049879c38a\") " pod="kube-system/kube-proxy-bc69v" Dec 13 13:43:15.913789 kubelet[2665]: I1213 13:43:15.913629 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3f32d86-56ec-4fd4-a92d-26049879c38a-lib-modules\") pod \"kube-proxy-bc69v\" (UID: \"a3f32d86-56ec-4fd4-a92d-26049879c38a\") " pod="kube-system/kube-proxy-bc69v" Dec 13 13:43:15.913789 kubelet[2665]: I1213 13:43:15.913651 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t62p9\" (UniqueName: \"kubernetes.io/projected/a3f32d86-56ec-4fd4-a92d-26049879c38a-kube-api-access-t62p9\") pod \"kube-proxy-bc69v\" (UID: \"a3f32d86-56ec-4fd4-a92d-26049879c38a\") " pod="kube-system/kube-proxy-bc69v" Dec 13 13:43:15.913789 kubelet[2665]: I1213 13:43:15.913694 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e3bc7edf-4811-4023-94b6-5538f1c5de34-run\") pod \"kube-flannel-ds-jc9ff\" (UID: \"e3bc7edf-4811-4023-94b6-5538f1c5de34\") " pod="kube-flannel/kube-flannel-ds-jc9ff" Dec 13 13:43:15.913956 kubelet[2665]: I1213 13:43:15.913715 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rbln\" (UniqueName: \"kubernetes.io/projected/e3bc7edf-4811-4023-94b6-5538f1c5de34-kube-api-access-8rbln\") pod \"kube-flannel-ds-jc9ff\" (UID: \"e3bc7edf-4811-4023-94b6-5538f1c5de34\") " pod="kube-flannel/kube-flannel-ds-jc9ff" Dec 13 13:43:15.913956 kubelet[2665]: I1213 13:43:15.913735 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/e3bc7edf-4811-4023-94b6-5538f1c5de34-cni-plugin\") pod \"kube-flannel-ds-jc9ff\" (UID: \"e3bc7edf-4811-4023-94b6-5538f1c5de34\") " pod="kube-flannel/kube-flannel-ds-jc9ff" Dec 13 13:43:15.913956 kubelet[2665]: I1213 13:43:15.913755 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3bc7edf-4811-4023-94b6-5538f1c5de34-xtables-lock\") pod \"kube-flannel-ds-jc9ff\" (UID: \"e3bc7edf-4811-4023-94b6-5538f1c5de34\") " pod="kube-flannel/kube-flannel-ds-jc9ff" Dec 13 13:43:16.025544 kubelet[2665]: E1213 13:43:16.024291 2665 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 13:43:16.025544 kubelet[2665]: E1213 13:43:16.024332 2665 projected.go:200] Error preparing data for projected volume kube-api-access-t62p9 for pod kube-system/kube-proxy-bc69v: configmap "kube-root-ca.crt" not found Dec 13 13:43:16.025544 kubelet[2665]: E1213 13:43:16.024408 2665 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3f32d86-56ec-4fd4-a92d-26049879c38a-kube-api-access-t62p9 podName:a3f32d86-56ec-4fd4-a92d-26049879c38a nodeName:}" failed. No retries permitted until 2024-12-13 13:43:16.524384296 +0000 UTC m=+13.997293850 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t62p9" (UniqueName: "kubernetes.io/projected/a3f32d86-56ec-4fd4-a92d-26049879c38a-kube-api-access-t62p9") pod "kube-proxy-bc69v" (UID: "a3f32d86-56ec-4fd4-a92d-26049879c38a") : configmap "kube-root-ca.crt" not found Dec 13 13:43:16.032884 kubelet[2665]: E1213 13:43:16.032575 2665 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 13:43:16.032884 kubelet[2665]: E1213 13:43:16.032606 2665 projected.go:200] Error preparing data for projected volume kube-api-access-8rbln for pod kube-flannel/kube-flannel-ds-jc9ff: configmap "kube-root-ca.crt" not found Dec 13 13:43:16.032884 kubelet[2665]: E1213 13:43:16.032666 2665 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3bc7edf-4811-4023-94b6-5538f1c5de34-kube-api-access-8rbln podName:e3bc7edf-4811-4023-94b6-5538f1c5de34 nodeName:}" failed. No retries permitted until 2024-12-13 13:43:16.532642786 +0000 UTC m=+14.005552340 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8rbln" (UniqueName: "kubernetes.io/projected/e3bc7edf-4811-4023-94b6-5538f1c5de34-kube-api-access-8rbln") pod "kube-flannel-ds-jc9ff" (UID: "e3bc7edf-4811-4023-94b6-5538f1c5de34") : configmap "kube-root-ca.crt" not found Dec 13 13:43:16.775104 containerd[1447]: time="2024-12-13T13:43:16.774166454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-jc9ff,Uid:e3bc7edf-4811-4023-94b6-5538f1c5de34,Namespace:kube-flannel,Attempt:0,}" Dec 13 13:43:16.847964 containerd[1447]: time="2024-12-13T13:43:16.847745382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:43:16.847964 containerd[1447]: time="2024-12-13T13:43:16.847874113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:43:16.847964 containerd[1447]: time="2024-12-13T13:43:16.847909209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:43:16.848471 containerd[1447]: time="2024-12-13T13:43:16.848072306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:43:16.892736 systemd[1]: Started cri-containerd-b8d0e0ae33a212ec97b3c322c0a7be4f8d2f4b2f0a94db180509941c7ae07800.scope - libcontainer container b8d0e0ae33a212ec97b3c322c0a7be4f8d2f4b2f0a94db180509941c7ae07800. 
Dec 13 13:43:16.939059 containerd[1447]: time="2024-12-13T13:43:16.938927971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-jc9ff,Uid:e3bc7edf-4811-4023-94b6-5538f1c5de34,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"b8d0e0ae33a212ec97b3c322c0a7be4f8d2f4b2f0a94db180509941c7ae07800\"" Dec 13 13:43:16.942487 containerd[1447]: time="2024-12-13T13:43:16.942449536Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 13:43:17.016969 kubelet[2665]: E1213 13:43:17.016854 2665 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 13 13:43:17.016969 kubelet[2665]: E1213 13:43:17.016958 2665 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3f32d86-56ec-4fd4-a92d-26049879c38a-kube-proxy podName:a3f32d86-56ec-4fd4-a92d-26049879c38a nodeName:}" failed. No retries permitted until 2024-12-13 13:43:17.516934363 +0000 UTC m=+14.989843918 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/a3f32d86-56ec-4fd4-a92d-26049879c38a-kube-proxy") pod "kube-proxy-bc69v" (UID: "a3f32d86-56ec-4fd4-a92d-26049879c38a") : failed to sync configmap cache: timed out waiting for the condition Dec 13 13:43:17.690659 containerd[1447]: time="2024-12-13T13:43:17.689946356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bc69v,Uid:a3f32d86-56ec-4fd4-a92d-26049879c38a,Namespace:kube-system,Attempt:0,}" Dec 13 13:43:17.767756 containerd[1447]: time="2024-12-13T13:43:17.767614744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:43:17.768440 containerd[1447]: time="2024-12-13T13:43:17.768367015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:43:17.768702 containerd[1447]: time="2024-12-13T13:43:17.768549968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:43:17.768987 containerd[1447]: time="2024-12-13T13:43:17.768908320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:43:17.790823 systemd[1]: run-containerd-runc-k8s.io-f200097118c44497b0052cd4d050e568e8e37d5b88f3891e8e80445c38e646bd-runc.reeex1.mount: Deactivated successfully. Dec 13 13:43:17.802691 systemd[1]: Started cri-containerd-f200097118c44497b0052cd4d050e568e8e37d5b88f3891e8e80445c38e646bd.scope - libcontainer container f200097118c44497b0052cd4d050e568e8e37d5b88f3891e8e80445c38e646bd. 
Dec 13 13:43:17.836252 containerd[1447]: time="2024-12-13T13:43:17.836102370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bc69v,Uid:a3f32d86-56ec-4fd4-a92d-26049879c38a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f200097118c44497b0052cd4d050e568e8e37d5b88f3891e8e80445c38e646bd\"" Dec 13 13:43:17.842646 containerd[1447]: time="2024-12-13T13:43:17.842411064Z" level=info msg="CreateContainer within sandbox \"f200097118c44497b0052cd4d050e568e8e37d5b88f3891e8e80445c38e646bd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:43:17.865635 containerd[1447]: time="2024-12-13T13:43:17.865493131Z" level=info msg="CreateContainer within sandbox \"f200097118c44497b0052cd4d050e568e8e37d5b88f3891e8e80445c38e646bd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"36afe066ace1a719574ba473f6c471029013544dc7b727a5adbaa7d8198ead9d\"" Dec 13 13:43:17.866266 containerd[1447]: time="2024-12-13T13:43:17.866205807Z" level=info msg="StartContainer for \"36afe066ace1a719574ba473f6c471029013544dc7b727a5adbaa7d8198ead9d\"" Dec 13 13:43:17.900687 systemd[1]: Started cri-containerd-36afe066ace1a719574ba473f6c471029013544dc7b727a5adbaa7d8198ead9d.scope - libcontainer container 36afe066ace1a719574ba473f6c471029013544dc7b727a5adbaa7d8198ead9d. Dec 13 13:43:17.941483 containerd[1447]: time="2024-12-13T13:43:17.941339010Z" level=info msg="StartContainer for \"36afe066ace1a719574ba473f6c471029013544dc7b727a5adbaa7d8198ead9d\" returns successfully" Dec 13 13:43:18.724981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount206811365.mount: Deactivated successfully. Dec 13 13:43:18.843782 kubelet[2665]: I1213 13:43:18.843333 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bc69v" podStartSLOduration=3.84250383 podStartE2EDuration="3.84250383s" podCreationTimestamp="2024-12-13 13:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:43:18.839493583 +0000 UTC m=+16.312403177" watchObservedRunningTime="2024-12-13 13:43:18.84250383 +0000 UTC m=+16.315413414" Dec 13 13:43:19.197350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949016501.mount: Deactivated successfully. 
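The systemd mount units above, such as var-lib-containerd-tmpmounts-containerd\x2dmount206811365.mount, are escaped forms of paths under /var/lib/containerd/tmpmounts/. The following is a rough Go re-implementation of that path escaping, sufficient for the unit names in this log; it is not systemd's own code and may differ in corner cases.

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath mimics systemd's path escaping closely enough for the unit names
// in this log: drop the leading '/', turn the remaining '/' into '-', keep
// [A-Za-z0-9_.] as-is, and hex-escape everything else (so '-' becomes \x2d).
func escapePath(path string) string {
	p := strings.TrimPrefix(path, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the mount unit name from the log (plus its ".mount" suffix).
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount206811365") + ".mount")
}
```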
Dec 13 13:43:19.320294 containerd[1447]: time="2024-12-13T13:43:19.320178753Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:43:19.321912 containerd[1447]: time="2024-12-13T13:43:19.321579711Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Dec 13 13:43:19.324124 containerd[1447]: time="2024-12-13T13:43:19.324064501Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:43:19.331628 containerd[1447]: time="2024-12-13T13:43:19.330991264Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:43:19.332865 containerd[1447]: time="2024-12-13T13:43:19.332828440Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.390316467s" Dec 13 13:43:19.332992 containerd[1447]: time="2024-12-13T13:43:19.332973723Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 13 13:43:19.337700 containerd[1447]: time="2024-12-13T13:43:19.337022146Z" level=info msg="CreateContainer within sandbox \"b8d0e0ae33a212ec97b3c322c0a7be4f8d2f4b2f0a94db180509941c7ae07800\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 13:43:19.467697 containerd[1447]: time="2024-12-13T13:43:19.467451257Z" level=info msg="CreateContainer within sandbox \"b8d0e0ae33a212ec97b3c322c0a7be4f8d2f4b2f0a94db180509941c7ae07800\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"0940a0922cc18d013195c1bf28331e02bbfa124f2bbca83dfcf73c1230078b2d\"" Dec 13 13:43:19.470511 containerd[1447]: time="2024-12-13T13:43:19.469990229Z" level=info msg="StartContainer for \"0940a0922cc18d013195c1bf28331e02bbfa124f2bbca83dfcf73c1230078b2d\"" Dec 13 13:43:19.531181 systemd[1]: Started cri-containerd-0940a0922cc18d013195c1bf28331e02bbfa124f2bbca83dfcf73c1230078b2d.scope - libcontainer container 0940a0922cc18d013195c1bf28331e02bbfa124f2bbca83dfcf73c1230078b2d. Dec 13 13:43:19.630696 systemd[1]: cri-containerd-0940a0922cc18d013195c1bf28331e02bbfa124f2bbca83dfcf73c1230078b2d.scope: Deactivated successfully. 
Dec 13 13:43:19.651925 containerd[1447]: time="2024-12-13T13:43:19.651835422Z" level=info msg="StartContainer for \"0940a0922cc18d013195c1bf28331e02bbfa124f2bbca83dfcf73c1230078b2d\" returns successfully" Dec 13 13:43:19.727670 containerd[1447]: time="2024-12-13T13:43:19.727557888Z" level=info msg="shim disconnected" id=0940a0922cc18d013195c1bf28331e02bbfa124f2bbca83dfcf73c1230078b2d namespace=k8s.io Dec 13 13:43:19.728449 containerd[1447]: time="2024-12-13T13:43:19.728104713Z" level=warning msg="cleaning up after shim disconnected" id=0940a0922cc18d013195c1bf28331e02bbfa124f2bbca83dfcf73c1230078b2d namespace=k8s.io Dec 13 13:43:19.728449 containerd[1447]: time="2024-12-13T13:43:19.728172481Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:43:19.827133 containerd[1447]: time="2024-12-13T13:43:19.825070565Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 13:43:22.071626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683312217.mount: Deactivated successfully. Dec 13 13:43:23.057487 containerd[1447]: time="2024-12-13T13:43:23.057441744Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:43:23.059330 containerd[1447]: time="2024-12-13T13:43:23.059297074Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Dec 13 13:43:23.060799 containerd[1447]: time="2024-12-13T13:43:23.060736334Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:43:23.064883 containerd[1447]: time="2024-12-13T13:43:23.064383885Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:43:23.066123 containerd[1447]: time="2024-12-13T13:43:23.065965983Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.23928124s" Dec 13 13:43:23.066123 containerd[1447]: time="2024-12-13T13:43:23.066002962Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 13 13:43:23.070468 containerd[1447]: time="2024-12-13T13:43:23.070417292Z" level=info msg="CreateContainer within sandbox \"b8d0e0ae33a212ec97b3c322c0a7be4f8d2f4b2f0a94db180509941c7ae07800\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 13:43:23.089460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount503189152.mount: Deactivated successfully. 
Dec 13 13:43:23.095035 containerd[1447]: time="2024-12-13T13:43:23.093855084Z" level=info msg="CreateContainer within sandbox \"b8d0e0ae33a212ec97b3c322c0a7be4f8d2f4b2f0a94db180509941c7ae07800\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"548d9e92adc0042ec843b7aac741dade4ea1518bf9fea07d30516ea6a4a45390\"" Dec 13 13:43:23.104863 containerd[1447]: time="2024-12-13T13:43:23.102875182Z" level=info msg="StartContainer for \"548d9e92adc0042ec843b7aac741dade4ea1518bf9fea07d30516ea6a4a45390\"" Dec 13 13:43:23.156813 systemd[1]: Started cri-containerd-548d9e92adc0042ec843b7aac741dade4ea1518bf9fea07d30516ea6a4a45390.scope - libcontainer container 548d9e92adc0042ec843b7aac741dade4ea1518bf9fea07d30516ea6a4a45390. Dec 13 13:43:23.185782 systemd[1]: cri-containerd-548d9e92adc0042ec843b7aac741dade4ea1518bf9fea07d30516ea6a4a45390.scope: Deactivated successfully. Dec 13 13:43:23.187559 containerd[1447]: time="2024-12-13T13:43:23.187273998Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3bc7edf_4811_4023_94b6_5538f1c5de34.slice/cri-containerd-548d9e92adc0042ec843b7aac741dade4ea1518bf9fea07d30516ea6a4a45390.scope/cgroup.events\": no such file or directory" Dec 13 13:43:23.191433 containerd[1447]: time="2024-12-13T13:43:23.191127696Z" level=info msg="StartContainer for \"548d9e92adc0042ec843b7aac741dade4ea1518bf9fea07d30516ea6a4a45390\" returns successfully" Dec 13 13:43:23.260280 kubelet[2665]: I1213 13:43:23.260202 2665 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:43:23.797125 containerd[1447]: time="2024-12-13T13:43:23.796967417Z" level=info msg="shim disconnected" id=548d9e92adc0042ec843b7aac741dade4ea1518bf9fea07d30516ea6a4a45390 namespace=k8s.io Dec 13 13:43:23.797125 containerd[1447]: time="2024-12-13T13:43:23.797097290Z" level=warning msg="cleaning up after shim disconnected" id=548d9e92adc0042ec843b7aac741dade4ea1518bf9fea07d30516ea6a4a45390 namespace=k8s.io Dec 13 13:43:23.797468 containerd[1447]: time="2024-12-13T13:43:23.797133128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:43:23.808572 kubelet[2665]: I1213 13:43:23.808079 2665 topology_manager.go:215] "Topology Admit Handler" podUID="94af2596-dc85-49c4-9c21-b29588fbaff0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hkv62" Dec 13 13:43:23.808572 kubelet[2665]: I1213 13:43:23.808469 2665 topology_manager.go:215] "Topology Admit Handler" podUID="fe24d0b4-c6ad-4193-8b04-d3444aa00a66" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9mkhn" Dec 13 13:43:23.846425 systemd[1]: Created slice kubepods-burstable-podfe24d0b4_c6ad_4193_8b04_d3444aa00a66.slice - libcontainer container kubepods-burstable-podfe24d0b4_c6ad_4193_8b04_d3444aa00a66.slice. Dec 13 13:43:23.872861 systemd[1]: Created slice kubepods-burstable-pod94af2596_dc85_49c4_9c21_b29588fbaff0.slice - libcontainer container kubepods-burstable-pod94af2596_dc85_49c4_9c21_b29588fbaff0.slice. 
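The "Created slice kubepods-burstable-pod....slice" entries above derive the cgroup slice name from the pod UID: the QoS class becomes a prefix and the dashes in the UID become underscores. A small Go sketch of that mapping as it appears in this log (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// burstableSliceName maps a pod UID to the systemd slice name used for a
// Burstable-QoS pod, matching the names seen in the log: dashes in the UID
// become underscores and the kubepods-burstable-pod prefix is added.
func burstableSliceName(podUID string) string {
	return "kubepods-burstable-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(burstableSliceName("fe24d0b4-c6ad-4193-8b04-d3444aa00a66"))
	fmt.Println(burstableSliceName("94af2596-dc85-49c4-9c21-b29588fbaff0"))
	// kubepods-burstable-podfe24d0b4_c6ad_4193_8b04_d3444aa00a66.slice
	// kubepods-burstable-pod94af2596_dc85_49c4_9c21_b29588fbaff0.slice
}
```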
Dec 13 13:43:23.898021 kubelet[2665]: I1213 13:43:23.897972 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94af2596-dc85-49c4-9c21-b29588fbaff0-config-volume\") pod \"coredns-7db6d8ff4d-hkv62\" (UID: \"94af2596-dc85-49c4-9c21-b29588fbaff0\") " pod="kube-system/coredns-7db6d8ff4d-hkv62" Dec 13 13:43:23.898169 kubelet[2665]: I1213 13:43:23.898044 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b4p4\" (UniqueName: \"kubernetes.io/projected/94af2596-dc85-49c4-9c21-b29588fbaff0-kube-api-access-9b4p4\") pod \"coredns-7db6d8ff4d-hkv62\" (UID: \"94af2596-dc85-49c4-9c21-b29588fbaff0\") " pod="kube-system/coredns-7db6d8ff4d-hkv62" Dec 13 13:43:23.898169 kubelet[2665]: I1213 13:43:23.898077 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe24d0b4-c6ad-4193-8b04-d3444aa00a66-config-volume\") pod \"coredns-7db6d8ff4d-9mkhn\" (UID: \"fe24d0b4-c6ad-4193-8b04-d3444aa00a66\") " pod="kube-system/coredns-7db6d8ff4d-9mkhn" Dec 13 13:43:23.898169 kubelet[2665]: I1213 13:43:23.898107 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl5hp\" (UniqueName: \"kubernetes.io/projected/fe24d0b4-c6ad-4193-8b04-d3444aa00a66-kube-api-access-bl5hp\") pod \"coredns-7db6d8ff4d-9mkhn\" (UID: \"fe24d0b4-c6ad-4193-8b04-d3444aa00a66\") " pod="kube-system/coredns-7db6d8ff4d-9mkhn" Dec 13 13:43:24.095450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-548d9e92adc0042ec843b7aac741dade4ea1518bf9fea07d30516ea6a4a45390-rootfs.mount: Deactivated successfully. 
Dec 13 13:43:24.171156 containerd[1447]: time="2024-12-13T13:43:24.170375129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9mkhn,Uid:fe24d0b4-c6ad-4193-8b04-d3444aa00a66,Namespace:kube-system,Attempt:0,}" Dec 13 13:43:24.184396 containerd[1447]: time="2024-12-13T13:43:24.183813035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hkv62,Uid:94af2596-dc85-49c4-9c21-b29588fbaff0,Namespace:kube-system,Attempt:0,}" Dec 13 13:43:24.302309 containerd[1447]: time="2024-12-13T13:43:24.302231148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hkv62,Uid:94af2596-dc85-49c4-9c21-b29588fbaff0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d9a7e71e41047e0bb4bf8cbca9f1e2623a61be3bfaa8f5efbd968a3688e5051c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 13:43:24.302740 kubelet[2665]: E1213 13:43:24.302693 2665 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a7e71e41047e0bb4bf8cbca9f1e2623a61be3bfaa8f5efbd968a3688e5051c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 13:43:24.303397 kubelet[2665]: E1213 13:43:24.303122 2665 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a7e71e41047e0bb4bf8cbca9f1e2623a61be3bfaa8f5efbd968a3688e5051c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-hkv62" Dec 13 13:43:24.303397 kubelet[2665]: E1213 13:43:24.303151 2665 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a7e71e41047e0bb4bf8cbca9f1e2623a61be3bfaa8f5efbd968a3688e5051c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-hkv62" Dec 13 13:43:24.303397 kubelet[2665]: E1213 13:43:24.303202 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hkv62_kube-system(94af2596-dc85-49c4-9c21-b29588fbaff0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hkv62_kube-system(94af2596-dc85-49c4-9c21-b29588fbaff0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9a7e71e41047e0bb4bf8cbca9f1e2623a61be3bfaa8f5efbd968a3688e5051c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-hkv62" podUID="94af2596-dc85-49c4-9c21-b29588fbaff0" Dec 13 13:43:24.308378 containerd[1447]: time="2024-12-13T13:43:24.308270696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9mkhn,Uid:fe24d0b4-c6ad-4193-8b04-d3444aa00a66,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a488cc05c4fa5d77df354fbf83b7932b6d01d8086af61d3cbf25291e60869734\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 13:43:24.308734 kubelet[2665]: E1213 13:43:24.308569 2665 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a488cc05c4fa5d77df354fbf83b7932b6d01d8086af61d3cbf25291e60869734\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 13:43:24.308734 kubelet[2665]: E1213 13:43:24.308613 2665 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a488cc05c4fa5d77df354fbf83b7932b6d01d8086af61d3cbf25291e60869734\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-9mkhn" Dec 13 13:43:24.308734 kubelet[2665]: E1213 13:43:24.308631 2665 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a488cc05c4fa5d77df354fbf83b7932b6d01d8086af61d3cbf25291e60869734\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-9mkhn" Dec 13 13:43:24.308734 kubelet[2665]: E1213 13:43:24.308682 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9mkhn_kube-system(fe24d0b4-c6ad-4193-8b04-d3444aa00a66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9mkhn_kube-system(fe24d0b4-c6ad-4193-8b04-d3444aa00a66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a488cc05c4fa5d77df354fbf83b7932b6d01d8086af61d3cbf25291e60869734\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-9mkhn" podUID="fe24d0b4-c6ad-4193-8b04-d3444aa00a66" Dec 13 13:43:24.904445 containerd[1447]: time="2024-12-13T13:43:24.903954892Z" level=info msg="CreateContainer within sandbox \"b8d0e0ae33a212ec97b3c322c0a7be4f8d2f4b2f0a94db180509941c7ae07800\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 13:43:24.937501 containerd[1447]: time="2024-12-13T13:43:24.937401457Z" level=info msg="CreateContainer within sandbox \"b8d0e0ae33a212ec97b3c322c0a7be4f8d2f4b2f0a94db180509941c7ae07800\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"984a3549a1c4772fb666d3d676f9dfa912dd19493f8a3dab3515ddd4b2f60ef2\"" Dec 13 13:43:24.940159 containerd[1447]: time="2024-12-13T13:43:24.940090931Z" level=info msg="StartContainer for \"984a3549a1c4772fb666d3d676f9dfa912dd19493f8a3dab3515ddd4b2f60ef2\"" Dec 13 13:43:24.999905 systemd[1]: Started cri-containerd-984a3549a1c4772fb666d3d676f9dfa912dd19493f8a3dab3515ddd4b2f60ef2.scope - libcontainer container 984a3549a1c4772fb666d3d676f9dfa912dd19493f8a3dab3515ddd4b2f60ef2. Dec 13 13:43:25.047634 containerd[1447]: time="2024-12-13T13:43:25.047534760Z" level=info msg="StartContainer for \"984a3549a1c4772fb666d3d676f9dfa912dd19493f8a3dab3515ddd4b2f60ef2\" returns successfully" Dec 13 13:43:25.086796 systemd[1]: run-netns-cni\x2db2fc0d4a\x2d8b69\x2d3bb9\x2db536\x2dedc7a25b9330.mount: Deactivated successfully. Dec 13 13:43:25.086904 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9a7e71e41047e0bb4bf8cbca9f1e2623a61be3bfaa8f5efbd968a3688e5051c-shm.mount: Deactivated successfully. 
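Both CoreDNS sandbox failures above happen because the flannel CNI plugin reads /run/flannel/subnet.env before flanneld has written it. That file is a plain KEY=VALUE env file; the keys below (FLANNEL_NETWORK, FLANNEL_SUBNET, FLANNEL_MTU, FLANNEL_IPMASQ) are the ones flannel normally writes, but the exact values for this node are an assumption, chosen to be consistent with the 192.168.0.0/24 subnet, the 192.168.0.0/17 route, and mtu 1450 that appear later in this log. A minimal Go parser sketch:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Example contents of /run/flannel/subnet.env once flanneld is running.
// These values are assumptions consistent with the CNI config seen later.
const subnetEnv = `FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
`

// parseSubnetEnv reads the KEY=VALUE lines into a map, skipping blank lines.
func parseSubnetEnv(s string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	env := parseSubnetEnv(subnetEnv)
	fmt.Println(env["FLANNEL_SUBNET"], env["FLANNEL_MTU"])
}
```

Once the kube-flannel container starts and writes this file, the same RunPodSandbox calls succeed, as the later entries show.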
Dec 13 13:43:25.086984 systemd[1]: run-netns-cni\x2d3c0c5b6e\x2d33e0\x2d034d\x2de364\x2d5a9ba3da0ae6.mount: Deactivated successfully. Dec 13 13:43:25.087048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a488cc05c4fa5d77df354fbf83b7932b6d01d8086af61d3cbf25291e60869734-shm.mount: Deactivated successfully. Dec 13 13:43:25.933722 kubelet[2665]: I1213 13:43:25.932301 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-jc9ff" podStartSLOduration=4.806013326 podStartE2EDuration="10.932266926s" podCreationTimestamp="2024-12-13 13:43:15 +0000 UTC" firstStartedPulling="2024-12-13 13:43:16.940995189 +0000 UTC m=+14.413904733" lastFinishedPulling="2024-12-13 13:43:23.067248789 +0000 UTC m=+20.540158333" observedRunningTime="2024-12-13 13:43:25.932046302 +0000 UTC m=+23.404955906" watchObservedRunningTime="2024-12-13 13:43:25.932266926 +0000 UTC m=+23.405176520" Dec 13 13:43:26.166296 systemd-networkd[1357]: flannel.1: Link UP Dec 13 13:43:26.166306 systemd-networkd[1357]: flannel.1: Gained carrier Dec 13 13:43:27.789857 systemd-networkd[1357]: flannel.1: Gained IPv6LL Dec 13 13:43:35.737706 containerd[1447]: time="2024-12-13T13:43:35.737551865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hkv62,Uid:94af2596-dc85-49c4-9c21-b29588fbaff0,Namespace:kube-system,Attempt:0,}" Dec 13 13:43:35.824318 systemd-networkd[1357]: cni0: Link UP Dec 13 13:43:35.824344 systemd-networkd[1357]: cni0: Gained carrier Dec 13 13:43:35.833313 systemd-networkd[1357]: cni0: Lost carrier Dec 13 13:43:35.854235 systemd-networkd[1357]: vethd7270f23: Link UP Dec 13 13:43:35.866246 kernel: cni0: port 1(vethd7270f23) entered blocking state Dec 13 13:43:35.866565 kernel: cni0: port 1(vethd7270f23) entered disabled state Dec 13 13:43:35.875564 kernel: vethd7270f23: entered allmulticast mode Dec 13 13:43:35.878293 kernel: vethd7270f23: entered promiscuous mode Dec 13 13:43:35.878424 kernel: cni0: port 1(vethd7270f23) entered blocking state Dec 13 13:43:35.878459 kernel: cni0: port 1(vethd7270f23) entered forwarding state Dec 13 13:43:35.880560 kernel: cni0: port 1(vethd7270f23) entered disabled state Dec 13 13:43:35.885620 kernel: cni0: port 1(vethd7270f23) entered blocking state Dec 13 13:43:35.885724 kernel: cni0: port 1(vethd7270f23) entered forwarding state Dec 13 13:43:35.885292 systemd-networkd[1357]: vethd7270f23: Gained carrier Dec 13 13:43:35.886091 systemd-networkd[1357]: cni0: Gained carrier Dec 13 13:43:35.888804 containerd[1447]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009c8e8), "name":"cbr0", "type":"bridge"} Dec 13 13:43:35.888804 containerd[1447]: delegateAdd: netconf sent to delegate plugin: Dec 13 13:43:35.913038 containerd[1447]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T13:43:35.912620140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:43:35.913038 containerd[1447]: time="2024-12-13T13:43:35.912719486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:43:35.913038 containerd[1447]: time="2024-12-13T13:43:35.912740155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:43:35.913038 containerd[1447]: time="2024-12-13T13:43:35.912855351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:43:35.943697 systemd[1]: Started cri-containerd-0432609908dd9bee9df6a4ed6400b055b4995a43574e32132ac24d47839c9f06.scope - libcontainer container 0432609908dd9bee9df6a4ed6400b055b4995a43574e32132ac24d47839c9f06. Dec 13 13:43:35.988555 containerd[1447]: time="2024-12-13T13:43:35.988393699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hkv62,Uid:94af2596-dc85-49c4-9c21-b29588fbaff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0432609908dd9bee9df6a4ed6400b055b4995a43574e32132ac24d47839c9f06\"" Dec 13 13:43:36.018049 containerd[1447]: time="2024-12-13T13:43:36.017985051Z" level=info msg="CreateContainer within sandbox \"0432609908dd9bee9df6a4ed6400b055b4995a43574e32132ac24d47839c9f06\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:43:36.057356 containerd[1447]: time="2024-12-13T13:43:36.056897664Z" level=info msg="CreateContainer within sandbox \"0432609908dd9bee9df6a4ed6400b055b4995a43574e32132ac24d47839c9f06\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"09f07dcfe22b84c5f7d0ace625b6cf27edab4394be5cb6022a1d67fb0400fad0\"" Dec 13 13:43:36.062736 containerd[1447]: time="2024-12-13T13:43:36.061854721Z" level=info msg="StartContainer for \"09f07dcfe22b84c5f7d0ace625b6cf27edab4394be5cb6022a1d67fb0400fad0\"" Dec 13 13:43:36.107691 systemd[1]: Started cri-containerd-09f07dcfe22b84c5f7d0ace625b6cf27edab4394be5cb6022a1d67fb0400fad0.scope - libcontainer container 09f07dcfe22b84c5f7d0ace625b6cf27edab4394be5cb6022a1d67fb0400fad0. Dec 13 13:43:36.164360 containerd[1447]: time="2024-12-13T13:43:36.163854920Z" level=info msg="StartContainer for \"09f07dcfe22b84c5f7d0ace625b6cf27edab4394be5cb6022a1d67fb0400fad0\" returns successfully" Dec 13 13:43:36.760215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206380618.mount: Deactivated successfully. 
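The delegateAdd output above is the bridge/host-local configuration flannel hands to the next CNI plugin. The Go sketch below rebuilds that same delegate config from the values printed in the log and marshals it to JSON; the struct types are illustrative, not the CNI library's own.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative structs mirroring the delegate config printed in the log.
type ipam struct {
	Type   string                `json:"type"`
	Ranges [][]map[string]string `json:"ranges"`
	Routes []map[string]string   `json:"routes"`
}

type netConf struct {
	CNIVersion       string `json:"cniVersion"`
	Name             string `json:"name"`
	Type             string `json:"type"`
	IsDefaultGateway bool   `json:"isDefaultGateway"`
	IsGateway        bool   `json:"isGateway"`
	IPMasq           bool   `json:"ipMasq"`
	HairpinMode      bool   `json:"hairpinMode"`
	MTU              int    `json:"mtu"`
	IPAM             ipam   `json:"ipam"`
}

func main() {
	// Values copied from the delegateAdd dump: bridge cbr0, host-local IPAM
	// with the node's 192.168.0.0/24 subnet and a 192.168.0.0/17 route.
	conf := netConf{
		CNIVersion:       "0.3.1",
		Name:             "cbr0",
		Type:             "bridge",
		IsDefaultGateway: true,
		IsGateway:        true,
		HairpinMode:      true,
		MTU:              1450,
		IPAM: ipam{
			Type:   "host-local",
			Ranges: [][]map[string]string{{{"subnet": "192.168.0.0/24"}}},
			Routes: []map[string]string{{"dst": "192.168.0.0/17"}},
		},
	}
	b, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(b))
}
```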
Dec 13 13:43:37.040349 kubelet[2665]: I1213 13:43:37.038853 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hkv62" podStartSLOduration=21.038812032 podStartE2EDuration="21.038812032s" podCreationTimestamp="2024-12-13 13:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:43:37.007388426 +0000 UTC m=+34.480298021" watchObservedRunningTime="2024-12-13 13:43:37.038812032 +0000 UTC m=+34.511721626" Dec 13 13:43:37.134678 systemd-networkd[1357]: vethd7270f23: Gained IPv6LL Dec 13 13:43:37.390148 systemd-networkd[1357]: cni0: Gained IPv6LL Dec 13 13:43:38.739496 containerd[1447]: time="2024-12-13T13:43:38.738755323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9mkhn,Uid:fe24d0b4-c6ad-4193-8b04-d3444aa00a66,Namespace:kube-system,Attempt:0,}" Dec 13 13:43:38.807219 systemd-networkd[1357]: veth87739077: Link UP Dec 13 13:43:38.811937 kernel: cni0: port 2(veth87739077) entered blocking state Dec 13 13:43:38.812064 kernel: cni0: port 2(veth87739077) entered disabled state Dec 13 13:43:38.813654 kernel: veth87739077: entered allmulticast mode Dec 13 13:43:38.815767 kernel: veth87739077: entered promiscuous mode Dec 13 13:43:38.827244 kernel: cni0: port 2(veth87739077) entered blocking state Dec 13 13:43:38.827369 kernel: cni0: port 2(veth87739077) entered forwarding state Dec 13 13:43:38.832790 systemd-networkd[1357]: veth87739077: Gained carrier Dec 13 13:43:38.838378 containerd[1447]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009c8e8), "name":"cbr0", "type":"bridge"} Dec 13 13:43:38.838378 containerd[1447]: delegateAdd: netconf sent to delegate plugin: Dec 13 13:43:38.862097 containerd[1447]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T13:43:38.861991095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:43:38.862265 containerd[1447]: time="2024-12-13T13:43:38.862074753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:43:38.862265 containerd[1447]: time="2024-12-13T13:43:38.862095683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:43:38.862265 containerd[1447]: time="2024-12-13T13:43:38.862201033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:43:38.887081 systemd[1]: run-containerd-runc-k8s.io-e3f5cf86df4c12349f0976bbc5715e8956a3a62ee3b0cb809e07e50f4185bace-runc.uXXMps.mount: Deactivated successfully. 
Dec 13 13:43:38.900687 systemd[1]: Started cri-containerd-e3f5cf86df4c12349f0976bbc5715e8956a3a62ee3b0cb809e07e50f4185bace.scope - libcontainer container e3f5cf86df4c12349f0976bbc5715e8956a3a62ee3b0cb809e07e50f4185bace. Dec 13 13:43:38.940863 containerd[1447]: time="2024-12-13T13:43:38.940822150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9mkhn,Uid:fe24d0b4-c6ad-4193-8b04-d3444aa00a66,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3f5cf86df4c12349f0976bbc5715e8956a3a62ee3b0cb809e07e50f4185bace\"" Dec 13 13:43:38.944893 containerd[1447]: time="2024-12-13T13:43:38.944859358Z" level=info msg="CreateContainer within sandbox \"e3f5cf86df4c12349f0976bbc5715e8956a3a62ee3b0cb809e07e50f4185bace\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:43:38.962838 containerd[1447]: time="2024-12-13T13:43:38.962785785Z" level=info msg="CreateContainer within sandbox \"e3f5cf86df4c12349f0976bbc5715e8956a3a62ee3b0cb809e07e50f4185bace\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"689706af2acb18dca0dc58eaf53d73d567c85e8110024f5855b6d2a87cd81893\"" Dec 13 13:43:38.963406 containerd[1447]: time="2024-12-13T13:43:38.963373250Z" level=info msg="StartContainer for \"689706af2acb18dca0dc58eaf53d73d567c85e8110024f5855b6d2a87cd81893\"" Dec 13 13:43:38.992676 systemd[1]: Started cri-containerd-689706af2acb18dca0dc58eaf53d73d567c85e8110024f5855b6d2a87cd81893.scope - libcontainer container 689706af2acb18dca0dc58eaf53d73d567c85e8110024f5855b6d2a87cd81893. Dec 13 13:43:39.026906 containerd[1447]: time="2024-12-13T13:43:39.026634923Z" level=info msg="StartContainer for \"689706af2acb18dca0dc58eaf53d73d567c85e8110024f5855b6d2a87cd81893\" returns successfully" Dec 13 13:43:39.885820 systemd-networkd[1357]: veth87739077: Gained IPv6LL Dec 13 13:43:40.002109 kubelet[2665]: I1213 13:43:40.001985 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9mkhn" podStartSLOduration=24.001946723 podStartE2EDuration="24.001946723s" podCreationTimestamp="2024-12-13 13:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:43:39.999274247 +0000 UTC m=+37.472183851" watchObservedRunningTime="2024-12-13 13:43:40.001946723 +0000 UTC m=+37.474856317" Dec 13 13:43:53.608051 systemd[1]: Started sshd@7-172.24.4.155:22-172.24.4.1:47600.service - OpenSSH per-connection server daemon (172.24.4.1:47600). Dec 13 13:43:54.939089 sshd[3620]: Accepted publickey for core from 172.24.4.1 port 47600 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:43:54.942077 sshd-session[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:43:54.958217 systemd-logind[1435]: New session 10 of user core. Dec 13 13:43:54.969857 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 13:43:55.680683 sshd[3622]: Connection closed by 172.24.4.1 port 47600 Dec 13 13:43:55.681863 sshd-session[3620]: pam_unix(sshd:session): session closed for user core Dec 13 13:43:55.687169 systemd[1]: sshd@7-172.24.4.155:22-172.24.4.1:47600.service: Deactivated successfully. Dec 13 13:43:55.692345 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 13:43:55.696066 systemd-logind[1435]: Session 10 logged out. Waiting for processes to exit. Dec 13 13:43:55.698503 systemd-logind[1435]: Removed session 10. 
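The pod_startup_latency_tracker entries report podStartSLOduration as the gap between podCreationTimestamp and the watch-observed running time; for coredns-7db6d8ff4d-9mkhn above, 13:43:40.001946723 minus 13:43:16 gives the reported 24.001946723s (the same relation holds for kube-proxy-bc69v earlier). A tiny Go check of that arithmetic using the timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log entry for coredns-7db6d8ff4d-9mkhn.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2024-12-13 13:43:16 +0000 UTC")
	observed, _ := time.Parse(layout, "2024-12-13 13:43:40.001946723 +0000 UTC")

	// Prints 24.001946723s, matching podStartSLOduration/podStartE2EDuration.
	fmt.Println(observed.Sub(created))
}
```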
Dec 13 13:44:00.702407 systemd[1]: Started sshd@8-172.24.4.155:22-172.24.4.1:42924.service - OpenSSH per-connection server daemon (172.24.4.1:42924). Dec 13 13:44:02.020166 sshd[3656]: Accepted publickey for core from 172.24.4.1 port 42924 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:44:02.024187 sshd-session[3656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:44:02.035244 systemd-logind[1435]: New session 11 of user core. Dec 13 13:44:02.045158 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 13:44:02.722969 sshd[3679]: Connection closed by 172.24.4.1 port 42924 Dec 13 13:44:02.722816 sshd-session[3656]: pam_unix(sshd:session): session closed for user core Dec 13 13:44:02.734813 systemd-logind[1435]: Session 11 logged out. Waiting for processes to exit. Dec 13 13:44:02.736035 systemd[1]: sshd@8-172.24.4.155:22-172.24.4.1:42924.service: Deactivated successfully. Dec 13 13:44:02.743168 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 13:44:02.748937 systemd-logind[1435]: Removed session 11. Dec 13 13:44:07.749121 systemd[1]: Started sshd@9-172.24.4.155:22-172.24.4.1:40330.service - OpenSSH per-connection server daemon (172.24.4.1:40330). Dec 13 13:44:09.105853 sshd[3715]: Accepted publickey for core from 172.24.4.1 port 40330 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:44:09.109330 sshd-session[3715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:44:09.122970 systemd-logind[1435]: New session 12 of user core. Dec 13 13:44:09.128887 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 13:44:09.885759 sshd[3718]: Connection closed by 172.24.4.1 port 40330 Dec 13 13:44:09.888570 sshd-session[3715]: pam_unix(sshd:session): session closed for user core Dec 13 13:44:09.910185 systemd[1]: sshd@9-172.24.4.155:22-172.24.4.1:40330.service: Deactivated successfully. Dec 13 13:44:09.919308 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 13:44:09.922367 systemd-logind[1435]: Session 12 logged out. Waiting for processes to exit. Dec 13 13:44:09.933106 systemd[1]: Started sshd@10-172.24.4.155:22-172.24.4.1:40334.service - OpenSSH per-connection server daemon (172.24.4.1:40334). Dec 13 13:44:09.936507 systemd-logind[1435]: Removed session 12. Dec 13 13:44:11.152242 sshd[3730]: Accepted publickey for core from 172.24.4.1 port 40334 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:44:11.154986 sshd-session[3730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:44:11.166126 systemd-logind[1435]: New session 13 of user core. Dec 13 13:44:11.176996 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 13:44:11.897304 sshd[3732]: Connection closed by 172.24.4.1 port 40334 Dec 13 13:44:11.901444 sshd-session[3730]: pam_unix(sshd:session): session closed for user core Dec 13 13:44:11.913181 systemd[1]: sshd@10-172.24.4.155:22-172.24.4.1:40334.service: Deactivated successfully. Dec 13 13:44:11.918634 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 13:44:11.921130 systemd-logind[1435]: Session 13 logged out. Waiting for processes to exit. Dec 13 13:44:11.932187 systemd[1]: Started sshd@11-172.24.4.155:22-172.24.4.1:40340.service - OpenSSH per-connection server daemon (172.24.4.1:40340). Dec 13 13:44:11.937455 systemd-logind[1435]: Removed session 13. 
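Each accepted SSH connection above runs under its own transient unit such as sshd@8-172.24.4.155:22-172.24.4.1:42924.service, whose instance part packs a counter together with the local and remote endpoints. A small Go sketch that splits such an instance name; the field names are illustrative, and it assumes IPv4 endpoints with no extra dashes.

```go
package main

import (
	"fmt"
	"strings"
)

// connInfo holds the pieces of a per-connection sshd unit instance such as
// "8-172.24.4.155:22-172.24.4.1:42924" (counter, local endpoint, remote endpoint).
type connInfo struct {
	Counter string
	Local   string
	Remote  string
}

func parseInstance(unit string) (connInfo, error) {
	name := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(name, "-", 3)
	if len(parts) != 3 {
		return connInfo{}, fmt.Errorf("unexpected instance %q", name)
	}
	return connInfo{Counter: parts[0], Local: parts[1], Remote: parts[2]}, nil
}

func main() {
	info, err := parseInstance("sshd@8-172.24.4.155:22-172.24.4.1:42924.service")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", info)
}
```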
Dec 13 13:44:13.141596 sshd[3762]: Accepted publickey for core from 172.24.4.1 port 40340 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:44:13.145040 sshd-session[3762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:44:13.155848 systemd-logind[1435]: New session 14 of user core. Dec 13 13:44:13.171051 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 13:44:13.973490 sshd[3764]: Connection closed by 172.24.4.1 port 40340 Dec 13 13:44:13.973833 sshd-session[3762]: pam_unix(sshd:session): session closed for user core Dec 13 13:44:13.980426 systemd-logind[1435]: Session 14 logged out. Waiting for processes to exit. Dec 13 13:44:13.982073 systemd[1]: sshd@11-172.24.4.155:22-172.24.4.1:40340.service: Deactivated successfully. Dec 13 13:44:13.987217 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 13:44:13.989730 systemd-logind[1435]: Removed session 14. Dec 13 13:44:18.999188 systemd[1]: Started sshd@12-172.24.4.155:22-172.24.4.1:38802.service - OpenSSH per-connection server daemon (172.24.4.1:38802). Dec 13 13:44:20.214017 sshd[3799]: Accepted publickey for core from 172.24.4.1 port 38802 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:44:20.216915 sshd-session[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:44:20.227622 systemd-logind[1435]: New session 15 of user core. Dec 13 13:44:20.232896 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 13:44:20.800082 sshd[3801]: Connection closed by 172.24.4.1 port 38802 Dec 13 13:44:20.801946 sshd-session[3799]: pam_unix(sshd:session): session closed for user core Dec 13 13:44:20.814492 systemd[1]: sshd@12-172.24.4.155:22-172.24.4.1:38802.service: Deactivated successfully. Dec 13 13:44:20.818012 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 13:44:20.822917 systemd-logind[1435]: Session 15 logged out. Waiting for processes to exit. Dec 13 13:44:20.828123 systemd[1]: Started sshd@13-172.24.4.155:22-172.24.4.1:38810.service - OpenSSH per-connection server daemon (172.24.4.1:38810). Dec 13 13:44:20.835456 systemd-logind[1435]: Removed session 15. Dec 13 13:44:22.076400 sshd[3812]: Accepted publickey for core from 172.24.4.1 port 38810 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:44:22.079215 sshd-session[3812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:44:22.088753 systemd-logind[1435]: New session 16 of user core. Dec 13 13:44:22.100921 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 13:44:23.776143 sshd[3835]: Connection closed by 172.24.4.1 port 38810 Dec 13 13:44:23.775905 sshd-session[3812]: pam_unix(sshd:session): session closed for user core Dec 13 13:44:23.789028 systemd[1]: sshd@13-172.24.4.155:22-172.24.4.1:38810.service: Deactivated successfully. Dec 13 13:44:23.793209 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 13:44:23.796595 systemd-logind[1435]: Session 16 logged out. Waiting for processes to exit. Dec 13 13:44:23.804162 systemd[1]: Started sshd@14-172.24.4.155:22-172.24.4.1:38814.service - OpenSSH per-connection server daemon (172.24.4.1:38814). Dec 13 13:44:23.808464 systemd-logind[1435]: Removed session 16. 
Dec 13 13:44:25.008002 sshd[3844]: Accepted publickey for core from 172.24.4.1 port 38814 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:44:25.010815 sshd-session[3844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:44:25.022010 systemd-logind[1435]: New session 17 of user core. Dec 13 13:44:25.027805 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 13:44:27.490911 sshd[3846]: Connection closed by 172.24.4.1 port 38814 Dec 13 13:44:27.492781 sshd-session[3844]: pam_unix(sshd:session): session closed for user core Dec 13 13:44:27.501975 systemd[1]: sshd@14-172.24.4.155:22-172.24.4.1:38814.service: Deactivated successfully. Dec 13 13:44:27.504866 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 13:44:27.506573 systemd-logind[1435]: Session 17 logged out. Waiting for processes to exit. Dec 13 13:44:27.514987 systemd[1]: Started sshd@15-172.24.4.155:22-172.24.4.1:57118.service - OpenSSH per-connection server daemon (172.24.4.1:57118). Dec 13 13:44:27.518731 systemd-logind[1435]: Removed session 17. Dec 13 13:44:28.706072 sshd[3886]: Accepted publickey for core from 172.24.4.1 port 57118 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:44:28.708786 sshd-session[3886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:44:28.721636 systemd-logind[1435]: New session 18 of user core. Dec 13 13:44:28.727828 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 13:44:29.800806 sshd[3888]: Connection closed by 172.24.4.1 port 57118 Dec 13 13:44:29.799050 sshd-session[3886]: pam_unix(sshd:session): session closed for user core Dec 13 13:44:29.814132 systemd[1]: sshd@15-172.24.4.155:22-172.24.4.1:57118.service: Deactivated successfully. Dec 13 13:44:29.820030 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 13:44:29.824859 systemd-logind[1435]: Session 18 logged out. Waiting for processes to exit. Dec 13 13:44:29.831078 systemd[1]: Started sshd@16-172.24.4.155:22-172.24.4.1:57126.service - OpenSSH per-connection server daemon (172.24.4.1:57126). Dec 13 13:44:29.836300 systemd-logind[1435]: Removed session 18. Dec 13 13:44:31.032311 sshd[3897]: Accepted publickey for core from 172.24.4.1 port 57126 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:44:31.035213 sshd-session[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:44:31.047690 systemd-logind[1435]: New session 19 of user core. Dec 13 13:44:31.058816 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 13:44:31.853607 sshd[3899]: Connection closed by 172.24.4.1 port 57126 Dec 13 13:44:31.854408 sshd-session[3897]: pam_unix(sshd:session): session closed for user core Dec 13 13:44:31.863109 systemd[1]: sshd@16-172.24.4.155:22-172.24.4.1:57126.service: Deactivated successfully. Dec 13 13:44:31.870998 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 13:44:31.876193 systemd-logind[1435]: Session 19 logged out. Waiting for processes to exit. Dec 13 13:44:31.878949 systemd-logind[1435]: Removed session 19. Dec 13 13:44:36.875080 systemd[1]: Started sshd@17-172.24.4.155:22-172.24.4.1:50346.service - OpenSSH per-connection server daemon (172.24.4.1:50346). 
Dec 13 13:44:38.072242 sshd[3939]: Accepted publickey for core from 172.24.4.1 port 50346 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:44:38.074980 sshd-session[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:44:38.087670 systemd-logind[1435]: New session 20 of user core. Dec 13 13:44:38.096888 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 13:44:38.852961 sshd[3956]: Connection closed by 172.24.4.1 port 50346 Dec 13 13:44:38.854172 sshd-session[3939]: pam_unix(sshd:session): session closed for user core Dec 13 13:44:38.862178 systemd[1]: sshd@17-172.24.4.155:22-172.24.4.1:50346.service: Deactivated successfully. Dec 13 13:44:38.871488 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 13:44:38.876498 systemd-logind[1435]: Session 20 logged out. Waiting for processes to exit. Dec 13 13:44:38.879046 systemd-logind[1435]: Removed session 20. Dec 13 13:44:43.878508 systemd[1]: Started sshd@18-172.24.4.155:22-172.24.4.1:50358.service - OpenSSH per-connection server daemon (172.24.4.1:50358). Dec 13 13:44:45.221017 sshd[3987]: Accepted publickey for core from 172.24.4.1 port 50358 ssh2: RSA SHA256:gMyySNlkobtnegIUOgKiq8X7+FvfBix4+97j05Vtzjs Dec 13 13:44:45.223224 sshd-session[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:44:45.231452 systemd-logind[1435]: New session 21 of user core. Dec 13 13:44:45.238719 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 13:44:46.269216 sshd[3989]: Connection closed by 172.24.4.1 port 50358 Dec 13 13:44:46.269970 sshd-session[3987]: pam_unix(sshd:session): session closed for user core Dec 13 13:44:46.278705 systemd[1]: sshd@18-172.24.4.155:22-172.24.4.1:50358.service: Deactivated successfully. Dec 13 13:44:46.282969 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 13:44:46.285867 systemd-logind[1435]: Session 21 logged out. Waiting for processes to exit. Dec 13 13:44:46.288390 systemd-logind[1435]: Removed session 21.