Jan 13 20:31:05.047415 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:31:05.047442 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:31:05.047452 kernel: BIOS-provided physical RAM map:
Jan 13 20:31:05.047460 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:31:05.047468 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:31:05.047478 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:31:05.047487 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 13 20:31:05.047495 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 13 20:31:05.049857 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:31:05.049869 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:31:05.049877 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 13 20:31:05.049885 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:31:05.049893 kernel: NX (Execute Disable) protection: active
Jan 13 20:31:05.049902 kernel: APIC: Static calls initialized
Jan 13 20:31:05.049915 kernel: SMBIOS 3.0.0 present.
Jan 13 20:31:05.049923 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 13 20:31:05.049931 kernel: Hypervisor detected: KVM
Jan 13 20:31:05.049940 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:31:05.049948 kernel: kvm-clock: using sched offset of 3418048636 cycles
Jan 13 20:31:05.049958 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:31:05.049967 kernel: tsc: Detected 1996.249 MHz processor
Jan 13 20:31:05.049976 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:31:05.049985 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:31:05.049994 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 13 20:31:05.050003 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:31:05.050012 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:31:05.050020 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 13 20:31:05.050029 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:31:05.050040 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 13 20:31:05.050048 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:31:05.050057 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:31:05.050066 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:31:05.050074 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 13 20:31:05.050083 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:31:05.050091 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:31:05.050100 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 13 20:31:05.050108 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 13 20:31:05.050119 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 13 20:31:05.050127 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 13 20:31:05.050136 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 13 20:31:05.050148 kernel: No NUMA configuration found
Jan 13 20:31:05.050157 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 13 20:31:05.050166 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jan 13 20:31:05.050175 kernel: Zone ranges:
Jan 13 20:31:05.050185 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:31:05.050194 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 20:31:05.050203 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 20:31:05.050211 kernel: Movable zone start for each node
Jan 13 20:31:05.050220 kernel: Early memory node ranges
Jan 13 20:31:05.050229 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:31:05.050237 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 13 20:31:05.050246 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 20:31:05.050257 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 13 20:31:05.050266 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:31:05.050274 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:31:05.050283 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 13 20:31:05.050292 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 20:31:05.050301 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:31:05.050310 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:31:05.050318 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:31:05.050327 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:31:05.050338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:31:05.050347 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:31:05.050356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:31:05.050364 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:31:05.050373 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 20:31:05.050382 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:31:05.050390 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 13 20:31:05.050399 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:31:05.050408 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:31:05.050420 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 20:31:05.050428 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 20:31:05.050437 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 20:31:05.050446 kernel: pcpu-alloc: [0] 0 1
Jan 13 20:31:05.050454 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 13 20:31:05.050465 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:31:05.050474 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:31:05.050483 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:31:05.050494 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:31:05.050502 kernel: Fallback order for Node 0: 0
Jan 13 20:31:05.050511 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 13 20:31:05.050520 kernel: Policy zone: Normal
Jan 13 20:31:05.050528 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:31:05.050537 kernel: software IO TLB: area num 2.
Jan 13 20:31:05.050546 kernel: Memory: 3964156K/4193772K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 229356K reserved, 0K cma-reserved)
Jan 13 20:31:05.050555 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:31:05.050566 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:31:05.050575 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:31:05.050584 kernel: Dynamic Preempt: voluntary
Jan 13 20:31:05.050592 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:31:05.050602 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:31:05.050611 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:31:05.050620 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:31:05.050629 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:31:05.050638 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:31:05.050647 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:31:05.050658 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:31:05.050666 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 20:31:05.050675 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:31:05.050684 kernel: Console: colour VGA+ 80x25
Jan 13 20:31:05.050693 kernel: printk: console [tty0] enabled
Jan 13 20:31:05.050701 kernel: printk: console [ttyS0] enabled
Jan 13 20:31:05.050710 kernel: ACPI: Core revision 20230628
Jan 13 20:31:05.050719 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:31:05.050728 kernel: x2apic enabled
Jan 13 20:31:05.050739 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:31:05.050748 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 20:31:05.050756 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 20:31:05.050765 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 13 20:31:05.050774 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 13 20:31:05.050783 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 13 20:31:05.050792 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:31:05.050801 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:31:05.050864 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:31:05.050878 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:31:05.050887 kernel: Speculative Store Bypass: Vulnerable
Jan 13 20:31:05.050900 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 13 20:31:05.050931 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:31:05.050975 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:31:05.051011 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:31:05.051040 kernel: landlock: Up and running.
Jan 13 20:31:05.051062 kernel: SELinux: Initializing.
Jan 13 20:31:05.051071 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:31:05.051081 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:31:05.051091 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 13 20:31:05.051100 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:31:05.051112 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:31:05.051121 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:31:05.051130 kernel: Performance Events: AMD PMU driver.
Jan 13 20:31:05.051139 kernel: ... version: 0
Jan 13 20:31:05.051150 kernel: ... bit width: 48
Jan 13 20:31:05.051160 kernel: ... generic registers: 4
Jan 13 20:31:05.051169 kernel: ... value mask: 0000ffffffffffff
Jan 13 20:31:05.051178 kernel: ... max period: 00007fffffffffff
Jan 13 20:31:05.051187 kernel: ... fixed-purpose events: 0
Jan 13 20:31:05.051196 kernel: ... event mask: 000000000000000f
Jan 13 20:31:05.051205 kernel: signal: max sigframe size: 1440
Jan 13 20:31:05.051215 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:31:05.051224 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:31:05.051233 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:31:05.051264 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:31:05.051274 kernel: .... node #0, CPUs: #1
Jan 13 20:31:05.051283 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:31:05.051292 kernel: smpboot: Max logical packages: 2
Jan 13 20:31:05.051302 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 13 20:31:05.051311 kernel: devtmpfs: initialized
Jan 13 20:31:05.051320 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:31:05.051330 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:31:05.051339 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:31:05.051351 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:31:05.051360 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:31:05.051369 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:31:05.051378 kernel: audit: type=2000 audit(1736800263.748:1): state=initialized audit_enabled=0 res=1
Jan 13 20:31:05.051388 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:31:05.051397 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:31:05.051406 kernel: cpuidle: using governor menu
Jan 13 20:31:05.051416 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:31:05.051425 kernel: dca service started, version 1.12.1
Jan 13 20:31:05.051436 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:31:05.051445 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:31:05.051454 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:31:05.051464 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:31:05.051473 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:31:05.051482 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:31:05.051491 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:31:05.051500 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:31:05.051510 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:31:05.051521 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:31:05.051530 kernel: ACPI: Interpreter enabled
Jan 13 20:31:05.051539 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 20:31:05.051548 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:31:05.051558 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:31:05.051567 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:31:05.051576 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 13 20:31:05.051585 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:31:05.051731 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:31:05.052986 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 20:31:05.053086 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 20:31:05.053101 kernel: acpiphp: Slot [3] registered
Jan 13 20:31:05.053110 kernel: acpiphp: Slot [4] registered
Jan 13 20:31:05.053120 kernel: acpiphp: Slot [5] registered
Jan 13 20:31:05.053129 kernel: acpiphp: Slot [6] registered
Jan 13 20:31:05.053138 kernel: acpiphp: Slot [7] registered
Jan 13 20:31:05.053157 kernel: acpiphp: Slot [8] registered
Jan 13 20:31:05.053166 kernel: acpiphp: Slot [9] registered
Jan 13 20:31:05.053175 kernel: acpiphp: Slot [10] registered
Jan 13 20:31:05.053184 kernel: acpiphp: Slot [11] registered
Jan 13 20:31:05.053194 kernel: acpiphp: Slot [12] registered
Jan 13 20:31:05.053203 kernel: acpiphp: Slot [13] registered
Jan 13 20:31:05.053212 kernel: acpiphp: Slot [14] registered
Jan 13 20:31:05.053221 kernel: acpiphp: Slot [15] registered
Jan 13 20:31:05.053230 kernel: acpiphp: Slot [16] registered
Jan 13 20:31:05.053239 kernel: acpiphp: Slot [17] registered
Jan 13 20:31:05.053250 kernel: acpiphp: Slot [18] registered
Jan 13 20:31:05.053259 kernel: acpiphp: Slot [19] registered
Jan 13 20:31:05.053268 kernel: acpiphp: Slot [20] registered
Jan 13 20:31:05.053277 kernel: acpiphp: Slot [21] registered
Jan 13 20:31:05.053287 kernel: acpiphp: Slot [22] registered
Jan 13 20:31:05.053296 kernel: acpiphp: Slot [23] registered
Jan 13 20:31:05.053305 kernel: acpiphp: Slot [24] registered
Jan 13 20:31:05.053314 kernel: acpiphp: Slot [25] registered
Jan 13 20:31:05.053323 kernel: acpiphp: Slot [26] registered
Jan 13 20:31:05.053334 kernel: acpiphp: Slot [27] registered
Jan 13 20:31:05.053343 kernel: acpiphp: Slot [28] registered
Jan 13 20:31:05.053352 kernel: acpiphp: Slot [29] registered
Jan 13 20:31:05.053361 kernel: acpiphp: Slot [30] registered
Jan 13 20:31:05.053370 kernel: acpiphp: Slot [31] registered
Jan 13 20:31:05.053379 kernel: PCI host bridge to bus 0000:00
Jan 13 20:31:05.053473 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:31:05.053557 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:31:05.053643 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:31:05.053724 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 20:31:05.053804 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 13 20:31:05.053905 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:31:05.054011 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 20:31:05.054114 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 20:31:05.054231 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 13 20:31:05.054331 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 13 20:31:05.054421 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 13 20:31:05.054511 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 13 20:31:05.054601 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 13 20:31:05.054690 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 13 20:31:05.054862 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 13 20:31:05.055009 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 13 20:31:05.055100 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 13 20:31:05.055199 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 13 20:31:05.055291 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 13 20:31:05.055383 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 13 20:31:05.055473 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 13 20:31:05.055563 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 13 20:31:05.055658 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:31:05.055788 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:31:05.056323 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 13 20:31:05.056431 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 13 20:31:05.056522 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 13 20:31:05.056610 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 13 20:31:05.056708 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 13 20:31:05.056843 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 20:31:05.056943 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 13 20:31:05.057032 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 13 20:31:05.057128 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 13 20:31:05.057217 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 13 20:31:05.057306 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 13 20:31:05.057402 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:31:05.057501 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 13 20:31:05.057588 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 13 20:31:05.057676 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 13 20:31:05.057689 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:31:05.057699 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:31:05.057709 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:31:05.057718 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:31:05.057727 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 20:31:05.057740 kernel: iommu: Default domain type: Translated
Jan 13 20:31:05.057750 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:31:05.059844 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:31:05.059861 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:31:05.059871 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:31:05.059880 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 13 20:31:05.059984 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 13 20:31:05.060075 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 13 20:31:05.060171 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:31:05.060185 kernel: vgaarb: loaded
Jan 13 20:31:05.060194 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:31:05.060204 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:31:05.060213 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:31:05.060222 kernel: pnp: PnP ACPI init
Jan 13 20:31:05.060313 kernel: pnp 00:03: [dma 2]
Jan 13 20:31:05.060328 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 20:31:05.060338 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:31:05.060351 kernel: NET: Registered PF_INET protocol family
Jan 13 20:31:05.060360 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:31:05.060369 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:31:05.060379 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:31:05.060388 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:31:05.060397 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:31:05.060407 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:31:05.060426 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:31:05.060435 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:31:05.060447 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:31:05.060456 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:31:05.060538 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:31:05.060617 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:31:05.060695 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:31:05.060773 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 13 20:31:05.060921 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 13 20:31:05.061015 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 13 20:31:05.061110 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 20:31:05.061123 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:31:05.061133 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 20:31:05.061142 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 13 20:31:05.061152 kernel: Initialise system trusted keyrings
Jan 13 20:31:05.061162 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:31:05.061171 kernel: Key type asymmetric registered
Jan 13 20:31:05.061180 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:31:05.061192 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:31:05.061202 kernel: io scheduler mq-deadline registered
Jan 13 20:31:05.061211 kernel: io scheduler kyber registered
Jan 13 20:31:05.061220 kernel: io scheduler bfq registered
Jan 13 20:31:05.061230 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:31:05.061240 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 13 20:31:05.061249 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 13 20:31:05.061259 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 20:31:05.061268 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 13 20:31:05.061278 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:31:05.061289 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:31:05.061298 kernel: random: crng init done
Jan 13 20:31:05.061307 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:31:05.061317 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:31:05.061326 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:31:05.061421 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 20:31:05.061436 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:31:05.061514 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 20:31:05.061601 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:31:04 UTC (1736800264)
Jan 13 20:31:05.061685 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 13 20:31:05.061698 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 20:31:05.061707 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:31:05.061717 kernel: Segment Routing with IPv6
Jan 13 20:31:05.061726 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:31:05.061735 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:31:05.061744 kernel: Key type dns_resolver registered
Jan 13 20:31:05.061756 kernel: IPI shorthand broadcast: enabled
Jan 13 20:31:05.061766 kernel: sched_clock: Marking stable (1004007311, 173792125)->(1213504798, -35705362)
Jan 13 20:31:05.061775 kernel: registered taskstats version 1
Jan 13 20:31:05.061785 kernel: Loading compiled-in X.509 certificates
Jan 13 20:31:05.061795 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 13 20:31:05.061804 kernel: Key type .fscrypt registered
Jan 13 20:31:05.063856 kernel: Key type fscrypt-provisioning registered
Jan 13 20:31:05.063867 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:31:05.063877 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:31:05.063890 kernel: ima: No architecture policies found
Jan 13 20:31:05.063899 kernel: clk: Disabling unused clocks
Jan 13 20:31:05.063908 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 13 20:31:05.063918 kernel: Write protecting the kernel read-only data: 38912k
Jan 13 20:31:05.063927 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 13 20:31:05.063937 kernel: Run /init as init process
Jan 13 20:31:05.063946 kernel: with arguments:
Jan 13 20:31:05.063955 kernel: /init
Jan 13 20:31:05.063964 kernel: with environment:
Jan 13 20:31:05.063975 kernel: HOME=/
Jan 13 20:31:05.063984 kernel: TERM=linux
Jan 13 20:31:05.063993 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:31:05.064005 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:31:05.064018 systemd[1]: Detected virtualization kvm.
Jan 13 20:31:05.064028 systemd[1]: Detected architecture x86-64.
Jan 13 20:31:05.064039 systemd[1]: Running in initrd.
Jan 13 20:31:05.064050 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:31:05.064060 systemd[1]: Hostname set to .
Jan 13 20:31:05.064071 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:31:05.064081 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:31:05.064091 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:31:05.064101 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:31:05.064112 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:31:05.064131 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:31:05.064144 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:31:05.064154 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:31:05.064166 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:31:05.064177 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:31:05.064188 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:31:05.064200 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:31:05.064211 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:31:05.064221 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:31:05.064231 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:31:05.064242 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:31:05.064252 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:31:05.064262 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:31:05.064273 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:31:05.064286 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:31:05.064296 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:31:05.064307 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:31:05.064317 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:31:05.064327 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:31:05.064338 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:31:05.064348 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:31:05.064359 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:31:05.064369 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:31:05.064381 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:31:05.064392 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:31:05.064402 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:31:05.064442 systemd-journald[185]: Collecting audit messages is disabled.
Jan 13 20:31:05.064471 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:31:05.064482 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:31:05.064493 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:31:05.064507 systemd-journald[185]: Journal started
Jan 13 20:31:05.064531 systemd-journald[185]: Runtime Journal (/run/log/journal/c181c7a01b954f89806535e1b8d58508) is 8.0M, max 78.3M, 70.3M free.
Jan 13 20:31:05.051288 systemd-modules-load[186]: Inserted module 'overlay'
Jan 13 20:31:05.073827 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:31:05.087830 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:31:05.089830 kernel: Bridge firewalling registered
Jan 13 20:31:05.089840 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jan 13 20:31:05.116633 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:31:05.122281 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:31:05.122955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:31:05.127185 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:31:05.131938 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:31:05.133866 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:31:05.137062 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:31:05.140933 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:31:05.157339 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:31:05.158205 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:31:05.160365 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:31:05.171940 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:31:05.173715 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:31:05.180932 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:31:05.197946 dracut-cmdline[224]: dracut-dracut-053
Jan 13 20:31:05.203650 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:31:05.210082 systemd-resolved[221]: Positive Trust Anchors:
Jan 13 20:31:05.210738 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:31:05.210781 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:31:05.216685 systemd-resolved[221]: Defaulting to hostname 'linux'.
Jan 13 20:31:05.218584 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:31:05.219202 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:31:05.307913 kernel: SCSI subsystem initialized
Jan 13 20:31:05.319916 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:31:05.332989 kernel: iscsi: registered transport (tcp)
Jan 13 20:31:05.355161 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:31:05.355226 kernel: QLogic iSCSI HBA Driver
Jan 13 20:31:05.411159 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:31:05.416114 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:31:05.451538 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:31:05.451579 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:31:05.452284 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:31:05.497920 kernel: raid6: sse2x4 gen() 12868 MB/s
Jan 13 20:31:05.515868 kernel: raid6: sse2x2 gen() 14611 MB/s
Jan 13 20:31:05.534176 kernel: raid6: sse2x1 gen() 9852 MB/s
Jan 13 20:31:05.534239 kernel: raid6: using algorithm sse2x2 gen() 14611 MB/s
Jan 13 20:31:05.553275 kernel: raid6: .... xor() 9112 MB/s, rmw enabled
Jan 13 20:31:05.553323 kernel: raid6: using ssse3x2 recovery algorithm
Jan 13 20:31:05.606867 kernel: xor: measuring software checksum speed
Jan 13 20:31:05.611283 kernel: prefetch64-sse : 6871 MB/sec
Jan 13 20:31:05.611345 kernel: generic_sse : 6599 MB/sec
Jan 13 20:31:05.614292 kernel: xor: using function: prefetch64-sse (6871 MB/sec)
Jan 13 20:31:05.834883 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:31:05.851387 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:31:05.857946 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:31:05.906033 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Jan 13 20:31:05.916893 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:31:05.928100 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:31:05.957038 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Jan 13 20:31:05.999149 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:31:06.004976 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:31:06.049901 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:31:06.059103 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:31:06.082435 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:31:06.098023 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:31:06.100600 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:31:06.102396 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:31:06.111011 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:31:06.132399 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:31:06.147360 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 13 20:31:06.189177 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 13 20:31:06.189324 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:31:06.189340 kernel: GPT:17805311 != 20971519
Jan 13 20:31:06.189354 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:31:06.189371 kernel: GPT:17805311 != 20971519
Jan 13 20:31:06.189386 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:31:06.189399 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:31:06.159650 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:31:06.190881 kernel: libata version 3.00 loaded.
Jan 13 20:31:06.159771 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:31:06.160582 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:31:06.161263 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:31:06.161439 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:31:06.165238 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:31:06.173065 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:31:06.198490 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 13 20:31:06.204001 kernel: scsi host0: ata_piix
Jan 13 20:31:06.204130 kernel: scsi host1: ata_piix
Jan 13 20:31:06.204243 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 13 20:31:06.204265 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 13 20:31:06.235842 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (456)
Jan 13 20:31:06.248842 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (450)
Jan 13 20:31:06.249295 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:31:06.272634 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:31:06.282977 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:31:06.288748 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:31:06.293405 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:31:06.293978 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:31:06.304980 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:31:06.307605 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:31:06.318949 disk-uuid[504]: Primary Header is updated.
Jan 13 20:31:06.318949 disk-uuid[504]: Secondary Entries is updated.
Jan 13 20:31:06.318949 disk-uuid[504]: Secondary Header is updated.
Jan 13 20:31:06.329681 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:31:06.340882 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:31:07.348890 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:31:07.349918 disk-uuid[505]: The operation has completed successfully.
Jan 13 20:31:07.428030 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:31:07.428166 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:31:07.454975 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:31:07.472393 sh[524]: Success
Jan 13 20:31:07.506841 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 13 20:31:07.602226 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:31:07.610963 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:31:07.614085 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:31:07.636876 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 13 20:31:07.636961 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:31:07.639865 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:31:07.644872 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:31:07.648611 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:31:07.666020 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:31:07.668051 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:31:07.679076 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:31:07.683497 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:31:07.696890 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:31:07.696982 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:31:07.697014 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:31:07.702876 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:31:07.713564 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:31:07.716942 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:31:07.727935 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:31:07.734168 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:31:07.819500 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:31:07.827288 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:31:07.887287 systemd-networkd[708]: lo: Link UP
Jan 13 20:31:07.887296 systemd-networkd[708]: lo: Gained carrier
Jan 13 20:31:07.888437 systemd-networkd[708]: Enumeration completed
Jan 13 20:31:07.889097 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:31:07.889101 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:31:07.889393 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:31:07.890187 systemd-networkd[708]: eth0: Link UP
Jan 13 20:31:07.890191 systemd-networkd[708]: eth0: Gained carrier
Jan 13 20:31:07.890199 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:31:07.890660 systemd[1]: Reached target network.target - Network.
Jan 13 20:31:07.904899 systemd-networkd[708]: eth0: DHCPv4 address 172.24.4.69/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 20:31:07.913965 ignition[605]: Ignition 2.20.0
Jan 13 20:31:07.913977 ignition[605]: Stage: fetch-offline
Jan 13 20:31:07.915560 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:31:07.914015 ignition[605]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:07.914026 ignition[605]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:31:07.914123 ignition[605]: parsed url from cmdline: ""
Jan 13 20:31:07.914127 ignition[605]: no config URL provided
Jan 13 20:31:07.914133 ignition[605]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:31:07.914142 ignition[605]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:31:07.914147 ignition[605]: failed to fetch config: resource requires networking
Jan 13 20:31:07.914549 ignition[605]: Ignition finished successfully
Jan 13 20:31:07.923992 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:31:07.936722 ignition[718]: Ignition 2.20.0
Jan 13 20:31:07.936734 ignition[718]: Stage: fetch
Jan 13 20:31:07.936930 ignition[718]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:07.936942 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:31:07.937037 ignition[718]: parsed url from cmdline: ""
Jan 13 20:31:07.937041 ignition[718]: no config URL provided
Jan 13 20:31:07.937047 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:31:07.937055 ignition[718]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:31:07.937132 ignition[718]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 13 20:31:07.937163 ignition[718]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 13 20:31:07.937195 ignition[718]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 13 20:31:08.304990 ignition[718]: GET result: OK
Jan 13 20:31:08.306090 ignition[718]: parsing config with SHA512: d7e959c3682ca85f91e14cf5b11305e494d468b86f5ee841a45397af3a64738c16d960353c67773e4891086a66360edb92108d3b9be441ed35cb9ce4f43ef8af
Jan 13 20:31:08.318956 unknown[718]: fetched base config from "system"
Jan 13 20:31:08.318993 unknown[718]: fetched base config from "system"
Jan 13 20:31:08.320315 ignition[718]: fetch: fetch complete
Jan 13 20:31:08.319013 unknown[718]: fetched user config from "openstack"
Jan 13 20:31:08.320332 ignition[718]: fetch: fetch passed
Jan 13 20:31:08.324543 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:31:08.320485 ignition[718]: Ignition finished successfully
Jan 13 20:31:08.344124 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:31:08.373742 ignition[725]: Ignition 2.20.0
Jan 13 20:31:08.373774 ignition[725]: Stage: kargs
Jan 13 20:31:08.374269 ignition[725]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:08.374296 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:31:08.377316 ignition[725]: kargs: kargs passed
Jan 13 20:31:08.381251 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:31:08.377413 ignition[725]: Ignition finished successfully
Jan 13 20:31:08.389159 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:31:08.429185 ignition[731]: Ignition 2.20.0
Jan 13 20:31:08.429214 ignition[731]: Stage: disks
Jan 13 20:31:08.429621 ignition[731]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:08.429648 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:31:08.434179 ignition[731]: disks: disks passed
Jan 13 20:31:08.436201 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:31:08.434290 ignition[731]: Ignition finished successfully
Jan 13 20:31:08.439359 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:31:08.441274 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:31:08.443766 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:31:08.446654 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:31:08.449640 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:31:08.462133 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:31:08.491623 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 20:31:08.501944 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:31:08.510019 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:31:08.676831 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 13 20:31:08.676030 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:31:08.677602 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:31:08.688902 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:31:08.691889 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:31:08.693189 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:31:08.695969 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 13 20:31:08.697377 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:31:08.698349 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:31:08.709847 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (747)
Jan 13 20:31:08.716840 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:31:08.725834 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:31:08.725884 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:31:08.726525 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:31:08.748925 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:31:08.748985 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:31:08.755968 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:31:08.842524 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:31:08.848219 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:31:08.856103 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:31:08.863385 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:31:08.950381 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:31:08.954915 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:31:08.958036 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:31:08.965144 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:31:08.967163 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:31:08.999526 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:31:09.003584 ignition[863]: INFO : Ignition 2.20.0
Jan 13 20:31:09.003584 ignition[863]: INFO : Stage: mount
Jan 13 20:31:09.003584 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:09.003584 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:31:09.010407 ignition[863]: INFO : mount: mount passed
Jan 13 20:31:09.010407 ignition[863]: INFO : Ignition finished successfully
Jan 13 20:31:09.005171 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:31:09.250289 systemd-networkd[708]: eth0: Gained IPv6LL
Jan 13 20:31:15.919910 coreos-metadata[749]: Jan 13 20:31:15.919 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 20:31:15.963706 coreos-metadata[749]: Jan 13 20:31:15.963 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 20:31:15.978286 coreos-metadata[749]: Jan 13 20:31:15.978 INFO Fetch successful
Jan 13 20:31:15.978286 coreos-metadata[749]: Jan 13 20:31:15.978 INFO wrote hostname ci-4186-1-0-0-dbcf9e2b85.novalocal to /sysroot/etc/hostname
Jan 13 20:31:15.981744 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 13 20:31:15.982000 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 13 20:31:15.996007 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:31:16.017124 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:31:16.046930 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (881)
Jan 13 20:31:16.059893 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:31:16.059979 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:31:16.060024 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:31:16.071898 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:31:16.077863 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:31:16.129672 ignition[899]: INFO : Ignition 2.20.0
Jan 13 20:31:16.129672 ignition[899]: INFO : Stage: files
Jan 13 20:31:16.133550 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:16.133550 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:31:16.136440 ignition[899]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:31:16.136440 ignition[899]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:31:16.136440 ignition[899]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:31:16.145472 ignition[899]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:31:16.146506 ignition[899]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:31:16.147721 unknown[899]: wrote ssh authorized keys file for user: core
Jan 13 20:31:16.148607 ignition[899]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:31:16.152090 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:31:16.153318 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 20:31:16.231128 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:31:16.543758 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:31:16.543758 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:31:16.546535 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:31:16.546535 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:31:16.546535 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:31:16.546535 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:31:16.546535 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:31:16.546535 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:31:16.546535 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:31:16.546535 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:31:16.546535 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:31:16.546535 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 20:31:16.546535 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 20:31:16.546535 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 20:31:16.546535 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 13 20:31:17.196859 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 20:31:19.255212 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 20:31:19.255212 ignition[899]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 20:31:19.261723 ignition[899]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:31:19.261723 ignition[899]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:31:19.261723 ignition[899]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 20:31:19.261723 ignition[899]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:31:19.261723 ignition[899]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:31:19.261723 ignition[899]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:31:19.261723 ignition[899]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:31:19.261723 ignition[899]: INFO : files: files passed
Jan 13 20:31:19.261723 ignition[899]: INFO : Ignition finished successfully
Jan 13 20:31:19.262844 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:31:19.271991 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:31:19.273536 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:31:19.282424 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:31:19.282566 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:31:19.294549 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:31:19.296549 initrd-setup-root-after-ignition[928]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:31:19.296549 initrd-setup-root-after-ignition[928]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:31:19.298670 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:31:19.299515 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:31:19.308986 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:31:19.342804 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:31:19.343045 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:31:19.345469 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:31:19.355609 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:31:19.357404 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:31:19.365080 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:31:19.379288 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:31:19.386067 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:31:19.402217 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:31:19.403546 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:31:19.406708 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:31:19.409504 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:31:19.409784 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:31:19.412960 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:31:19.414772 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:31:19.417658 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:31:19.420147 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:31:19.422632 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:31:19.425568 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:31:19.429586 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:31:19.433884 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:31:19.437174 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:31:19.440072 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:31:19.444894 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:31:19.445187 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:31:19.448331 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:31:19.450345 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:31:19.452723 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:31:19.455150 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:31:19.457437 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:31:19.457849 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:31:19.461305 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:31:19.461702 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:31:19.467697 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:31:19.468050 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:31:19.482709 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:31:19.487061 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:31:19.487625 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:31:19.487802 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:31:19.494429 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:31:19.494596 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:31:19.503901 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:31:19.504062 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 13 20:31:19.513502 ignition[952]: INFO : Ignition 2.20.0 Jan 13 20:31:19.513502 ignition[952]: INFO : Stage: umount Jan 13 20:31:19.514798 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:31:19.514798 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 20:31:19.514798 ignition[952]: INFO : umount: umount passed Jan 13 20:31:19.514798 ignition[952]: INFO : Ignition finished successfully Jan 13 20:31:19.515696 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:31:19.515827 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:31:19.517286 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:31:19.517370 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:31:19.518029 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:31:19.518076 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:31:19.519051 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:31:19.519092 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:31:19.520011 systemd[1]: Stopped target network.target - Network. Jan 13 20:31:19.521060 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:31:19.521106 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:31:19.522131 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:31:19.523081 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:31:19.526842 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:31:19.527652 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:31:19.528598 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:31:19.529763 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 13 20:31:19.529801 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:31:19.531886 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:31:19.531924 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:31:19.533059 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:31:19.533111 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:31:19.534066 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:31:19.534106 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:31:19.535283 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:31:19.538548 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:31:19.547424 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:31:19.547531 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:31:19.548179 systemd-networkd[708]: eth0: DHCPv6 lease lost Jan 13 20:31:19.550617 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:31:19.550728 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:31:19.552636 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:31:19.552926 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:31:19.557929 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:31:19.559204 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:31:19.559932 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:31:19.560519 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:31:19.560562 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:31:19.561143 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jan 13 20:31:19.561184 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:31:19.562417 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:31:19.562458 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:31:19.563612 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:31:19.577069 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:31:19.577215 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:31:19.578702 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:31:19.578762 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:31:19.580162 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:31:19.580195 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:31:19.581419 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:31:19.581463 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:31:19.583198 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:31:19.583242 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:31:19.584474 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:31:19.584517 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:31:19.596140 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:31:19.596683 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:31:19.596738 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:31:19.597337 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Jan 13 20:31:19.597378 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:31:19.598003 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:31:19.598043 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:31:19.599184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:31:19.599222 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:31:19.600796 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:31:19.600957 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:31:19.604626 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:31:19.604708 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:31:19.612451 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:31:19.685359 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:31:19.685584 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:31:19.688944 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:31:19.690761 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:31:19.690923 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:31:19.707161 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:31:19.724164 systemd[1]: Switching root. Jan 13 20:31:19.766565 systemd-journald[185]: Journal stopped Jan 13 20:31:21.194839 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). 
Jan 13 20:31:21.194894 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:31:21.194913 kernel: SELinux: policy capability open_perms=1 Jan 13 20:31:21.194925 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:31:21.194938 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:31:21.194950 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:31:21.194967 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:31:21.194980 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:31:21.194992 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:31:21.195006 kernel: audit: type=1403 audit(1736800280.141:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:31:21.195019 systemd[1]: Successfully loaded SELinux policy in 69.962ms. Jan 13 20:31:21.200741 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.657ms. Jan 13 20:31:21.200762 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:31:21.200776 systemd[1]: Detected virtualization kvm. Jan 13 20:31:21.200794 systemd[1]: Detected architecture x86-64. Jan 13 20:31:21.200844 systemd[1]: Detected first boot. Jan 13 20:31:21.200862 systemd[1]: Hostname set to . Jan 13 20:31:21.200875 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:31:21.200888 zram_generator::config[994]: No configuration found. Jan 13 20:31:21.200903 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:31:21.200917 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:31:21.200930 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 13 20:31:21.200946 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:31:21.200961 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:31:21.200974 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:31:21.200999 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:31:21.201013 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:31:21.201027 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:31:21.201041 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:31:21.201054 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:31:21.201067 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:31:21.201084 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:31:21.201098 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:31:21.201115 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:31:21.201128 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:31:21.201142 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:31:21.201156 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:31:21.201170 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:31:21.201183 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:31:21.201196 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jan 13 20:31:21.201212 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:31:21.201226 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:31:21.201239 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:31:21.201253 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:31:21.201266 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:31:21.201279 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:31:21.201295 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:31:21.201308 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:31:21.201322 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:31:21.201335 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:31:21.201349 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:31:21.201362 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:31:21.201375 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:31:21.201388 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:31:21.201402 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:31:21.201418 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:31:21.201434 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:31:21.201448 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:31:21.201462 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:31:21.201475 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 13 20:31:21.201489 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:31:21.201503 systemd[1]: Reached target machines.target - Containers. Jan 13 20:31:21.201516 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:31:21.201529 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:31:21.201544 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:31:21.201558 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:31:21.201571 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:31:21.201584 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:31:21.201598 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:31:21.201611 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:31:21.201624 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:31:21.201638 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:31:21.201653 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:31:21.201667 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:31:21.201680 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:31:21.201693 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:31:21.201706 kernel: loop: module loaded Jan 13 20:31:21.201720 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jan 13 20:31:21.201734 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:31:21.201747 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:31:21.201760 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:31:21.201776 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:31:21.201790 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:31:21.201803 systemd[1]: Stopped verity-setup.service. Jan 13 20:31:21.201841 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:31:21.201855 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:31:21.201892 systemd-journald[1087]: Collecting audit messages is disabled. Jan 13 20:31:21.201925 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:31:21.201942 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:31:21.201955 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:31:21.201969 systemd-journald[1087]: Journal started Jan 13 20:31:21.201997 systemd-journald[1087]: Runtime Journal (/run/log/journal/c181c7a01b954f89806535e1b8d58508) is 8.0M, max 78.3M, 70.3M free. Jan 13 20:31:20.847995 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:31:20.868754 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:31:20.869118 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:31:21.210075 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:31:21.213897 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:31:21.214550 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jan 13 20:31:21.217204 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:31:21.218047 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:31:21.219166 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:31:21.220742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:31:21.221931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:31:21.224102 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:31:21.224222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:31:21.225005 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:31:21.225120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:31:21.225857 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:31:21.227047 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:31:21.228339 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:31:21.241430 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:31:21.248047 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:31:21.249370 kernel: fuse: init (API version 7.39) Jan 13 20:31:21.249626 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:31:21.249665 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:31:21.251643 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:31:21.254958 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 13 20:31:21.255917 kernel: ACPI: bus type drm_connector registered Jan 13 20:31:21.257974 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:31:21.259973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:31:21.268977 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:31:21.272097 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:31:21.274916 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:31:21.281966 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:31:21.282552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:31:21.287014 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:31:21.293009 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:31:21.302051 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:31:21.304383 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:31:21.306326 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:31:21.306469 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:31:21.307245 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:31:21.307362 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:31:21.308576 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:31:21.310562 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 13 20:31:21.325010 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:31:21.335002 systemd-journald[1087]: Time spent on flushing to /var/log/journal/c181c7a01b954f89806535e1b8d58508 is 73.423ms for 944 entries. Jan 13 20:31:21.335002 systemd-journald[1087]: System Journal (/var/log/journal/c181c7a01b954f89806535e1b8d58508) is 8.0M, max 584.8M, 576.8M free. Jan 13 20:31:21.440095 systemd-journald[1087]: Received client request to flush runtime journal. Jan 13 20:31:21.440147 kernel: loop0: detected capacity change from 0 to 8 Jan 13 20:31:21.440183 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:31:21.440205 kernel: loop1: detected capacity change from 0 to 138184 Jan 13 20:31:21.332637 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:31:21.354499 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:31:21.355271 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:31:21.359201 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:31:21.397492 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:31:21.406303 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:31:21.412988 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:31:21.429081 udevadm[1140]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 20:31:21.443074 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:31:21.465887 systemd-tmpfiles[1124]: ACLs are not supported, ignoring. Jan 13 20:31:21.465908 systemd-tmpfiles[1124]: ACLs are not supported, ignoring. 
Jan 13 20:31:21.473340 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:31:21.484474 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:31:21.485886 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:31:21.486958 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:31:21.515859 kernel: loop2: detected capacity change from 0 to 210664 Jan 13 20:31:21.541976 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:31:21.547030 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:31:21.581728 systemd-tmpfiles[1151]: ACLs are not supported, ignoring. Jan 13 20:31:21.582119 systemd-tmpfiles[1151]: ACLs are not supported, ignoring. Jan 13 20:31:21.585852 kernel: loop3: detected capacity change from 0 to 141000 Jan 13 20:31:21.590012 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:31:21.682860 kernel: loop4: detected capacity change from 0 to 8 Jan 13 20:31:21.688852 kernel: loop5: detected capacity change from 0 to 138184 Jan 13 20:31:21.742850 kernel: loop6: detected capacity change from 0 to 210664 Jan 13 20:31:21.789472 kernel: loop7: detected capacity change from 0 to 141000 Jan 13 20:31:21.862244 (sd-merge)[1156]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 13 20:31:21.863650 (sd-merge)[1156]: Merged extensions into '/usr'. Jan 13 20:31:21.883455 systemd[1]: Reloading requested from client PID 1123 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:31:21.883482 systemd[1]: Reloading... Jan 13 20:31:21.995479 zram_generator::config[1178]: No configuration found. 
Jan 13 20:31:22.222989 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:31:22.249366 ldconfig[1118]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:31:22.282805 systemd[1]: Reloading finished in 398 ms. Jan 13 20:31:22.312408 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:31:22.313342 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:31:22.314785 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:31:22.325977 systemd[1]: Starting ensure-sysext.service... Jan 13 20:31:22.327979 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:31:22.333965 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:31:22.350299 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:31:22.350580 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:31:22.351419 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:31:22.351715 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 13 20:31:22.351780 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 13 20:31:22.358025 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:31:22.358155 systemd[1]: Reloading... Jan 13 20:31:22.358600 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 13 20:31:22.358608 systemd-tmpfiles[1240]: Skipping /boot Jan 13 20:31:22.382930 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:31:22.382944 systemd-tmpfiles[1240]: Skipping /boot Jan 13 20:31:22.400524 systemd-udevd[1241]: Using default interface naming scheme 'v255'. Jan 13 20:31:22.459901 zram_generator::config[1268]: No configuration found. Jan 13 20:31:22.583837 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 20:31:22.595838 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1276) Jan 13 20:31:22.602869 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:31:22.606870 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 13 20:31:22.680842 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 20:31:22.701842 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:31:22.715382 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 13 20:31:22.726557 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 13 20:31:22.726637 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 13 20:31:22.732831 kernel: Console: switching to colour dummy device 80x25 Jan 13 20:31:22.732873 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 13 20:31:22.732892 kernel: [drm] features: -context_init Jan 13 20:31:22.734836 kernel: [drm] number of scanouts: 1 Jan 13 20:31:22.734875 kernel: [drm] number of cap sets: 0 Jan 13 20:31:22.738833 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 13 20:31:22.751360 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 13 20:31:22.751466 kernel: Console: switching to colour frame buffer device 160x50 Jan 13 20:31:22.755440 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 13 20:31:22.792823 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:31:22.793578 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:31:22.794105 systemd[1]: Reloading finished in 433 ms. Jan 13 20:31:22.808972 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:31:22.814302 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:31:22.847621 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:31:22.855038 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:31:22.865155 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:31:22.867419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:31:22.874049 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 13 20:31:22.883464 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:31:22.890508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:31:22.893112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:31:22.902530 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:31:22.907138 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:31:22.911501 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:31:22.922116 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:31:22.931216 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:31:22.940075 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:31:22.940184 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:31:22.942187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:31:22.942393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:31:22.945130 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:31:22.945926 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:31:22.950317 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:31:22.950534 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:31:22.957674 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 13 20:31:22.957998 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:31:22.967350 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:31:22.969222 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:31:22.970973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:31:22.976968 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:31:22.978702 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:31:22.982301 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:31:22.992682 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:31:22.993053 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:31:23.002260 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:31:23.010332 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:31:23.014331 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:31:23.015729 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:31:23.020432 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:31:23.024373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:31:23.024577 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:31:23.030022 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:31:23.030168 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:31:23.038519 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:31:23.045251 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:31:23.052284 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:31:23.059332 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:31:23.065367 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:31:23.066730 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:31:23.078035 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:31:23.078883 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:31:23.087010 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:31:23.087167 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:31:23.093878 augenrules[1407]: No rules
Jan 13 20:31:23.094026 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:31:23.096723 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:31:23.098799 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:31:23.121940 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:31:23.123506 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:31:23.123570 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:31:23.128992 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:31:23.138035 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:31:23.147899 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:31:23.149648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:31:23.150256 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:31:23.173248 systemd-networkd[1371]: lo: Link UP
Jan 13 20:31:23.173517 systemd-networkd[1371]: lo: Gained carrier
Jan 13 20:31:23.174893 systemd-networkd[1371]: Enumeration completed
Jan 13 20:31:23.175033 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:31:23.184006 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:31:23.184015 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:31:23.185007 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:31:23.190406 systemd-networkd[1371]: eth0: Link UP
Jan 13 20:31:23.190413 systemd-networkd[1371]: eth0: Gained carrier
Jan 13 20:31:23.190444 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:31:23.199130 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:31:23.204871 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:31:23.205631 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:31:23.209538 systemd-networkd[1371]: eth0: DHCPv4 address 172.24.4.69/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 20:31:23.216982 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:31:23.230071 systemd-resolved[1373]: Positive Trust Anchors:
Jan 13 20:31:23.230091 systemd-resolved[1373]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:31:23.230134 systemd-resolved[1373]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:31:23.234225 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:31:23.238016 systemd-resolved[1373]: Using system hostname 'ci-4186-1-0-0-dbcf9e2b85.novalocal'.
Jan 13 20:31:23.241653 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:31:23.243989 systemd[1]: Reached target network.target - Network.
Jan 13 20:31:23.244517 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:31:23.259975 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:31:23.267688 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:31:23.270579 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:31:23.299722 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:31:23.302379 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:31:23.303917 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:31:23.305871 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:31:23.307443 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:31:23.309043 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:31:23.310475 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:31:23.312097 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:31:23.312125 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:31:23.314174 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:31:23.317953 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:31:23.321539 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:31:23.325065 systemd-timesyncd[1421]: Contacted time server 194.57.169.1:123 (0.flatcar.pool.ntp.org).
Jan 13 20:31:23.325163 systemd-timesyncd[1421]: Initial clock synchronization to Mon 2025-01-13 20:31:23.498029 UTC.
Jan 13 20:31:23.334027 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:31:23.337141 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:31:23.339194 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:31:23.339899 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:31:23.342182 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:31:23.342302 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:31:23.346886 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:31:23.351662 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 20:31:23.357019 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:31:23.374320 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:31:23.378869 jq[1443]: false
Jan 13 20:31:23.382629 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:31:23.383386 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:31:23.392327 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:31:23.399946 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:31:23.404995 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:31:23.415054 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:31:23.428647 extend-filesystems[1444]: Found loop4
Jan 13 20:31:23.428647 extend-filesystems[1444]: Found loop5
Jan 13 20:31:23.428647 extend-filesystems[1444]: Found loop6
Jan 13 20:31:23.428647 extend-filesystems[1444]: Found loop7
Jan 13 20:31:23.428647 extend-filesystems[1444]: Found vda
Jan 13 20:31:23.428647 extend-filesystems[1444]: Found vda1
Jan 13 20:31:23.428647 extend-filesystems[1444]: Found vda2
Jan 13 20:31:23.428647 extend-filesystems[1444]: Found vda3
Jan 13 20:31:23.428647 extend-filesystems[1444]: Found usr
Jan 13 20:31:23.428647 extend-filesystems[1444]: Found vda4
Jan 13 20:31:23.428647 extend-filesystems[1444]: Found vda6
Jan 13 20:31:23.428647 extend-filesystems[1444]: Found vda7
Jan 13 20:31:23.428647 extend-filesystems[1444]: Found vda9
Jan 13 20:31:23.428647 extend-filesystems[1444]: Checking size of /dev/vda9
Jan 13 20:31:23.428095 dbus-daemon[1440]: [system] SELinux support is enabled
Jan 13 20:31:23.444305 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:31:23.454849 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:31:23.455386 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:31:23.464183 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:31:23.478960 extend-filesystems[1444]: Resized partition /dev/vda9
Jan 13 20:31:23.477992 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:31:23.490065 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:31:23.482763 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:31:23.495980 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:31:23.496151 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:31:23.496440 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:31:23.496583 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:31:23.502854 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jan 13 20:31:23.512840 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jan 13 20:31:23.566494 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1279)
Jan 13 20:31:23.566538 update_engine[1462]: I20250113 20:31:23.551990 1462 main.cc:92] Flatcar Update Engine starting
Jan 13 20:31:23.566538 update_engine[1462]: I20250113 20:31:23.559458 1462 update_check_scheduler.cc:74] Next update check in 4m10s
Jan 13 20:31:23.566842 jq[1464]: true
Jan 13 20:31:23.515339 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:31:23.515510 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:31:23.529312 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:31:23.567278 jq[1470]: true
Jan 13 20:31:23.529345 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:31:23.534642 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:31:23.534676 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:31:23.554472 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:31:23.559399 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:31:23.570045 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:31:23.570045 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:31:23.570045 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jan 13 20:31:23.585214 extend-filesystems[1444]: Resized filesystem in /dev/vda9
Jan 13 20:31:23.573070 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:31:23.588177 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:31:23.589251 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:31:23.598903 tar[1468]: linux-amd64/helm
Jan 13 20:31:23.626270 systemd-logind[1457]: New seat seat0.
Jan 13 20:31:23.633686 systemd-logind[1457]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 20:31:23.633713 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 20:31:23.633966 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:31:23.676178 bash[1498]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:31:23.676358 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:31:23.688067 systemd[1]: Starting sshkeys.service...
Jan 13 20:31:23.728899 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 20:31:23.739180 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 20:31:23.799737 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:31:23.835943 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:31:24.020747 containerd[1476]: time="2025-01-13T20:31:24.020614811Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:31:24.090650 containerd[1476]: time="2025-01-13T20:31:24.090586228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:24.093123 containerd[1476]: time="2025-01-13T20:31:24.092999826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:24.093123 containerd[1476]: time="2025-01-13T20:31:24.093052693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:31:24.093123 containerd[1476]: time="2025-01-13T20:31:24.093076808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:31:24.093582 containerd[1476]: time="2025-01-13T20:31:24.093529824Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:31:24.093751 containerd[1476]: time="2025-01-13T20:31:24.093555525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:24.093943 containerd[1476]: time="2025-01-13T20:31:24.093875223Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:24.093943 containerd[1476]: time="2025-01-13T20:31:24.093902050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:24.094376 containerd[1476]: time="2025-01-13T20:31:24.094229547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:24.094376 containerd[1476]: time="2025-01-13T20:31:24.094251687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:24.094376 containerd[1476]: time="2025-01-13T20:31:24.094287225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:24.094376 containerd[1476]: time="2025-01-13T20:31:24.094301627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:24.094687 containerd[1476]: time="2025-01-13T20:31:24.094562183Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:24.095014 containerd[1476]: time="2025-01-13T20:31:24.094995874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:24.095265 containerd[1476]: time="2025-01-13T20:31:24.095216164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:24.095265 containerd[1476]: time="2025-01-13T20:31:24.095237853Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:31:24.095530 containerd[1476]: time="2025-01-13T20:31:24.095461992Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:31:24.095702 containerd[1476]: time="2025-01-13T20:31:24.095618913Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:31:24.102563 containerd[1476]: time="2025-01-13T20:31:24.102521442Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:31:24.104750 containerd[1476]: time="2025-01-13T20:31:24.102730299Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:31:24.104750 containerd[1476]: time="2025-01-13T20:31:24.102767669Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:31:24.104750 containerd[1476]: time="2025-01-13T20:31:24.102803177Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:31:24.104750 containerd[1476]: time="2025-01-13T20:31:24.102839553Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:31:24.104750 containerd[1476]: time="2025-01-13T20:31:24.103014960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:31:24.104750 containerd[1476]: time="2025-01-13T20:31:24.103275630Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:31:24.104750 containerd[1476]: time="2025-01-13T20:31:24.103383441Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:31:24.104750 containerd[1476]: time="2025-01-13T20:31:24.103405417Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:31:24.104750 containerd[1476]: time="2025-01-13T20:31:24.103421773Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:31:24.104750 containerd[1476]: time="2025-01-13T20:31:24.103439102Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:31:24.104750 containerd[1476]: time="2025-01-13T20:31:24.103456441Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:31:24.104750 containerd[1476]: time="2025-01-13T20:31:24.103471232Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:31:24.104750 containerd[1476]: time="2025-01-13T20:31:24.103487619Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:31:24.104750 containerd[1476]: time="2025-01-13T20:31:24.103504138Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:31:24.105090 containerd[1476]: time="2025-01-13T20:31:24.103518981Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:31:24.105090 containerd[1476]: time="2025-01-13T20:31:24.103533710Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:31:24.105090 containerd[1476]: time="2025-01-13T20:31:24.103551294Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:31:24.105090 containerd[1476]: time="2025-01-13T20:31:24.103575716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105090 containerd[1476]: time="2025-01-13T20:31:24.103590957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105090 containerd[1476]: time="2025-01-13T20:31:24.103606126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105090 containerd[1476]: time="2025-01-13T20:31:24.103625952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105090 containerd[1476]: time="2025-01-13T20:31:24.103641316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105090 containerd[1476]: time="2025-01-13T20:31:24.103656495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105090 containerd[1476]: time="2025-01-13T20:31:24.103670651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105090 containerd[1476]: time="2025-01-13T20:31:24.103687397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105090 containerd[1476]: time="2025-01-13T20:31:24.103703139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105090 containerd[1476]: time="2025-01-13T20:31:24.103737182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105090 containerd[1476]: time="2025-01-13T20:31:24.103758922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105393 containerd[1476]: time="2025-01-13T20:31:24.103774512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105393 containerd[1476]: time="2025-01-13T20:31:24.103788810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105393 containerd[1476]: time="2025-01-13T20:31:24.103806109Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:31:24.105393 containerd[1476]: time="2025-01-13T20:31:24.103859467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105393 containerd[1476]: time="2025-01-13T20:31:24.103880019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105393 containerd[1476]: time="2025-01-13T20:31:24.103893684Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:31:24.105393 containerd[1476]: time="2025-01-13T20:31:24.103937851Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:31:24.105393 containerd[1476]: time="2025-01-13T20:31:24.103956531Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:31:24.105393 containerd[1476]: time="2025-01-13T20:31:24.103969489Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:31:24.105393 containerd[1476]: time="2025-01-13T20:31:24.103983870Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:31:24.105393 containerd[1476]: time="2025-01-13T20:31:24.103996736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105393 containerd[1476]: time="2025-01-13T20:31:24.104010646Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:31:24.105393 containerd[1476]: time="2025-01-13T20:31:24.104024658Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:31:24.105393 containerd[1476]: time="2025-01-13T20:31:24.104041690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:31:24.105674 containerd[1476]: time="2025-01-13T20:31:24.104336188Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:31:24.105674 containerd[1476]: time="2025-01-13T20:31:24.104392791Z" level=info msg="Connect containerd service"
Jan 13 20:31:24.105674 containerd[1476]: time="2025-01-13T20:31:24.104428165Z" level=info msg="using legacy CRI server"
Jan 13 20:31:24.105674 containerd[1476]: time="2025-01-13T20:31:24.104436189Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:31:24.105674 containerd[1476]: time="2025-01-13T20:31:24.104577676Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:31:24.109233 containerd[1476]: time="2025-01-13T20:31:24.109207366Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:31:24.109654 containerd[1476]: time="2025-01-13T20:31:24.109611917Z" level=info msg="Start subscribing containerd event"
Jan 13 20:31:24.110184 containerd[1476]: time="2025-01-13T20:31:24.110168414Z" level=info msg="Start recovering state"
Jan 13 20:31:24.110346 containerd[1476]: time="2025-01-13T20:31:24.110313606Z" level=info msg="Start event monitor"
Jan 13 20:31:24.110435 containerd[1476]: time="2025-01-13T20:31:24.110398009Z" level=info msg="Start snapshots syncer"
Jan 13 20:31:24.110528 containerd[1476]: time="2025-01-13T20:31:24.110512832Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:31:24.110603 containerd[1476]: time="2025-01-13T20:31:24.110572935Z" level=info msg="Start streaming server"
Jan 13 20:31:24.111041 containerd[1476]: time="2025-01-13T20:31:24.110997056Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:31:24.111236 containerd[1476]: time="2025-01-13T20:31:24.111219311Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:31:24.111398 containerd[1476]: time="2025-01-13T20:31:24.111362077Z" level=info msg="containerd successfully booted in 0.091553s"
Jan 13 20:31:24.111454 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:31:24.226067 systemd-networkd[1371]: eth0: Gained IPv6LL
Jan 13 20:31:24.230721 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:31:24.238099 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:31:24.253304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:31:24.265580 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:31:24.338980 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:31:24.367168 tar[1468]: linux-amd64/LICENSE
Jan 13 20:31:24.367382 tar[1468]: linux-amd64/README.md
Jan 13 20:31:24.385325 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 20:31:24.444220 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:31:24.469554 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:31:24.478615 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:31:24.487293 systemd[1]: Started sshd@0-172.24.4.69:22-172.24.4.1:40638.service - OpenSSH per-connection server daemon (172.24.4.1:40638).
Jan 13 20:31:24.492760 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:31:24.493096 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:31:24.507303 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:31:24.525823 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:31:24.538362 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:31:24.544026 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 20:31:24.546166 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:31:25.427248 sshd[1544]: Accepted publickey for core from 172.24.4.1 port 40638 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:31:25.434904 sshd-session[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:25.477187 systemd-logind[1457]: New session 1 of user core.
Jan 13 20:31:25.482448 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:31:25.498266 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:31:25.521225 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:31:25.533213 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:31:25.550238 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:31:25.669641 systemd[1555]: Queued start job for default target default.target.
Jan 13 20:31:25.676174 systemd[1555]: Created slice app.slice - User Application Slice.
Jan 13 20:31:25.676296 systemd[1555]: Reached target paths.target - Paths.
Jan 13 20:31:25.676315 systemd[1555]: Reached target timers.target - Timers.
Jan 13 20:31:25.679964 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:31:25.698868 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:31:25.698931 systemd[1555]: Reached target sockets.target - Sockets.
Jan 13 20:31:25.698948 systemd[1555]: Reached target basic.target - Basic System.
Jan 13 20:31:25.698993 systemd[1555]: Reached target default.target - Main User Target.
Jan 13 20:31:25.699022 systemd[1555]: Startup finished in 141ms.
Jan 13 20:31:25.699916 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:31:25.712264 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:31:25.889136 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:31:25.897518 (kubelet)[1569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:31:26.218551 systemd[1]: Started sshd@1-172.24.4.69:22-172.24.4.1:54650.service - OpenSSH per-connection server daemon (172.24.4.1:54650).
Jan 13 20:31:27.309488 kubelet[1569]: E0113 20:31:27.309409 1569 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:31:27.313637 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:31:27.314018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:31:27.314536 systemd[1]: kubelet.service: Consumed 1.862s CPU time.
Jan 13 20:31:28.480639 sshd[1576]: Accepted publickey for core from 172.24.4.1 port 54650 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:31:28.483400 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:28.495583 systemd-logind[1457]: New session 2 of user core.
Jan 13 20:31:28.505308 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:31:29.123494 sshd[1584]: Connection closed by 172.24.4.1 port 54650
Jan 13 20:31:29.126031 sshd-session[1576]: pam_unix(sshd:session): session closed for user core
Jan 13 20:31:29.140939 systemd[1]: sshd@1-172.24.4.69:22-172.24.4.1:54650.service: Deactivated successfully.
Jan 13 20:31:29.144547 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:31:29.146765 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:31:29.155572 systemd[1]: Started sshd@2-172.24.4.69:22-172.24.4.1:54660.service - OpenSSH per-connection server daemon (172.24.4.1:54660).
Jan 13 20:31:29.170665 systemd-logind[1457]: Removed session 2.
Jan 13 20:31:29.606747 agetty[1551]: failed to open credentials directory
Jan 13 20:31:29.607096 agetty[1549]: failed to open credentials directory
Jan 13 20:31:29.638617 login[1549]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 20:31:29.648990 login[1551]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 20:31:29.652319 systemd-logind[1457]: New session 3 of user core.
Jan 13 20:31:29.673307 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:31:29.680930 systemd-logind[1457]: New session 4 of user core.
Jan 13 20:31:29.688292 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:31:30.444405 coreos-metadata[1439]: Jan 13 20:31:30.444 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 20:31:30.453877 sshd[1589]: Accepted publickey for core from 172.24.4.1 port 54660 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:31:30.456515 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:30.469418 systemd-logind[1457]: New session 5 of user core.
Jan 13 20:31:30.483611 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:31:30.493480 coreos-metadata[1439]: Jan 13 20:31:30.493 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 13 20:31:30.684944 coreos-metadata[1439]: Jan 13 20:31:30.684 INFO Fetch successful
Jan 13 20:31:30.684944 coreos-metadata[1439]: Jan 13 20:31:30.684 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 20:31:30.702046 coreos-metadata[1439]: Jan 13 20:31:30.701 INFO Fetch successful
Jan 13 20:31:30.702046 coreos-metadata[1439]: Jan 13 20:31:30.701 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 13 20:31:30.714762 coreos-metadata[1439]: Jan 13 20:31:30.714 INFO Fetch successful
Jan 13 20:31:30.714762 coreos-metadata[1439]: Jan 13 20:31:30.714 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 13 20:31:30.727648 coreos-metadata[1439]: Jan 13 20:31:30.727 INFO Fetch successful
Jan 13 20:31:30.727648 coreos-metadata[1439]: Jan 13 20:31:30.727 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 13 20:31:30.742140 coreos-metadata[1439]: Jan 13 20:31:30.742 INFO Fetch successful
Jan 13 20:31:30.742140 coreos-metadata[1439]: Jan 13 20:31:30.742 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 13 20:31:30.755136 coreos-metadata[1439]: Jan 13 20:31:30.755 INFO Fetch successful
Jan 13 20:31:30.796711 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 20:31:30.798215 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:31:30.860762 coreos-metadata[1502]: Jan 13 20:31:30.860 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 20:31:30.902682 coreos-metadata[1502]: Jan 13 20:31:30.902 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 13 20:31:30.917250 coreos-metadata[1502]: Jan 13 20:31:30.917 INFO Fetch successful
Jan 13 20:31:30.917363 coreos-metadata[1502]: Jan 13 20:31:30.917 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 13 20:31:30.931292 coreos-metadata[1502]: Jan 13 20:31:30.931 INFO Fetch successful
Jan 13 20:31:30.936622 unknown[1502]: wrote ssh authorized keys file for user: core
Jan 13 20:31:30.973590 update-ssh-keys[1626]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:31:30.974666 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 20:31:30.979754 systemd[1]: Finished sshkeys.service.
Jan 13 20:31:30.983574 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:31:30.985988 systemd[1]: Startup finished in 1.224s (kernel) + 15.328s (initrd) + 10.913s (userspace) = 27.466s.
Jan 13 20:31:31.095989 sshd[1617]: Connection closed by 172.24.4.1 port 54660
Jan 13 20:31:31.097106 sshd-session[1589]: pam_unix(sshd:session): session closed for user core
Jan 13 20:31:31.103442 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:31:31.103751 systemd[1]: sshd@2-172.24.4.69:22-172.24.4.1:54660.service: Deactivated successfully.
Jan 13 20:31:31.107135 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:31:31.111075 systemd-logind[1457]: Removed session 5.
Jan 13 20:31:37.565220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:31:37.576355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:31:37.748189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:31:37.761056 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:31:37.812985 kubelet[1640]: E0113 20:31:37.812901 1640 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:31:37.816080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:31:37.816209 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:31:41.259330 systemd[1]: Started sshd@3-172.24.4.69:22-172.24.4.1:36880.service - OpenSSH per-connection server daemon (172.24.4.1:36880).
Jan 13 20:31:42.701432 sshd[1649]: Accepted publickey for core from 172.24.4.1 port 36880 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:31:42.704212 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:42.715335 systemd-logind[1457]: New session 6 of user core.
Jan 13 20:31:42.726288 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:31:43.345524 sshd[1651]: Connection closed by 172.24.4.1 port 36880
Jan 13 20:31:43.344094 sshd-session[1649]: pam_unix(sshd:session): session closed for user core
Jan 13 20:31:43.358031 systemd[1]: sshd@3-172.24.4.69:22-172.24.4.1:36880.service: Deactivated successfully.
Jan 13 20:31:43.362361 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:31:43.364559 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:31:43.372390 systemd[1]: Started sshd@4-172.24.4.69:22-172.24.4.1:36886.service - OpenSSH per-connection server daemon (172.24.4.1:36886).
Jan 13 20:31:43.375050 systemd-logind[1457]: Removed session 6.
Jan 13 20:31:44.884363 sshd[1656]: Accepted publickey for core from 172.24.4.1 port 36886 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:31:44.887092 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:44.898680 systemd-logind[1457]: New session 7 of user core.
Jan 13 20:31:44.906132 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:31:45.526264 sshd[1658]: Connection closed by 172.24.4.1 port 36886
Jan 13 20:31:45.528724 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
Jan 13 20:31:45.540217 systemd[1]: sshd@4-172.24.4.69:22-172.24.4.1:36886.service: Deactivated successfully.
Jan 13 20:31:45.543272 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:31:45.545417 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:31:45.552406 systemd[1]: Started sshd@5-172.24.4.69:22-172.24.4.1:39938.service - OpenSSH per-connection server daemon (172.24.4.1:39938).
Jan 13 20:31:45.555933 systemd-logind[1457]: Removed session 7.
Jan 13 20:31:47.069975 sshd[1663]: Accepted publickey for core from 172.24.4.1 port 39938 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:31:47.072670 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:47.085329 systemd-logind[1457]: New session 8 of user core.
Jan 13 20:31:47.092113 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 20:31:47.942872 sshd[1665]: Connection closed by 172.24.4.1 port 39938
Jan 13 20:31:47.942666 sshd-session[1663]: pam_unix(sshd:session): session closed for user core
Jan 13 20:31:47.954546 systemd[1]: sshd@5-172.24.4.69:22-172.24.4.1:39938.service: Deactivated successfully.
Jan 13 20:31:47.957912 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 20:31:47.960078 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 20:31:47.961971 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit.
Jan 13 20:31:47.971241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:31:47.975362 systemd[1]: Started sshd@6-172.24.4.69:22-172.24.4.1:39940.service - OpenSSH per-connection server daemon (172.24.4.1:39940).
Jan 13 20:31:47.979806 systemd-logind[1457]: Removed session 8.
Jan 13 20:31:48.304080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:31:48.310225 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:31:48.376399 kubelet[1680]: E0113 20:31:48.376359 1680 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:31:48.381391 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:31:48.381721 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:31:49.189628 sshd[1671]: Accepted publickey for core from 172.24.4.1 port 39940 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:31:49.192400 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:49.203523 systemd-logind[1457]: New session 9 of user core.
Jan 13 20:31:49.211094 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 20:31:49.594559 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:31:49.595384 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:31:50.327143 (dockerd)[1708]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 20:31:50.327279 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 20:31:50.798220 dockerd[1708]: time="2025-01-13T20:31:50.797754234Z" level=info msg="Starting up"
Jan 13 20:31:50.982258 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1377677555-merged.mount: Deactivated successfully.
Jan 13 20:31:51.028447 systemd[1]: var-lib-docker-metacopy\x2dcheck672083681-merged.mount: Deactivated successfully.
Jan 13 20:31:51.082549 dockerd[1708]: time="2025-01-13T20:31:51.081886609Z" level=info msg="Loading containers: start."
Jan 13 20:31:51.301976 kernel: Initializing XFRM netlink socket
Jan 13 20:31:51.483414 systemd-networkd[1371]: docker0: Link UP
Jan 13 20:31:51.532984 dockerd[1708]: time="2025-01-13T20:31:51.531663377Z" level=info msg="Loading containers: done."
Jan 13 20:31:51.582691 dockerd[1708]: time="2025-01-13T20:31:51.582615787Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 20:31:51.583199 dockerd[1708]: time="2025-01-13T20:31:51.583152820Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 13 20:31:51.583610 dockerd[1708]: time="2025-01-13T20:31:51.583565305Z" level=info msg="Daemon has completed initialization"
Jan 13 20:31:51.672570 dockerd[1708]: time="2025-01-13T20:31:51.672431122Z" level=info msg="API listen on /run/docker.sock"
Jan 13 20:31:51.673887 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 20:31:51.976664 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3539108957-merged.mount: Deactivated successfully.
Jan 13 20:31:53.620295 containerd[1476]: time="2025-01-13T20:31:53.620186784Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Jan 13 20:31:54.412342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3385555504.mount: Deactivated successfully.
Jan 13 20:31:56.504927 containerd[1476]: time="2025-01-13T20:31:56.504083330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:31:56.508413 containerd[1476]: time="2025-01-13T20:31:56.507825769Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675650"
Jan 13 20:31:56.509684 containerd[1476]: time="2025-01-13T20:31:56.509620727Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:31:56.513129 containerd[1476]: time="2025-01-13T20:31:56.513085929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:31:56.514472 containerd[1476]: time="2025-01-13T20:31:56.514312223Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.894084356s"
Jan 13 20:31:56.514472 containerd[1476]: time="2025-01-13T20:31:56.514343842Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\""
Jan 13 20:31:56.539391 containerd[1476]: time="2025-01-13T20:31:56.539354059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Jan 13 20:31:58.632583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 13 20:31:58.641010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:31:58.767053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:31:58.772256 (kubelet)[1971]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:31:59.108180 containerd[1476]: time="2025-01-13T20:31:59.107957680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:31:59.113047 containerd[1476]: time="2025-01-13T20:31:59.112620796Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606417"
Jan 13 20:31:59.115140 containerd[1476]: time="2025-01-13T20:31:59.114905122Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:31:59.125854 containerd[1476]: time="2025-01-13T20:31:59.123496652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:31:59.145579 containerd[1476]: time="2025-01-13T20:31:59.145460709Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.605839718s"
Jan 13 20:31:59.145579 containerd[1476]: time="2025-01-13T20:31:59.145548448Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\""
Jan 13 20:31:59.159258 kubelet[1971]: E0113 20:31:59.159137 1971 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:31:59.164161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:31:59.164493 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:31:59.188333 containerd[1476]: time="2025-01-13T20:31:59.188158793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Jan 13 20:32:00.980312 containerd[1476]: time="2025-01-13T20:32:00.980136115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:00.982283 containerd[1476]: time="2025-01-13T20:32:00.982233373Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783043"
Jan 13 20:32:00.984836 containerd[1476]: time="2025-01-13T20:32:00.983825265Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:00.987988 containerd[1476]: time="2025-01-13T20:32:00.987321544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:00.988764 containerd[1476]: time="2025-01-13T20:32:00.988739167Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.800538013s"
Jan 13 20:32:00.988873 containerd[1476]: time="2025-01-13T20:32:00.988855083Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\""
Jan 13 20:32:01.013929 containerd[1476]: time="2025-01-13T20:32:01.013876608Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Jan 13 20:32:02.590872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3955396062.mount: Deactivated successfully.
Jan 13 20:32:03.472593 containerd[1476]: time="2025-01-13T20:32:03.472449787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:03.477028 containerd[1476]: time="2025-01-13T20:32:03.476872789Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478"
Jan 13 20:32:03.479316 containerd[1476]: time="2025-01-13T20:32:03.479173521Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:03.484471 containerd[1476]: time="2025-01-13T20:32:03.484364234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:03.486498 containerd[1476]: time="2025-01-13T20:32:03.486441570Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.472514136s"
Jan 13 20:32:03.487481 containerd[1476]: time="2025-01-13T20:32:03.486657962Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Jan 13 20:32:03.541500 containerd[1476]: time="2025-01-13T20:32:03.541418066Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 20:32:04.225300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount637612255.mount: Deactivated successfully.
Jan 13 20:32:05.451980 containerd[1476]: time="2025-01-13T20:32:05.451761295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:05.456102 containerd[1476]: time="2025-01-13T20:32:05.456047659Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jan 13 20:32:05.461248 containerd[1476]: time="2025-01-13T20:32:05.461153918Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:05.466374 containerd[1476]: time="2025-01-13T20:32:05.466328047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:05.467790 containerd[1476]: time="2025-01-13T20:32:05.467530050Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.926041677s"
Jan 13 20:32:05.467790 containerd[1476]: time="2025-01-13T20:32:05.467572337Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 13 20:32:05.490732 containerd[1476]: time="2025-01-13T20:32:05.490698369Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 13 20:32:06.117348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3853804672.mount: Deactivated successfully.
Jan 13 20:32:06.165862 containerd[1476]: time="2025-01-13T20:32:06.165695861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:06.167980 containerd[1476]: time="2025-01-13T20:32:06.167754143Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jan 13 20:32:06.169943 containerd[1476]: time="2025-01-13T20:32:06.169742601Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:06.175706 containerd[1476]: time="2025-01-13T20:32:06.175550924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:06.178272 containerd[1476]: time="2025-01-13T20:32:06.177972933Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 687.081275ms"
Jan 13 20:32:06.178272 containerd[1476]: time="2025-01-13T20:32:06.178047407Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 13 20:32:06.232241 containerd[1476]: time="2025-01-13T20:32:06.231711682Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 13 20:32:06.905159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount930926298.mount: Deactivated successfully.
Jan 13 20:32:08.539524 update_engine[1462]: I20250113 20:32:08.538178 1462 update_attempter.cc:509] Updating boot flags...
Jan 13 20:32:08.717462 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2082)
Jan 13 20:32:09.003867 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2082)
Jan 13 20:32:09.194280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 13 20:32:09.204734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:32:09.251168 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2082)
Jan 13 20:32:09.580201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:32:09.592692 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:32:09.687975 kubelet[2111]: E0113 20:32:09.687931 2111 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:32:09.689883 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:32:09.690035 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:32:10.844149 containerd[1476]: time="2025-01-13T20:32:10.843892781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:10.845769 containerd[1476]: time="2025-01-13T20:32:10.845508159Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 13 20:32:10.847202 containerd[1476]: time="2025-01-13T20:32:10.847139639Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:10.851003 containerd[1476]: time="2025-01-13T20:32:10.850961460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:10.852394 containerd[1476]: time="2025-01-13T20:32:10.852257832Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest 
\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.620485546s" Jan 13 20:32:10.852394 containerd[1476]: time="2025-01-13T20:32:10.852286681Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 13 20:32:14.746340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:32:14.757411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:32:14.783724 systemd[1]: Reloading requested from client PID 2202 ('systemctl') (unit session-9.scope)... Jan 13 20:32:14.783738 systemd[1]: Reloading... Jan 13 20:32:14.879907 zram_generator::config[2241]: No configuration found. Jan 13 20:32:15.019721 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:32:15.105050 systemd[1]: Reloading finished in 320 ms. Jan 13 20:32:15.166852 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:32:15.166939 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:32:15.167186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:32:15.173053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:32:15.276922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:32:15.288223 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:32:15.462979 kubelet[2309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:32:15.462979 kubelet[2309]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:32:15.462979 kubelet[2309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:32:15.462979 kubelet[2309]: I0113 20:32:15.462761 2309 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:32:16.112576 kubelet[2309]: I0113 20:32:16.112515 2309 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:32:16.112576 kubelet[2309]: I0113 20:32:16.112544 2309 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:32:16.112917 kubelet[2309]: I0113 20:32:16.112751 2309 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:32:16.145539 kubelet[2309]: I0113 20:32:16.145266 2309 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:32:16.145539 kubelet[2309]: E0113 20:32:16.145486 2309 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:16.163562 kubelet[2309]: I0113 20:32:16.163526 2309 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:32:16.164064 kubelet[2309]: I0113 20:32:16.164010 2309 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:32:16.164524 kubelet[2309]: I0113 20:32:16.164070 2309 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-0-dbcf9e2b85.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:32:16.164618 kubelet[2309]: I0113 20:32:16.164554 2309 topology_manager.go:138] "Creating topology manager with none 
policy" Jan 13 20:32:16.164618 kubelet[2309]: I0113 20:32:16.164582 2309 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:32:16.164887 kubelet[2309]: I0113 20:32:16.164803 2309 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:32:16.166759 kubelet[2309]: I0113 20:32:16.166722 2309 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:32:16.167035 kubelet[2309]: I0113 20:32:16.166766 2309 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:32:16.167035 kubelet[2309]: I0113 20:32:16.166891 2309 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:32:16.167035 kubelet[2309]: I0113 20:32:16.166932 2309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:32:16.168674 kubelet[2309]: W0113 20:32:16.168636 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-0-dbcf9e2b85.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:16.169830 kubelet[2309]: E0113 20:32:16.168794 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-0-dbcf9e2b85.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:16.177153 kubelet[2309]: W0113 20:32:16.177063 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:16.177214 kubelet[2309]: E0113 20:32:16.177171 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://172.24.4.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:16.177788 kubelet[2309]: I0113 20:32:16.177383 2309 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:32:16.180772 kubelet[2309]: I0113 20:32:16.180739 2309 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:32:16.181445 kubelet[2309]: W0113 20:32:16.181110 2309 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:32:16.182759 kubelet[2309]: I0113 20:32:16.182668 2309 server.go:1264] "Started kubelet" Jan 13 20:32:16.185847 kubelet[2309]: I0113 20:32:16.185786 2309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:32:16.190010 kubelet[2309]: E0113 20:32:16.188933 2309 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.69:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.69:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-0-dbcf9e2b85.novalocal.181a5ab42bf18ff4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-0-dbcf9e2b85.novalocal,UID:ci-4186-1-0-0-dbcf9e2b85.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-0-dbcf9e2b85.novalocal,},FirstTimestamp:2025-01-13 20:32:16.182611956 +0000 UTC m=+0.890427509,LastTimestamp:2025-01-13 20:32:16.182611956 +0000 UTC m=+0.890427509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-0-dbcf9e2b85.novalocal,}" Jan 13 20:32:16.190010 kubelet[2309]: I0113 20:32:16.189065 2309 server.go:163] "Starting to listen" 
address="0.0.0.0" port=10250 Jan 13 20:32:16.190333 kubelet[2309]: I0113 20:32:16.190319 2309 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:32:16.191350 kubelet[2309]: I0113 20:32:16.191312 2309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:32:16.191586 kubelet[2309]: I0113 20:32:16.191574 2309 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:32:16.193831 kubelet[2309]: I0113 20:32:16.193784 2309 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:32:16.195563 kubelet[2309]: I0113 20:32:16.195505 2309 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:32:16.197479 kubelet[2309]: I0113 20:32:16.195711 2309 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:32:16.199437 kubelet[2309]: W0113 20:32:16.199386 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:16.199518 kubelet[2309]: E0113 20:32:16.199508 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:16.199702 kubelet[2309]: E0113 20:32:16.199677 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-0-dbcf9e2b85.novalocal?timeout=10s\": dial tcp 172.24.4.69:6443: connect: connection refused" interval="200ms" Jan 13 20:32:16.200358 kubelet[2309]: I0113 20:32:16.200342 2309 factory.go:221] Registration of the systemd container factory 
successfully Jan 13 20:32:16.200523 kubelet[2309]: I0113 20:32:16.200505 2309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:32:16.202917 kubelet[2309]: I0113 20:32:16.202901 2309 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:32:16.222084 kubelet[2309]: I0113 20:32:16.222053 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:32:16.223029 kubelet[2309]: I0113 20:32:16.223013 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:32:16.223107 kubelet[2309]: I0113 20:32:16.223098 2309 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:32:16.223175 kubelet[2309]: I0113 20:32:16.223166 2309 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:32:16.223268 kubelet[2309]: E0113 20:32:16.223251 2309 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:32:16.233192 kubelet[2309]: W0113 20:32:16.233103 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:16.233329 kubelet[2309]: E0113 20:32:16.233216 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:16.243965 kubelet[2309]: I0113 20:32:16.243920 2309 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:32:16.243965 kubelet[2309]: I0113 
20:32:16.243958 2309 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:32:16.244072 kubelet[2309]: I0113 20:32:16.243995 2309 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:32:16.248454 kubelet[2309]: I0113 20:32:16.248414 2309 policy_none.go:49] "None policy: Start" Jan 13 20:32:16.249713 kubelet[2309]: I0113 20:32:16.249673 2309 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:32:16.249768 kubelet[2309]: I0113 20:32:16.249724 2309 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:32:16.262715 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:32:16.278438 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:32:16.283475 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:32:16.290915 kubelet[2309]: I0113 20:32:16.290873 2309 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:32:16.291210 kubelet[2309]: I0113 20:32:16.291033 2309 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:32:16.291210 kubelet[2309]: I0113 20:32:16.291130 2309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:32:16.295622 kubelet[2309]: E0113 20:32:16.294103 2309 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-0-dbcf9e2b85.novalocal\" not found" Jan 13 20:32:16.296263 kubelet[2309]: I0113 20:32:16.295914 2309 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.297085 kubelet[2309]: E0113 20:32:16.297030 2309 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.69:6443/api/v1/nodes\": dial tcp 172.24.4.69:6443: 
connect: connection refused" node="ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.324123 kubelet[2309]: I0113 20:32:16.324071 2309 topology_manager.go:215] "Topology Admit Handler" podUID="dcf9ecafc35b18e23eeea0a14da7c0c3" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.326423 kubelet[2309]: I0113 20:32:16.326256 2309 topology_manager.go:215] "Topology Admit Handler" podUID="33cfeff1eb3f467cef16fe09f1998241" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.329232 kubelet[2309]: I0113 20:32:16.328526 2309 topology_manager.go:215] "Topology Admit Handler" podUID="ec2ce36e44c884beb87ffee974afa407" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.341053 systemd[1]: Created slice kubepods-burstable-poddcf9ecafc35b18e23eeea0a14da7c0c3.slice - libcontainer container kubepods-burstable-poddcf9ecafc35b18e23eeea0a14da7c0c3.slice. Jan 13 20:32:16.366866 systemd[1]: Created slice kubepods-burstable-pod33cfeff1eb3f467cef16fe09f1998241.slice - libcontainer container kubepods-burstable-pod33cfeff1eb3f467cef16fe09f1998241.slice. Jan 13 20:32:16.379369 systemd[1]: Created slice kubepods-burstable-podec2ce36e44c884beb87ffee974afa407.slice - libcontainer container kubepods-burstable-podec2ce36e44c884beb87ffee974afa407.slice. 
Jan 13 20:32:16.401556 kubelet[2309]: E0113 20:32:16.401457 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-0-dbcf9e2b85.novalocal?timeout=10s\": dial tcp 172.24.4.69:6443: connect: connection refused" interval="400ms" Jan 13 20:32:16.497213 kubelet[2309]: I0113 20:32:16.497119 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dcf9ecafc35b18e23eeea0a14da7c0c3-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"dcf9ecafc35b18e23eeea0a14da7c0c3\") " pod="kube-system/kube-scheduler-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.497213 kubelet[2309]: I0113 20:32:16.497207 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33cfeff1eb3f467cef16fe09f1998241-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"33cfeff1eb3f467cef16fe09f1998241\") " pod="kube-system/kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.498063 kubelet[2309]: I0113 20:32:16.497259 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33cfeff1eb3f467cef16fe09f1998241-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"33cfeff1eb3f467cef16fe09f1998241\") " pod="kube-system/kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.498063 kubelet[2309]: I0113 20:32:16.497402 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ec2ce36e44c884beb87ffee974afa407-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"ec2ce36e44c884beb87ffee974afa407\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.498063 kubelet[2309]: I0113 20:32:16.497475 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec2ce36e44c884beb87ffee974afa407-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"ec2ce36e44c884beb87ffee974afa407\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.498063 kubelet[2309]: I0113 20:32:16.497519 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33cfeff1eb3f467cef16fe09f1998241-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"33cfeff1eb3f467cef16fe09f1998241\") " pod="kube-system/kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.499657 kubelet[2309]: I0113 20:32:16.497565 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec2ce36e44c884beb87ffee974afa407-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"ec2ce36e44c884beb87ffee974afa407\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.499657 kubelet[2309]: I0113 20:32:16.497609 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec2ce36e44c884beb87ffee974afa407-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"ec2ce36e44c884beb87ffee974afa407\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.499657 
kubelet[2309]: I0113 20:32:16.497653 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec2ce36e44c884beb87ffee974afa407-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"ec2ce36e44c884beb87ffee974afa407\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.500891 kubelet[2309]: I0113 20:32:16.500757 2309 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.501731 kubelet[2309]: E0113 20:32:16.501652 2309 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.69:6443/api/v1/nodes\": dial tcp 172.24.4.69:6443: connect: connection refused" node="ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.658880 containerd[1476]: time="2025-01-13T20:32:16.658773494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-0-dbcf9e2b85.novalocal,Uid:dcf9ecafc35b18e23eeea0a14da7c0c3,Namespace:kube-system,Attempt:0,}" Jan 13 20:32:16.676433 containerd[1476]: time="2025-01-13T20:32:16.676365174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal,Uid:33cfeff1eb3f467cef16fe09f1998241,Namespace:kube-system,Attempt:0,}" Jan 13 20:32:16.685465 containerd[1476]: time="2025-01-13T20:32:16.685189936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal,Uid:ec2ce36e44c884beb87ffee974afa407,Namespace:kube-system,Attempt:0,}" Jan 13 20:32:16.803317 kubelet[2309]: E0113 20:32:16.803233 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-0-dbcf9e2b85.novalocal?timeout=10s\": dial tcp 172.24.4.69:6443: 
connect: connection refused" interval="800ms" Jan 13 20:32:16.905806 kubelet[2309]: I0113 20:32:16.905527 2309 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:16.906429 kubelet[2309]: E0113 20:32:16.906349 2309 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.69:6443/api/v1/nodes\": dial tcp 172.24.4.69:6443: connect: connection refused" node="ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:17.022406 kubelet[2309]: W0113 20:32:17.022155 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:17.022406 kubelet[2309]: E0113 20:32:17.022275 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:17.279274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866047254.mount: Deactivated successfully. 
Jan 13 20:32:17.293719 containerd[1476]: time="2025-01-13T20:32:17.293590768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:32:17.296698 containerd[1476]: time="2025-01-13T20:32:17.296556425Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:32:17.299520 containerd[1476]: time="2025-01-13T20:32:17.299432425Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 13 20:32:17.300787 containerd[1476]: time="2025-01-13T20:32:17.300675298Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:32:17.304737 containerd[1476]: time="2025-01-13T20:32:17.304201146Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:32:17.306790 containerd[1476]: time="2025-01-13T20:32:17.306557927Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:32:17.307778 containerd[1476]: time="2025-01-13T20:32:17.307153347Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:32:17.316939 containerd[1476]: time="2025-01-13T20:32:17.316795337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:32:17.319629 
containerd[1476]: time="2025-01-13T20:32:17.319302225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 642.763366ms" Jan 13 20:32:17.324675 containerd[1476]: time="2025-01-13T20:32:17.324584866Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 665.585773ms" Jan 13 20:32:17.334186 containerd[1476]: time="2025-01-13T20:32:17.334080385Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 648.700471ms" Jan 13 20:32:17.400903 kubelet[2309]: W0113 20:32:17.400720 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:17.400903 kubelet[2309]: E0113 20:32:17.400855 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:17.515801 containerd[1476]: time="2025-01-13T20:32:17.514307565Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:32:17.515801 containerd[1476]: time="2025-01-13T20:32:17.514715503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:32:17.515801 containerd[1476]: time="2025-01-13T20:32:17.514735994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:17.515801 containerd[1476]: time="2025-01-13T20:32:17.515491130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:17.517835 containerd[1476]: time="2025-01-13T20:32:17.517449121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:32:17.517835 containerd[1476]: time="2025-01-13T20:32:17.517506545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:32:17.517835 containerd[1476]: time="2025-01-13T20:32:17.517525813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:17.517835 containerd[1476]: time="2025-01-13T20:32:17.517602605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:17.519644 containerd[1476]: time="2025-01-13T20:32:17.519343997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:32:17.519644 containerd[1476]: time="2025-01-13T20:32:17.519392062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:32:17.519644 containerd[1476]: time="2025-01-13T20:32:17.519411330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:17.519644 containerd[1476]: time="2025-01-13T20:32:17.519480427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:17.546968 systemd[1]: Started cri-containerd-35f68764f512e8209d4ecc01d4f63fc4e8c917ecd5cca2a8f2a9e48384ab14a1.scope - libcontainer container 35f68764f512e8209d4ecc01d4f63fc4e8c917ecd5cca2a8f2a9e48384ab14a1. Jan 13 20:32:17.559039 systemd[1]: Started cri-containerd-f151bba86b5bda19f49d82970c3dda9e88c1e7e7f869f5712a4ee944f488bcd5.scope - libcontainer container f151bba86b5bda19f49d82970c3dda9e88c1e7e7f869f5712a4ee944f488bcd5. Jan 13 20:32:17.562979 systemd[1]: Started cri-containerd-5bb0e87527c02e56b62b7637ce20c80efd0edc40d7914985cbd397f9190634a2.scope - libcontainer container 5bb0e87527c02e56b62b7637ce20c80efd0edc40d7914985cbd397f9190634a2. 
Jan 13 20:32:17.567021 kubelet[2309]: W0113 20:32:17.566165 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-0-dbcf9e2b85.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:17.567021 kubelet[2309]: E0113 20:32:17.566224 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-0-dbcf9e2b85.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:17.604858 kubelet[2309]: E0113 20:32:17.604199 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-0-dbcf9e2b85.novalocal?timeout=10s\": dial tcp 172.24.4.69:6443: connect: connection refused" interval="1.6s" Jan 13 20:32:17.627845 containerd[1476]: time="2025-01-13T20:32:17.627637809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal,Uid:33cfeff1eb3f467cef16fe09f1998241,Namespace:kube-system,Attempt:0,} returns sandbox id \"35f68764f512e8209d4ecc01d4f63fc4e8c917ecd5cca2a8f2a9e48384ab14a1\"" Jan 13 20:32:17.633338 containerd[1476]: time="2025-01-13T20:32:17.633188119Z" level=info msg="CreateContainer within sandbox \"35f68764f512e8209d4ecc01d4f63fc4e8c917ecd5cca2a8f2a9e48384ab14a1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:32:17.639064 kubelet[2309]: W0113 20:32:17.638991 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 
20:32:17.639064 kubelet[2309]: E0113 20:32:17.639038 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.69:6443: connect: connection refused Jan 13 20:32:17.642051 containerd[1476]: time="2025-01-13T20:32:17.641479051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-0-dbcf9e2b85.novalocal,Uid:dcf9ecafc35b18e23eeea0a14da7c0c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f151bba86b5bda19f49d82970c3dda9e88c1e7e7f869f5712a4ee944f488bcd5\"" Jan 13 20:32:17.643207 containerd[1476]: time="2025-01-13T20:32:17.643119312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal,Uid:ec2ce36e44c884beb87ffee974afa407,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bb0e87527c02e56b62b7637ce20c80efd0edc40d7914985cbd397f9190634a2\"" Jan 13 20:32:17.647033 containerd[1476]: time="2025-01-13T20:32:17.646994222Z" level=info msg="CreateContainer within sandbox \"f151bba86b5bda19f49d82970c3dda9e88c1e7e7f869f5712a4ee944f488bcd5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:32:17.648724 containerd[1476]: time="2025-01-13T20:32:17.648532411Z" level=info msg="CreateContainer within sandbox \"5bb0e87527c02e56b62b7637ce20c80efd0edc40d7914985cbd397f9190634a2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:32:17.682158 containerd[1476]: time="2025-01-13T20:32:17.682113125Z" level=info msg="CreateContainer within sandbox \"35f68764f512e8209d4ecc01d4f63fc4e8c917ecd5cca2a8f2a9e48384ab14a1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"94bff4b1d6471b9bbe1f3752f5f996a039ed0cb19f5080ea9dcc57ca3622bb39\"" Jan 13 20:32:17.683823 containerd[1476]: time="2025-01-13T20:32:17.683730500Z" 
level=info msg="StartContainer for \"94bff4b1d6471b9bbe1f3752f5f996a039ed0cb19f5080ea9dcc57ca3622bb39\"" Jan 13 20:32:17.708489 kubelet[2309]: I0113 20:32:17.708193 2309 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:17.708489 kubelet[2309]: E0113 20:32:17.708452 2309 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.69:6443/api/v1/nodes\": dial tcp 172.24.4.69:6443: connect: connection refused" node="ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:17.743133 systemd[1]: Started cri-containerd-94bff4b1d6471b9bbe1f3752f5f996a039ed0cb19f5080ea9dcc57ca3622bb39.scope - libcontainer container 94bff4b1d6471b9bbe1f3752f5f996a039ed0cb19f5080ea9dcc57ca3622bb39. Jan 13 20:32:17.758575 containerd[1476]: time="2025-01-13T20:32:17.758371485Z" level=info msg="CreateContainer within sandbox \"f151bba86b5bda19f49d82970c3dda9e88c1e7e7f869f5712a4ee944f488bcd5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"72cb24e550c88182770acfd7a58d86ebed68e048ec7ae02cda6aa9fb34343386\"" Jan 13 20:32:17.760851 containerd[1476]: time="2025-01-13T20:32:17.759692683Z" level=info msg="StartContainer for \"72cb24e550c88182770acfd7a58d86ebed68e048ec7ae02cda6aa9fb34343386\"" Jan 13 20:32:17.801061 containerd[1476]: time="2025-01-13T20:32:17.800881347Z" level=info msg="CreateContainer within sandbox \"5bb0e87527c02e56b62b7637ce20c80efd0edc40d7914985cbd397f9190634a2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0a96fc38d7e7ab72c84dbb09ef4f70162cea4046e3e0d157a30c145dde46bffe\"" Jan 13 20:32:17.803086 containerd[1476]: time="2025-01-13T20:32:17.803028913Z" level=info msg="StartContainer for \"0a96fc38d7e7ab72c84dbb09ef4f70162cea4046e3e0d157a30c145dde46bffe\"" Jan 13 20:32:17.805003 systemd[1]: Started cri-containerd-72cb24e550c88182770acfd7a58d86ebed68e048ec7ae02cda6aa9fb34343386.scope - libcontainer container 
72cb24e550c88182770acfd7a58d86ebed68e048ec7ae02cda6aa9fb34343386. Jan 13 20:32:17.863542 containerd[1476]: time="2025-01-13T20:32:17.863468281Z" level=info msg="StartContainer for \"94bff4b1d6471b9bbe1f3752f5f996a039ed0cb19f5080ea9dcc57ca3622bb39\" returns successfully" Jan 13 20:32:17.863991 systemd[1]: Started cri-containerd-0a96fc38d7e7ab72c84dbb09ef4f70162cea4046e3e0d157a30c145dde46bffe.scope - libcontainer container 0a96fc38d7e7ab72c84dbb09ef4f70162cea4046e3e0d157a30c145dde46bffe. Jan 13 20:32:17.993898 containerd[1476]: time="2025-01-13T20:32:17.993805591Z" level=info msg="StartContainer for \"72cb24e550c88182770acfd7a58d86ebed68e048ec7ae02cda6aa9fb34343386\" returns successfully" Jan 13 20:32:17.993898 containerd[1476]: time="2025-01-13T20:32:17.993914778Z" level=info msg="StartContainer for \"0a96fc38d7e7ab72c84dbb09ef4f70162cea4046e3e0d157a30c145dde46bffe\" returns successfully" Jan 13 20:32:19.311076 kubelet[2309]: I0113 20:32:19.310580 2309 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:20.549481 kubelet[2309]: E0113 20:32:20.549430 2309 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-1-0-0-dbcf9e2b85.novalocal\" not found" node="ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:20.692845 kubelet[2309]: I0113 20:32:20.692392 2309 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:21.172807 kubelet[2309]: I0113 20:32:21.172748 2309 apiserver.go:52] "Watching apiserver" Jan 13 20:32:21.195992 kubelet[2309]: I0113 20:32:21.195923 2309 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:32:23.019521 systemd[1]: Reloading requested from client PID 2590 ('systemctl') (unit session-9.scope)... Jan 13 20:32:23.020418 systemd[1]: Reloading... Jan 13 20:32:23.154854 zram_generator::config[2636]: No configuration found. 
Jan 13 20:32:23.292142 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:32:23.394615 systemd[1]: Reloading finished in 373 ms. Jan 13 20:32:23.439096 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:32:23.439475 kubelet[2309]: I0113 20:32:23.439331 2309 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:32:23.447068 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:32:23.447245 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:32:23.447292 systemd[1]: kubelet.service: Consumed 1.307s CPU time, 112.7M memory peak, 0B memory swap peak. Jan 13 20:32:23.453060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:32:23.569099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:32:23.570609 (kubelet)[2693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:32:23.658180 kubelet[2693]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:32:23.658180 kubelet[2693]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:32:23.658180 kubelet[2693]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:32:23.658616 kubelet[2693]: I0113 20:32:23.658214 2693 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:32:23.665630 kubelet[2693]: I0113 20:32:23.665588 2693 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:32:23.665630 kubelet[2693]: I0113 20:32:23.665614 2693 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:32:23.665850 kubelet[2693]: I0113 20:32:23.665796 2693 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:32:23.667276 kubelet[2693]: I0113 20:32:23.667256 2693 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:32:23.668526 kubelet[2693]: I0113 20:32:23.668372 2693 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:32:23.675144 kubelet[2693]: I0113 20:32:23.675121 2693 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:32:23.675834 kubelet[2693]: I0113 20:32:23.675527 2693 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:32:23.675834 kubelet[2693]: I0113 20:32:23.675559 2693 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-0-dbcf9e2b85.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:32:23.675834 kubelet[2693]: I0113 20:32:23.675770 2693 topology_manager.go:138] "Creating topology manager with none 
policy" Jan 13 20:32:23.675834 kubelet[2693]: I0113 20:32:23.675780 2693 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:32:23.676091 kubelet[2693]: I0113 20:32:23.676077 2693 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:32:23.676247 kubelet[2693]: I0113 20:32:23.676236 2693 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:32:23.676312 kubelet[2693]: I0113 20:32:23.676303 2693 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:32:23.676383 kubelet[2693]: I0113 20:32:23.676374 2693 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:32:23.676457 kubelet[2693]: I0113 20:32:23.676447 2693 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:32:23.679196 kubelet[2693]: I0113 20:32:23.679178 2693 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:32:23.680954 kubelet[2693]: I0113 20:32:23.680940 2693 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:32:23.682845 kubelet[2693]: I0113 20:32:23.681449 2693 server.go:1264] "Started kubelet" Jan 13 20:32:23.686739 kubelet[2693]: I0113 20:32:23.684588 2693 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:32:23.691543 kubelet[2693]: I0113 20:32:23.691514 2693 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:32:23.692535 kubelet[2693]: I0113 20:32:23.692494 2693 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:32:23.694290 kubelet[2693]: I0113 20:32:23.694241 2693 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:32:23.695843 kubelet[2693]: I0113 20:32:23.694559 2693 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:32:23.701678 kubelet[2693]: I0113 20:32:23.697982 2693 reconciler.go:26] 
"Reconciler: start to sync state" Jan 13 20:32:23.701678 kubelet[2693]: I0113 20:32:23.694640 2693 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:32:23.702156 kubelet[2693]: I0113 20:32:23.701748 2693 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:32:23.709494 kubelet[2693]: I0113 20:32:23.709212 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:32:23.710336 kubelet[2693]: I0113 20:32:23.710306 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:32:23.710336 kubelet[2693]: I0113 20:32:23.710334 2693 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:32:23.710454 kubelet[2693]: I0113 20:32:23.710349 2693 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:32:23.710454 kubelet[2693]: E0113 20:32:23.710385 2693 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:32:23.716013 kubelet[2693]: I0113 20:32:23.715989 2693 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:32:23.716239 kubelet[2693]: I0113 20:32:23.716218 2693 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:32:23.717013 kubelet[2693]: E0113 20:32:23.716993 2693 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:32:23.719504 kubelet[2693]: I0113 20:32:23.718904 2693 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:32:23.771828 kubelet[2693]: I0113 20:32:23.771619 2693 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:32:23.771828 kubelet[2693]: I0113 20:32:23.771637 2693 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:32:23.771828 kubelet[2693]: I0113 20:32:23.771655 2693 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:32:23.771828 kubelet[2693]: I0113 20:32:23.771789 2693 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:32:23.772069 kubelet[2693]: I0113 20:32:23.771799 2693 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:32:23.772069 kubelet[2693]: I0113 20:32:23.771857 2693 policy_none.go:49] "None policy: Start" Jan 13 20:32:23.772397 kubelet[2693]: I0113 20:32:23.772285 2693 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:32:23.772397 kubelet[2693]: I0113 20:32:23.772306 2693 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:32:23.772463 kubelet[2693]: I0113 20:32:23.772415 2693 state_mem.go:75] "Updated machine memory state" Jan 13 20:32:23.777794 kubelet[2693]: I0113 20:32:23.777768 2693 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:32:23.778014 kubelet[2693]: I0113 20:32:23.777931 2693 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:32:23.778066 kubelet[2693]: I0113 20:32:23.778042 2693 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:32:23.800838 kubelet[2693]: I0113 20:32:23.800787 2693 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:23.811090 kubelet[2693]: 
I0113 20:32:23.811044 2693 topology_manager.go:215] "Topology Admit Handler" podUID="33cfeff1eb3f467cef16fe09f1998241" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:23.811255 kubelet[2693]: I0113 20:32:23.811133 2693 topology_manager.go:215] "Topology Admit Handler" podUID="ec2ce36e44c884beb87ffee974afa407" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:23.811255 kubelet[2693]: I0113 20:32:23.811185 2693 topology_manager.go:215] "Topology Admit Handler" podUID="dcf9ecafc35b18e23eeea0a14da7c0c3" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:23.822339 kubelet[2693]: I0113 20:32:23.820509 2693 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:23.822580 kubelet[2693]: W0113 20:32:23.822332 2693 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:32:23.822734 kubelet[2693]: I0113 20:32:23.822689 2693 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:23.829736 kubelet[2693]: W0113 20:32:23.829685 2693 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:32:23.832991 kubelet[2693]: W0113 20:32:23.832962 2693 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:32:23.899315 kubelet[2693]: I0113 20:32:23.899042 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33cfeff1eb3f467cef16fe09f1998241-ca-certs\") pod 
\"kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"33cfeff1eb3f467cef16fe09f1998241\") " pod="kube-system/kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:23.899315 kubelet[2693]: I0113 20:32:23.899094 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33cfeff1eb3f467cef16fe09f1998241-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"33cfeff1eb3f467cef16fe09f1998241\") " pod="kube-system/kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:23.899315 kubelet[2693]: I0113 20:32:23.899125 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ec2ce36e44c884beb87ffee974afa407-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"ec2ce36e44c884beb87ffee974afa407\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:23.899315 kubelet[2693]: I0113 20:32:23.899149 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec2ce36e44c884beb87ffee974afa407-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"ec2ce36e44c884beb87ffee974afa407\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:23.899552 kubelet[2693]: I0113 20:32:23.899171 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec2ce36e44c884beb87ffee974afa407-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"ec2ce36e44c884beb87ffee974afa407\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 
20:32:23.899552 kubelet[2693]: I0113 20:32:23.899194 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec2ce36e44c884beb87ffee974afa407-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"ec2ce36e44c884beb87ffee974afa407\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:23.899552 kubelet[2693]: I0113 20:32:23.899216 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33cfeff1eb3f467cef16fe09f1998241-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"33cfeff1eb3f467cef16fe09f1998241\") " pod="kube-system/kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:23.899552 kubelet[2693]: I0113 20:32:23.899238 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec2ce36e44c884beb87ffee974afa407-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"ec2ce36e44c884beb87ffee974afa407\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:23.899654 kubelet[2693]: I0113 20:32:23.899260 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dcf9ecafc35b18e23eeea0a14da7c0c3-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-0-dbcf9e2b85.novalocal\" (UID: \"dcf9ecafc35b18e23eeea0a14da7c0c3\") " pod="kube-system/kube-scheduler-ci-4186-1-0-0-dbcf9e2b85.novalocal" Jan 13 20:32:24.677445 kubelet[2693]: I0113 20:32:24.677370 2693 apiserver.go:52] "Watching apiserver" Jan 13 20:32:24.697439 kubelet[2693]: I0113 20:32:24.697346 2693 desired_state_of_world_populator.go:157] 
"Finished populating initial desired state of world" Jan 13 20:32:24.802783 kubelet[2693]: I0113 20:32:24.802276 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-0-dbcf9e2b85.novalocal" podStartSLOduration=1.8020799969999999 podStartE2EDuration="1.802079997s" podCreationTimestamp="2025-01-13 20:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:24.801580641 +0000 UTC m=+1.222679694" watchObservedRunningTime="2025-01-13 20:32:24.802079997 +0000 UTC m=+1.223179040" Jan 13 20:32:24.836613 kubelet[2693]: I0113 20:32:24.836084 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-0-dbcf9e2b85.novalocal" podStartSLOduration=1.83606077 podStartE2EDuration="1.83606077s" podCreationTimestamp="2025-01-13 20:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:24.817320406 +0000 UTC m=+1.238419419" watchObservedRunningTime="2025-01-13 20:32:24.83606077 +0000 UTC m=+1.257159763" Jan 13 20:32:25.803439 sudo[1689]: pam_unix(sudo:session): session closed for user root Jan 13 20:32:26.023246 sshd[1688]: Connection closed by 172.24.4.1 port 39940 Jan 13 20:32:26.022532 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Jan 13 20:32:26.028236 systemd[1]: sshd@6-172.24.4.69:22-172.24.4.1:39940.service: Deactivated successfully. Jan 13 20:32:26.032366 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:32:26.032733 systemd[1]: session-9.scope: Consumed 6.478s CPU time, 189.7M memory peak, 0B memory swap peak. Jan 13 20:32:26.035700 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:32:26.037600 systemd-logind[1457]: Removed session 9. 
Jan 13 20:32:28.200529 kubelet[2693]: I0113 20:32:28.200241 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-0-dbcf9e2b85.novalocal" podStartSLOduration=5.200211172 podStartE2EDuration="5.200211172s" podCreationTimestamp="2025-01-13 20:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:24.839991899 +0000 UTC m=+1.261090912" watchObservedRunningTime="2025-01-13 20:32:28.200211172 +0000 UTC m=+4.621310215" Jan 13 20:32:37.236838 kubelet[2693]: I0113 20:32:37.236765 2693 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:32:37.238965 containerd[1476]: time="2025-01-13T20:32:37.238226903Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:32:37.240478 kubelet[2693]: I0113 20:32:37.238568 2693 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:32:37.825883 kubelet[2693]: I0113 20:32:37.825220 2693 topology_manager.go:215] "Topology Admit Handler" podUID="fd38e34d-4a73-4f14-8ee8-c596cfdd5382" podNamespace="kube-system" podName="kube-proxy-b5szp" Jan 13 20:32:37.841799 kubelet[2693]: I0113 20:32:37.841764 2693 topology_manager.go:215] "Topology Admit Handler" podUID="596bbb4d-7889-4c50-b234-d6f54e713afd" podNamespace="kube-flannel" podName="kube-flannel-ds-zp75l" Jan 13 20:32:37.849600 systemd[1]: Created slice kubepods-besteffort-podfd38e34d_4a73_4f14_8ee8_c596cfdd5382.slice - libcontainer container kubepods-besteffort-podfd38e34d_4a73_4f14_8ee8_c596cfdd5382.slice. 
Jan 13 20:32:37.851706 kubelet[2693]: W0113 20:32:37.851386 2693 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186-1-0-0-dbcf9e2b85.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-0-dbcf9e2b85.novalocal' and this object Jan 13 20:32:37.851706 kubelet[2693]: E0113 20:32:37.851422 2693 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186-1-0-0-dbcf9e2b85.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-0-dbcf9e2b85.novalocal' and this object Jan 13 20:32:37.865133 systemd[1]: Created slice kubepods-burstable-pod596bbb4d_7889_4c50_b234_d6f54e713afd.slice - libcontainer container kubepods-burstable-pod596bbb4d_7889_4c50_b234_d6f54e713afd.slice. 
Jan 13 20:32:37.993059 kubelet[2693]: I0113 20:32:37.992381 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd38e34d-4a73-4f14-8ee8-c596cfdd5382-lib-modules\") pod \"kube-proxy-b5szp\" (UID: \"fd38e34d-4a73-4f14-8ee8-c596cfdd5382\") " pod="kube-system/kube-proxy-b5szp"
Jan 13 20:32:37.993059 kubelet[2693]: I0113 20:32:37.993060 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/596bbb4d-7889-4c50-b234-d6f54e713afd-xtables-lock\") pod \"kube-flannel-ds-zp75l\" (UID: \"596bbb4d-7889-4c50-b234-d6f54e713afd\") " pod="kube-flannel/kube-flannel-ds-zp75l"
Jan 13 20:32:37.993286 kubelet[2693]: I0113 20:32:37.993089 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8tph\" (UniqueName: \"kubernetes.io/projected/596bbb4d-7889-4c50-b234-d6f54e713afd-kube-api-access-x8tph\") pod \"kube-flannel-ds-zp75l\" (UID: \"596bbb4d-7889-4c50-b234-d6f54e713afd\") " pod="kube-flannel/kube-flannel-ds-zp75l"
Jan 13 20:32:37.993286 kubelet[2693]: I0113 20:32:37.993120 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd38e34d-4a73-4f14-8ee8-c596cfdd5382-xtables-lock\") pod \"kube-proxy-b5szp\" (UID: \"fd38e34d-4a73-4f14-8ee8-c596cfdd5382\") " pod="kube-system/kube-proxy-b5szp"
Jan 13 20:32:37.993286 kubelet[2693]: I0113 20:32:37.993145 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/596bbb4d-7889-4c50-b234-d6f54e713afd-cni\") pod \"kube-flannel-ds-zp75l\" (UID: \"596bbb4d-7889-4c50-b234-d6f54e713afd\") " pod="kube-flannel/kube-flannel-ds-zp75l"
Jan 13 20:32:37.993286 kubelet[2693]: I0113 20:32:37.993167 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/596bbb4d-7889-4c50-b234-d6f54e713afd-run\") pod \"kube-flannel-ds-zp75l\" (UID: \"596bbb4d-7889-4c50-b234-d6f54e713afd\") " pod="kube-flannel/kube-flannel-ds-zp75l"
Jan 13 20:32:37.993286 kubelet[2693]: I0113 20:32:37.993190 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/596bbb4d-7889-4c50-b234-d6f54e713afd-cni-plugin\") pod \"kube-flannel-ds-zp75l\" (UID: \"596bbb4d-7889-4c50-b234-d6f54e713afd\") " pod="kube-flannel/kube-flannel-ds-zp75l"
Jan 13 20:32:37.993536 kubelet[2693]: I0113 20:32:37.993213 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/596bbb4d-7889-4c50-b234-d6f54e713afd-flannel-cfg\") pod \"kube-flannel-ds-zp75l\" (UID: \"596bbb4d-7889-4c50-b234-d6f54e713afd\") " pod="kube-flannel/kube-flannel-ds-zp75l"
Jan 13 20:32:37.993536 kubelet[2693]: I0113 20:32:37.993231 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd38e34d-4a73-4f14-8ee8-c596cfdd5382-kube-proxy\") pod \"kube-proxy-b5szp\" (UID: \"fd38e34d-4a73-4f14-8ee8-c596cfdd5382\") " pod="kube-system/kube-proxy-b5szp"
Jan 13 20:32:37.993536 kubelet[2693]: I0113 20:32:37.993254 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8ngq\" (UniqueName: \"kubernetes.io/projected/fd38e34d-4a73-4f14-8ee8-c596cfdd5382-kube-api-access-w8ngq\") pod \"kube-proxy-b5szp\" (UID: \"fd38e34d-4a73-4f14-8ee8-c596cfdd5382\") " pod="kube-system/kube-proxy-b5szp"
Jan 13 20:32:38.170308 containerd[1476]: time="2025-01-13T20:32:38.170249042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zp75l,Uid:596bbb4d-7889-4c50-b234-d6f54e713afd,Namespace:kube-flannel,Attempt:0,}"
Jan 13 20:32:38.216545 containerd[1476]: time="2025-01-13T20:32:38.216148960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:38.216545 containerd[1476]: time="2025-01-13T20:32:38.216270595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:38.216545 containerd[1476]: time="2025-01-13T20:32:38.216350239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:38.216545 containerd[1476]: time="2025-01-13T20:32:38.216457355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:38.245002 systemd[1]: Started cri-containerd-83efa1bbabecc46ce4c96470cedf79fa48079c43244ba941d70c87554a73eb3e.scope - libcontainer container 83efa1bbabecc46ce4c96470cedf79fa48079c43244ba941d70c87554a73eb3e.
Jan 13 20:32:38.299679 containerd[1476]: time="2025-01-13T20:32:38.298772329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zp75l,Uid:596bbb4d-7889-4c50-b234-d6f54e713afd,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"83efa1bbabecc46ce4c96470cedf79fa48079c43244ba941d70c87554a73eb3e\""
Jan 13 20:32:38.304968 containerd[1476]: time="2025-01-13T20:32:38.304926589Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 13 20:32:39.060561 containerd[1476]: time="2025-01-13T20:32:39.060479647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b5szp,Uid:fd38e34d-4a73-4f14-8ee8-c596cfdd5382,Namespace:kube-system,Attempt:0,}"
Jan 13 20:32:39.114841 containerd[1476]: time="2025-01-13T20:32:39.112148172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:39.115337 containerd[1476]: time="2025-01-13T20:32:39.115229622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:39.126060 containerd[1476]: time="2025-01-13T20:32:39.115311429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:39.126060 containerd[1476]: time="2025-01-13T20:32:39.115550090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:39.175003 systemd[1]: Started cri-containerd-280ac79a17b49b3133569d5b04f3e75fd565e234669c046ee6dd62983526d2e5.scope - libcontainer container 280ac79a17b49b3133569d5b04f3e75fd565e234669c046ee6dd62983526d2e5.
Jan 13 20:32:39.199466 containerd[1476]: time="2025-01-13T20:32:39.199388876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b5szp,Uid:fd38e34d-4a73-4f14-8ee8-c596cfdd5382,Namespace:kube-system,Attempt:0,} returns sandbox id \"280ac79a17b49b3133569d5b04f3e75fd565e234669c046ee6dd62983526d2e5\""
Jan 13 20:32:39.202288 containerd[1476]: time="2025-01-13T20:32:39.202251444Z" level=info msg="CreateContainer within sandbox \"280ac79a17b49b3133569d5b04f3e75fd565e234669c046ee6dd62983526d2e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 20:32:39.228475 containerd[1476]: time="2025-01-13T20:32:39.228337587Z" level=info msg="CreateContainer within sandbox \"280ac79a17b49b3133569d5b04f3e75fd565e234669c046ee6dd62983526d2e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c9919ebef5bec62bfe76eefedbbef6ace526b94ce95c16897655a020ffa7e8fb\""
Jan 13 20:32:39.230866 containerd[1476]: time="2025-01-13T20:32:39.229893436Z" level=info msg="StartContainer for \"c9919ebef5bec62bfe76eefedbbef6ace526b94ce95c16897655a020ffa7e8fb\""
Jan 13 20:32:39.261117 systemd[1]: Started cri-containerd-c9919ebef5bec62bfe76eefedbbef6ace526b94ce95c16897655a020ffa7e8fb.scope - libcontainer container c9919ebef5bec62bfe76eefedbbef6ace526b94ce95c16897655a020ffa7e8fb.
Jan 13 20:32:39.296629 containerd[1476]: time="2025-01-13T20:32:39.296569600Z" level=info msg="StartContainer for \"c9919ebef5bec62bfe76eefedbbef6ace526b94ce95c16897655a020ffa7e8fb\" returns successfully"
Jan 13 20:32:39.843184 kubelet[2693]: I0113 20:32:39.842733 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b5szp" podStartSLOduration=2.842695654 podStartE2EDuration="2.842695654s" podCreationTimestamp="2025-01-13 20:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:39.842591374 +0000 UTC m=+16.263690417" watchObservedRunningTime="2025-01-13 20:32:39.842695654 +0000 UTC m=+16.263794697"
Jan 13 20:32:40.502801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19542201.mount: Deactivated successfully.
Jan 13 20:32:40.564928 containerd[1476]: time="2025-01-13T20:32:40.564846931Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:40.566294 containerd[1476]: time="2025-01-13T20:32:40.566238101Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852936"
Jan 13 20:32:40.567698 containerd[1476]: time="2025-01-13T20:32:40.567640032Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:40.571486 containerd[1476]: time="2025-01-13T20:32:40.570512547Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:40.571486 containerd[1476]: time="2025-01-13T20:32:40.571339630Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.266373615s"
Jan 13 20:32:40.571486 containerd[1476]: time="2025-01-13T20:32:40.571371882Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Jan 13 20:32:40.574671 containerd[1476]: time="2025-01-13T20:32:40.574587587Z" level=info msg="CreateContainer within sandbox \"83efa1bbabecc46ce4c96470cedf79fa48079c43244ba941d70c87554a73eb3e\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 13 20:32:40.592096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4178270950.mount: Deactivated successfully.
Jan 13 20:32:40.594355 containerd[1476]: time="2025-01-13T20:32:40.594242702Z" level=info msg="CreateContainer within sandbox \"83efa1bbabecc46ce4c96470cedf79fa48079c43244ba941d70c87554a73eb3e\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"343ef3d1969dda52615e8445baed3474e4d20e96cd81fb55f23cc3183dc6665b\""
Jan 13 20:32:40.595469 containerd[1476]: time="2025-01-13T20:32:40.595194646Z" level=info msg="StartContainer for \"343ef3d1969dda52615e8445baed3474e4d20e96cd81fb55f23cc3183dc6665b\""
Jan 13 20:32:40.620956 systemd[1]: Started cri-containerd-343ef3d1969dda52615e8445baed3474e4d20e96cd81fb55f23cc3183dc6665b.scope - libcontainer container 343ef3d1969dda52615e8445baed3474e4d20e96cd81fb55f23cc3183dc6665b.
Jan 13 20:32:40.648022 systemd[1]: cri-containerd-343ef3d1969dda52615e8445baed3474e4d20e96cd81fb55f23cc3183dc6665b.scope: Deactivated successfully.
Jan 13 20:32:40.653392 containerd[1476]: time="2025-01-13T20:32:40.653348210Z" level=info msg="StartContainer for \"343ef3d1969dda52615e8445baed3474e4d20e96cd81fb55f23cc3183dc6665b\" returns successfully"
Jan 13 20:32:40.776787 containerd[1476]: time="2025-01-13T20:32:40.776219732Z" level=info msg="shim disconnected" id=343ef3d1969dda52615e8445baed3474e4d20e96cd81fb55f23cc3183dc6665b namespace=k8s.io
Jan 13 20:32:40.776787 containerd[1476]: time="2025-01-13T20:32:40.776343210Z" level=warning msg="cleaning up after shim disconnected" id=343ef3d1969dda52615e8445baed3474e4d20e96cd81fb55f23cc3183dc6665b namespace=k8s.io
Jan 13 20:32:40.776787 containerd[1476]: time="2025-01-13T20:32:40.776365032Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:32:40.804918 containerd[1476]: time="2025-01-13T20:32:40.803977097Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:32:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:32:40.820124 containerd[1476]: time="2025-01-13T20:32:40.820037647Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 13 20:32:43.115436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1251940806.mount: Deactivated successfully.
Jan 13 20:32:44.468269 containerd[1476]: time="2025-01-13T20:32:44.468186494Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:44.470389 containerd[1476]: time="2025-01-13T20:32:44.470335546Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358"
Jan 13 20:32:44.471172 containerd[1476]: time="2025-01-13T20:32:44.471103853Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:44.474596 containerd[1476]: time="2025-01-13T20:32:44.474547554Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:32:44.476074 containerd[1476]: time="2025-01-13T20:32:44.475865068Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.655750322s"
Jan 13 20:32:44.476074 containerd[1476]: time="2025-01-13T20:32:44.475900215Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Jan 13 20:32:44.479939 containerd[1476]: time="2025-01-13T20:32:44.479897341Z" level=info msg="CreateContainer within sandbox \"83efa1bbabecc46ce4c96470cedf79fa48079c43244ba941d70c87554a73eb3e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 20:32:44.501143 containerd[1476]: time="2025-01-13T20:32:44.501073889Z" level=info msg="CreateContainer within sandbox \"83efa1bbabecc46ce4c96470cedf79fa48079c43244ba941d70c87554a73eb3e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6f129fa3ebfdae932e28c5e6a467ce7cd6266276814b09ebed6a77948ce3ad89\""
Jan 13 20:32:44.501898 containerd[1476]: time="2025-01-13T20:32:44.501740321Z" level=info msg="StartContainer for \"6f129fa3ebfdae932e28c5e6a467ce7cd6266276814b09ebed6a77948ce3ad89\""
Jan 13 20:32:44.548092 systemd[1]: Started cri-containerd-6f129fa3ebfdae932e28c5e6a467ce7cd6266276814b09ebed6a77948ce3ad89.scope - libcontainer container 6f129fa3ebfdae932e28c5e6a467ce7cd6266276814b09ebed6a77948ce3ad89.
Jan 13 20:32:44.584035 systemd[1]: cri-containerd-6f129fa3ebfdae932e28c5e6a467ce7cd6266276814b09ebed6a77948ce3ad89.scope: Deactivated successfully.
Jan 13 20:32:44.589390 containerd[1476]: time="2025-01-13T20:32:44.589151105Z" level=info msg="StartContainer for \"6f129fa3ebfdae932e28c5e6a467ce7cd6266276814b09ebed6a77948ce3ad89\" returns successfully"
Jan 13 20:32:44.617417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f129fa3ebfdae932e28c5e6a467ce7cd6266276814b09ebed6a77948ce3ad89-rootfs.mount: Deactivated successfully.
Jan 13 20:32:44.640982 kubelet[2693]: I0113 20:32:44.640937 2693 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 20:32:44.704329 kubelet[2693]: I0113 20:32:44.704204 2693 topology_manager.go:215] "Topology Admit Handler" podUID="eae56801-a0c0-458a-8071-22aa5cf494b4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6vzsf"
Jan 13 20:32:44.715718 systemd[1]: Created slice kubepods-burstable-podeae56801_a0c0_458a_8071_22aa5cf494b4.slice - libcontainer container kubepods-burstable-podeae56801_a0c0_458a_8071_22aa5cf494b4.slice.
Jan 13 20:32:44.721556 kubelet[2693]: I0113 20:32:44.720310 2693 topology_manager.go:215] "Topology Admit Handler" podUID="5bac8aaf-c446-4bac-b7a7-c6fd8720d51e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b5z2s"
Jan 13 20:32:44.735437 systemd[1]: Created slice kubepods-burstable-pod5bac8aaf_c446_4bac_b7a7_c6fd8720d51e.slice - libcontainer container kubepods-burstable-pod5bac8aaf_c446_4bac_b7a7_c6fd8720d51e.slice.
Jan 13 20:32:44.835412 kubelet[2693]: I0113 20:32:44.835059 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bac8aaf-c446-4bac-b7a7-c6fd8720d51e-config-volume\") pod \"coredns-7db6d8ff4d-b5z2s\" (UID: \"5bac8aaf-c446-4bac-b7a7-c6fd8720d51e\") " pod="kube-system/coredns-7db6d8ff4d-b5z2s"
Jan 13 20:32:44.835412 kubelet[2693]: I0113 20:32:44.835161 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eae56801-a0c0-458a-8071-22aa5cf494b4-config-volume\") pod \"coredns-7db6d8ff4d-6vzsf\" (UID: \"eae56801-a0c0-458a-8071-22aa5cf494b4\") " pod="kube-system/coredns-7db6d8ff4d-6vzsf"
Jan 13 20:32:44.835412 kubelet[2693]: I0113 20:32:44.835255 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frs4f\" (UniqueName: \"kubernetes.io/projected/eae56801-a0c0-458a-8071-22aa5cf494b4-kube-api-access-frs4f\") pod \"coredns-7db6d8ff4d-6vzsf\" (UID: \"eae56801-a0c0-458a-8071-22aa5cf494b4\") " pod="kube-system/coredns-7db6d8ff4d-6vzsf"
Jan 13 20:32:44.835412 kubelet[2693]: I0113 20:32:44.835313 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kd9q\" (UniqueName: \"kubernetes.io/projected/5bac8aaf-c446-4bac-b7a7-c6fd8720d51e-kube-api-access-9kd9q\") pod \"coredns-7db6d8ff4d-b5z2s\" (UID: \"5bac8aaf-c446-4bac-b7a7-c6fd8720d51e\") " pod="kube-system/coredns-7db6d8ff4d-b5z2s"
Jan 13 20:32:44.947072 containerd[1476]: time="2025-01-13T20:32:44.946358525Z" level=info msg="shim disconnected" id=6f129fa3ebfdae932e28c5e6a467ce7cd6266276814b09ebed6a77948ce3ad89 namespace=k8s.io
Jan 13 20:32:44.947072 containerd[1476]: time="2025-01-13T20:32:44.946471833Z" level=warning msg="cleaning up after shim disconnected" id=6f129fa3ebfdae932e28c5e6a467ce7cd6266276814b09ebed6a77948ce3ad89 namespace=k8s.io
Jan 13 20:32:44.947072 containerd[1476]: time="2025-01-13T20:32:44.946496881Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:32:45.023543 containerd[1476]: time="2025-01-13T20:32:45.022415211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6vzsf,Uid:eae56801-a0c0-458a-8071-22aa5cf494b4,Namespace:kube-system,Attempt:0,}"
Jan 13 20:32:45.040324 containerd[1476]: time="2025-01-13T20:32:45.040249960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b5z2s,Uid:5bac8aaf-c446-4bac-b7a7-c6fd8720d51e,Namespace:kube-system,Attempt:0,}"
Jan 13 20:32:45.073407 containerd[1476]: time="2025-01-13T20:32:45.073101100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6vzsf,Uid:eae56801-a0c0-458a-8071-22aa5cf494b4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e29548933e60e8a168a43e4d02868d8e8320ece7e42ab979db754fdbe2828be\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 20:32:45.074138 kubelet[2693]: E0113 20:32:45.073681 2693 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e29548933e60e8a168a43e4d02868d8e8320ece7e42ab979db754fdbe2828be\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 20:32:45.074138 kubelet[2693]: E0113 20:32:45.073787 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e29548933e60e8a168a43e4d02868d8e8320ece7e42ab979db754fdbe2828be\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-6vzsf"
Jan 13 20:32:45.074564 kubelet[2693]: E0113 20:32:45.073938 2693 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e29548933e60e8a168a43e4d02868d8e8320ece7e42ab979db754fdbe2828be\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-6vzsf"
Jan 13 20:32:45.074564 kubelet[2693]: E0113 20:32:45.074300 2693 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6vzsf_kube-system(eae56801-a0c0-458a-8071-22aa5cf494b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6vzsf_kube-system(eae56801-a0c0-458a-8071-22aa5cf494b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e29548933e60e8a168a43e4d02868d8e8320ece7e42ab979db754fdbe2828be\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-6vzsf" podUID="eae56801-a0c0-458a-8071-22aa5cf494b4"
Jan 13 20:32:45.079941 containerd[1476]: time="2025-01-13T20:32:45.079636542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b5z2s,Uid:5bac8aaf-c446-4bac-b7a7-c6fd8720d51e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5686694ffabe27693660eda871c2037072535038cb7980c708127c3a6e200d7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 20:32:45.080337 kubelet[2693]: E0113 20:32:45.080110 2693 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5686694ffabe27693660eda871c2037072535038cb7980c708127c3a6e200d7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 20:32:45.080337 kubelet[2693]: E0113 20:32:45.080163 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5686694ffabe27693660eda871c2037072535038cb7980c708127c3a6e200d7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-b5z2s"
Jan 13 20:32:45.080337 kubelet[2693]: E0113 20:32:45.080184 2693 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5686694ffabe27693660eda871c2037072535038cb7980c708127c3a6e200d7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-b5z2s"
Jan 13 20:32:45.080337 kubelet[2693]: E0113 20:32:45.080226 2693 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-b5z2s_kube-system(5bac8aaf-c446-4bac-b7a7-c6fd8720d51e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-b5z2s_kube-system(5bac8aaf-c446-4bac-b7a7-c6fd8720d51e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5686694ffabe27693660eda871c2037072535038cb7980c708127c3a6e200d7\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-b5z2s" podUID="5bac8aaf-c446-4bac-b7a7-c6fd8720d51e"
Jan 13 20:32:45.852987 containerd[1476]: time="2025-01-13T20:32:45.852907375Z" level=info msg="CreateContainer within sandbox \"83efa1bbabecc46ce4c96470cedf79fa48079c43244ba941d70c87554a73eb3e\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 13 20:32:45.893382 containerd[1476]: time="2025-01-13T20:32:45.893142046Z" level=info msg="CreateContainer within sandbox \"83efa1bbabecc46ce4c96470cedf79fa48079c43244ba941d70c87554a73eb3e\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"123937f8cd1a3e9a22237b9aa13b949b0f69b1409d4f2f340194dcd15b3d0bb7\""
Jan 13 20:32:45.894307 containerd[1476]: time="2025-01-13T20:32:45.894248113Z" level=info msg="StartContainer for \"123937f8cd1a3e9a22237b9aa13b949b0f69b1409d4f2f340194dcd15b3d0bb7\""
Jan 13 20:32:45.944040 systemd[1]: Started cri-containerd-123937f8cd1a3e9a22237b9aa13b949b0f69b1409d4f2f340194dcd15b3d0bb7.scope - libcontainer container 123937f8cd1a3e9a22237b9aa13b949b0f69b1409d4f2f340194dcd15b3d0bb7.
Jan 13 20:32:45.982450 containerd[1476]: time="2025-01-13T20:32:45.982400052Z" level=info msg="StartContainer for \"123937f8cd1a3e9a22237b9aa13b949b0f69b1409d4f2f340194dcd15b3d0bb7\" returns successfully"
Jan 13 20:32:47.081719 systemd-networkd[1371]: flannel.1: Link UP
Jan 13 20:32:47.081737 systemd-networkd[1371]: flannel.1: Gained carrier
Jan 13 20:32:48.962098 systemd-networkd[1371]: flannel.1: Gained IPv6LL
Jan 13 20:32:56.714078 containerd[1476]: time="2025-01-13T20:32:56.713891019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b5z2s,Uid:5bac8aaf-c446-4bac-b7a7-c6fd8720d51e,Namespace:kube-system,Attempt:0,}"
Jan 13 20:32:56.764257 systemd-networkd[1371]: cni0: Link UP
Jan 13 20:32:56.764284 systemd-networkd[1371]: cni0: Gained carrier
Jan 13 20:32:56.781498 systemd-networkd[1371]: cni0: Lost carrier
Jan 13 20:32:56.785240 systemd-networkd[1371]: vethd0f2d3a5: Link UP
Jan 13 20:32:56.793432 kernel: cni0: port 1(vethd0f2d3a5) entered blocking state
Jan 13 20:32:56.793587 kernel: cni0: port 1(vethd0f2d3a5) entered disabled state
Jan 13 20:32:56.793635 kernel: vethd0f2d3a5: entered allmulticast mode
Jan 13 20:32:56.796397 kernel: vethd0f2d3a5: entered promiscuous mode
Jan 13 20:32:56.811112 kernel: cni0: port 1(vethd0f2d3a5) entered blocking state
Jan 13 20:32:56.811330 kernel: cni0: port 1(vethd0f2d3a5) entered forwarding state
Jan 13 20:32:56.812015 systemd-networkd[1371]: vethd0f2d3a5: Gained carrier
Jan 13 20:32:56.818736 systemd-networkd[1371]: cni0: Gained carrier
Jan 13 20:32:56.819000 containerd[1476]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Jan 13 20:32:56.819000 containerd[1476]: delegateAdd: netconf sent to delegate plugin:
Jan 13 20:32:56.851329 containerd[1476]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T20:32:56.851033729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:56.851329 containerd[1476]: time="2025-01-13T20:32:56.851122550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:56.851329 containerd[1476]: time="2025-01-13T20:32:56.851137989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:56.851329 containerd[1476]: time="2025-01-13T20:32:56.851234183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:56.881053 systemd[1]: Started cri-containerd-ac410b08d8251b7600475725e3d29735512c2cc0946f1efcbf6b9b2b336a83e4.scope - libcontainer container ac410b08d8251b7600475725e3d29735512c2cc0946f1efcbf6b9b2b336a83e4.
Jan 13 20:32:56.925517 containerd[1476]: time="2025-01-13T20:32:56.925404687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b5z2s,Uid:5bac8aaf-c446-4bac-b7a7-c6fd8720d51e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac410b08d8251b7600475725e3d29735512c2cc0946f1efcbf6b9b2b336a83e4\""
Jan 13 20:32:56.929842 containerd[1476]: time="2025-01-13T20:32:56.929745294Z" level=info msg="CreateContainer within sandbox \"ac410b08d8251b7600475725e3d29735512c2cc0946f1efcbf6b9b2b336a83e4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:32:56.962680 containerd[1476]: time="2025-01-13T20:32:56.962541863Z" level=info msg="CreateContainer within sandbox \"ac410b08d8251b7600475725e3d29735512c2cc0946f1efcbf6b9b2b336a83e4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a86b67859f23137e2007d58a5ab04102eb44161dddc777cf377a9f75d615d661\""
Jan 13 20:32:56.964800 containerd[1476]: time="2025-01-13T20:32:56.964094628Z" level=info msg="StartContainer for \"a86b67859f23137e2007d58a5ab04102eb44161dddc777cf377a9f75d615d661\""
Jan 13 20:32:57.008034 systemd[1]: Started cri-containerd-a86b67859f23137e2007d58a5ab04102eb44161dddc777cf377a9f75d615d661.scope - libcontainer container a86b67859f23137e2007d58a5ab04102eb44161dddc777cf377a9f75d615d661.
Jan 13 20:32:57.051938 containerd[1476]: time="2025-01-13T20:32:57.051868549Z" level=info msg="StartContainer for \"a86b67859f23137e2007d58a5ab04102eb44161dddc777cf377a9f75d615d661\" returns successfully"
Jan 13 20:32:57.858105 systemd-networkd[1371]: cni0: Gained IPv6LL
Jan 13 20:32:57.924651 kubelet[2693]: I0113 20:32:57.923007 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-zp75l" podStartSLOduration=14.749451179 podStartE2EDuration="20.92297644s" podCreationTimestamp="2025-01-13 20:32:37 +0000 UTC" firstStartedPulling="2025-01-13 20:32:38.303530306 +0000 UTC m=+14.724629309" lastFinishedPulling="2025-01-13 20:32:44.477055567 +0000 UTC m=+20.898154570" observedRunningTime="2025-01-13 20:32:46.882769672 +0000 UTC m=+23.303868735" watchObservedRunningTime="2025-01-13 20:32:57.92297644 +0000 UTC m=+34.344075493"
Jan 13 20:32:57.926216 kubelet[2693]: I0113 20:32:57.925731 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-b5z2s" podStartSLOduration=19.92571195 podStartE2EDuration="19.92571195s" podCreationTimestamp="2025-01-13 20:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:57.922342105 +0000 UTC m=+34.343441158" watchObservedRunningTime="2025-01-13 20:32:57.92571195 +0000 UTC m=+34.346811003"
Jan 13 20:32:58.370110 systemd-networkd[1371]: vethd0f2d3a5: Gained IPv6LL
Jan 13 20:33:00.714024 containerd[1476]: time="2025-01-13T20:33:00.713893884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6vzsf,Uid:eae56801-a0c0-458a-8071-22aa5cf494b4,Namespace:kube-system,Attempt:0,}"
Jan 13 20:33:00.778590 systemd-networkd[1371]: vetha6a7fec3: Link UP
Jan 13 20:33:00.785347 kernel: cni0: port 2(vetha6a7fec3) entered blocking state
Jan 13 20:33:00.785469 kernel: cni0: port 2(vetha6a7fec3) entered disabled state
Jan 13 20:33:00.785558 kernel: vetha6a7fec3: entered allmulticast mode
Jan 13 20:33:00.790948 kernel: vetha6a7fec3: entered promiscuous mode
Jan 13 20:33:00.804664 kernel: cni0: port 2(vetha6a7fec3) entered blocking state
Jan 13 20:33:00.804781 kernel: cni0: port 2(vetha6a7fec3) entered forwarding state
Jan 13 20:33:00.803469 systemd-networkd[1371]: vetha6a7fec3: Gained carrier
Jan 13 20:33:00.808760 containerd[1476]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009c8e8), "name":"cbr0", "type":"bridge"}
Jan 13 20:33:00.808760 containerd[1476]: delegateAdd: netconf sent to delegate plugin:
Jan 13 20:33:00.842870 containerd[1476]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T20:33:00.842060245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:33:00.842870 containerd[1476]: time="2025-01-13T20:33:00.842118602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:33:00.842870 containerd[1476]: time="2025-01-13T20:33:00.842134120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:33:00.842870 containerd[1476]: time="2025-01-13T20:33:00.842215058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:33:00.872959 systemd[1]: Started cri-containerd-aed7815ab4e4bd3c30a19c3c85dc296330604c4b9a037dd22bd5ef49e57cf629.scope - libcontainer container aed7815ab4e4bd3c30a19c3c85dc296330604c4b9a037dd22bd5ef49e57cf629.
Jan 13 20:33:00.912573 containerd[1476]: time="2025-01-13T20:33:00.912523983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6vzsf,Uid:eae56801-a0c0-458a-8071-22aa5cf494b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"aed7815ab4e4bd3c30a19c3c85dc296330604c4b9a037dd22bd5ef49e57cf629\""
Jan 13 20:33:00.917381 containerd[1476]: time="2025-01-13T20:33:00.917344072Z" level=info msg="CreateContainer within sandbox \"aed7815ab4e4bd3c30a19c3c85dc296330604c4b9a037dd22bd5ef49e57cf629\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:33:00.935917 containerd[1476]: time="2025-01-13T20:33:00.935733213Z" level=info msg="CreateContainer within sandbox \"aed7815ab4e4bd3c30a19c3c85dc296330604c4b9a037dd22bd5ef49e57cf629\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"415806d6cfba72fb135d38c51e95c19896278c52eb0bcdd1149ce9dae4c25944\""
Jan 13 20:33:00.936614 containerd[1476]: time="2025-01-13T20:33:00.936560735Z" level=info msg="StartContainer for \"415806d6cfba72fb135d38c51e95c19896278c52eb0bcdd1149ce9dae4c25944\""
Jan 13 20:33:00.978960 systemd[1]: Started cri-containerd-415806d6cfba72fb135d38c51e95c19896278c52eb0bcdd1149ce9dae4c25944.scope - libcontainer container 415806d6cfba72fb135d38c51e95c19896278c52eb0bcdd1149ce9dae4c25944.
Jan 13 20:33:01.014104 containerd[1476]: time="2025-01-13T20:33:01.013942326Z" level=info msg="StartContainer for \"415806d6cfba72fb135d38c51e95c19896278c52eb0bcdd1149ce9dae4c25944\" returns successfully"
Jan 13 20:33:01.739398 systemd[1]: run-containerd-runc-k8s.io-aed7815ab4e4bd3c30a19c3c85dc296330604c4b9a037dd22bd5ef49e57cf629-runc.ys0prO.mount: Deactivated successfully.
Jan 13 20:33:01.944524 kubelet[2693]: I0113 20:33:01.941958 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6vzsf" podStartSLOduration=23.941776195 podStartE2EDuration="23.941776195s" podCreationTimestamp="2025-01-13 20:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:33:01.941301167 +0000 UTC m=+38.362400210" watchObservedRunningTime="2025-01-13 20:33:01.941776195 +0000 UTC m=+38.362875239"
Jan 13 20:33:02.786178 systemd-networkd[1371]: vetha6a7fec3: Gained IPv6LL
Jan 13 20:33:37.425511 systemd[1]: Started sshd@7-172.24.4.69:22-172.24.4.1:32940.service - OpenSSH per-connection server daemon (172.24.4.1:32940).
Jan 13 20:33:38.512224 sshd[3734]: Accepted publickey for core from 172.24.4.1 port 32940 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:33:38.516085 sshd-session[3734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:38.528190 systemd-logind[1457]: New session 10 of user core.
Jan 13 20:33:38.536137 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 20:33:39.185407 sshd[3757]: Connection closed by 172.24.4.1 port 32940
Jan 13 20:33:39.188125 sshd-session[3734]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:39.193517 systemd[1]: sshd@7-172.24.4.69:22-172.24.4.1:32940.service: Deactivated successfully.
Jan 13 20:33:39.197711 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 20:33:39.202427 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit.
Jan 13 20:33:39.205659 systemd-logind[1457]: Removed session 10.
Jan 13 20:33:44.215412 systemd[1]: Started sshd@8-172.24.4.69:22-172.24.4.1:44800.service - OpenSSH per-connection server daemon (172.24.4.1:44800).
Jan 13 20:33:45.585211 sshd[3792]: Accepted publickey for core from 172.24.4.1 port 44800 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:33:45.588369 sshd-session[3792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:45.602267 systemd-logind[1457]: New session 11 of user core.
Jan 13 20:33:45.610212 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 20:33:46.259331 sshd[3794]: Connection closed by 172.24.4.1 port 44800
Jan 13 20:33:46.260311 sshd-session[3792]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:46.267425 systemd[1]: sshd@8-172.24.4.69:22-172.24.4.1:44800.service: Deactivated successfully.
Jan 13 20:33:46.271169 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 20:33:46.272786 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit.
Jan 13 20:33:46.276064 systemd-logind[1457]: Removed session 11.
Jan 13 20:33:51.284066 systemd[1]: Started sshd@9-172.24.4.69:22-172.24.4.1:44816.service - OpenSSH per-connection server daemon (172.24.4.1:44816).
Jan 13 20:33:52.488044 sshd[3826]: Accepted publickey for core from 172.24.4.1 port 44816 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:33:52.490730 sshd-session[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:52.500642 systemd-logind[1457]: New session 12 of user core.
Jan 13 20:33:52.509135 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 20:33:53.216166 sshd[3832]: Connection closed by 172.24.4.1 port 44816
Jan 13 20:33:53.217900 sshd-session[3826]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:53.224787 systemd[1]: sshd@9-172.24.4.69:22-172.24.4.1:44816.service: Deactivated successfully.
Jan 13 20:33:53.226629 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 20:33:53.229260 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit.
Jan 13 20:33:53.236450 systemd[1]: Started sshd@10-172.24.4.69:22-172.24.4.1:44818.service - OpenSSH per-connection server daemon (172.24.4.1:44818).
Jan 13 20:33:53.240424 systemd-logind[1457]: Removed session 12.
Jan 13 20:33:54.620955 sshd[3861]: Accepted publickey for core from 172.24.4.1 port 44818 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:33:54.623690 sshd-session[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:54.633286 systemd-logind[1457]: New session 13 of user core.
Jan 13 20:33:54.641101 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 20:33:55.609339 sshd[3863]: Connection closed by 172.24.4.1 port 44818
Jan 13 20:33:55.610070 sshd-session[3861]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:55.621158 systemd[1]: sshd@10-172.24.4.69:22-172.24.4.1:44818.service: Deactivated successfully.
Jan 13 20:33:55.627337 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 20:33:55.631138 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit.
Jan 13 20:33:55.638372 systemd[1]: Started sshd@11-172.24.4.69:22-172.24.4.1:52410.service - OpenSSH per-connection server daemon (172.24.4.1:52410).
Jan 13 20:33:55.642020 systemd-logind[1457]: Removed session 13.
Jan 13 20:33:57.225326 sshd[3872]: Accepted publickey for core from 172.24.4.1 port 52410 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:33:57.228420 sshd-session[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:57.241971 systemd-logind[1457]: New session 14 of user core.
Jan 13 20:33:57.247716 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 20:33:58.074854 sshd[3875]: Connection closed by 172.24.4.1 port 52410
Jan 13 20:33:58.075984 sshd-session[3872]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:58.083359 systemd[1]: sshd@11-172.24.4.69:22-172.24.4.1:52410.service: Deactivated successfully.
Jan 13 20:33:58.088562 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 20:33:58.090693 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit.
Jan 13 20:33:58.093433 systemd-logind[1457]: Removed session 14.
Jan 13 20:34:03.095285 systemd[1]: Started sshd@12-172.24.4.69:22-172.24.4.1:52420.service - OpenSSH per-connection server daemon (172.24.4.1:52420).
Jan 13 20:34:04.437202 sshd[3927]: Accepted publickey for core from 172.24.4.1 port 52420 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:34:04.439928 sshd-session[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:04.450029 systemd-logind[1457]: New session 15 of user core.
Jan 13 20:34:04.459639 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 20:34:05.128039 sshd[3929]: Connection closed by 172.24.4.1 port 52420
Jan 13 20:34:05.128505 sshd-session[3927]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:05.137747 systemd[1]: sshd@12-172.24.4.69:22-172.24.4.1:52420.service: Deactivated successfully.
Jan 13 20:34:05.140397 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 20:34:05.142356 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit.
Jan 13 20:34:05.152363 systemd[1]: Started sshd@13-172.24.4.69:22-172.24.4.1:57770.service - OpenSSH per-connection server daemon (172.24.4.1:57770).
Jan 13 20:34:05.154379 systemd-logind[1457]: Removed session 15.
Jan 13 20:34:06.436969 sshd[3940]: Accepted publickey for core from 172.24.4.1 port 57770 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:34:06.440059 sshd-session[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:06.452923 systemd-logind[1457]: New session 16 of user core.
Jan 13 20:34:06.461218 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 20:34:07.324862 sshd[3942]: Connection closed by 172.24.4.1 port 57770
Jan 13 20:34:07.326729 sshd-session[3940]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:07.339133 systemd[1]: sshd@13-172.24.4.69:22-172.24.4.1:57770.service: Deactivated successfully.
Jan 13 20:34:07.342673 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 20:34:07.347166 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit.
Jan 13 20:34:07.357141 systemd[1]: Started sshd@14-172.24.4.69:22-172.24.4.1:57782.service - OpenSSH per-connection server daemon (172.24.4.1:57782).
Jan 13 20:34:07.360432 systemd-logind[1457]: Removed session 16.
Jan 13 20:34:08.817478 sshd[3950]: Accepted publickey for core from 172.24.4.1 port 57782 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:34:08.820372 sshd-session[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:08.835577 systemd-logind[1457]: New session 17 of user core.
Jan 13 20:34:08.841152 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 20:34:11.356304 sshd[3973]: Connection closed by 172.24.4.1 port 57782
Jan 13 20:34:11.358055 sshd-session[3950]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:11.370348 systemd[1]: sshd@14-172.24.4.69:22-172.24.4.1:57782.service: Deactivated successfully.
Jan 13 20:34:11.375058 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 20:34:11.379528 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit.
Jan 13 20:34:11.389432 systemd[1]: Started sshd@15-172.24.4.69:22-172.24.4.1:57788.service - OpenSSH per-connection server daemon (172.24.4.1:57788).
Jan 13 20:34:11.391634 systemd-logind[1457]: Removed session 17.
Jan 13 20:34:12.645887 sshd[3992]: Accepted publickey for core from 172.24.4.1 port 57788 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:34:12.648922 sshd-session[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:12.660711 systemd-logind[1457]: New session 18 of user core.
Jan 13 20:34:12.665209 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 20:34:13.925430 sshd[4000]: Connection closed by 172.24.4.1 port 57788
Jan 13 20:34:13.927885 sshd-session[3992]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:13.939033 systemd[1]: sshd@15-172.24.4.69:22-172.24.4.1:57788.service: Deactivated successfully.
Jan 13 20:34:13.943158 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 20:34:13.947241 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit.
Jan 13 20:34:13.954518 systemd[1]: Started sshd@16-172.24.4.69:22-172.24.4.1:39900.service - OpenSSH per-connection server daemon (172.24.4.1:39900).
Jan 13 20:34:13.958476 systemd-logind[1457]: Removed session 18.
Jan 13 20:34:15.481074 sshd[4024]: Accepted publickey for core from 172.24.4.1 port 39900 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:34:15.484412 sshd-session[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:15.495944 systemd-logind[1457]: New session 19 of user core.
Jan 13 20:34:15.510184 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 20:34:16.181388 sshd[4026]: Connection closed by 172.24.4.1 port 39900
Jan 13 20:34:16.183216 sshd-session[4024]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:16.189859 systemd[1]: sshd@16-172.24.4.69:22-172.24.4.1:39900.service: Deactivated successfully.
Jan 13 20:34:16.193543 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 20:34:16.195698 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit.
Jan 13 20:34:16.198713 systemd-logind[1457]: Removed session 19.
Jan 13 20:34:21.200344 systemd[1]: Started sshd@17-172.24.4.69:22-172.24.4.1:39912.service - OpenSSH per-connection server daemon (172.24.4.1:39912).
Jan 13 20:34:22.476413 sshd[4061]: Accepted publickey for core from 172.24.4.1 port 39912 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:34:22.479162 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:22.496806 systemd-logind[1457]: New session 20 of user core.
Jan 13 20:34:22.502354 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 20:34:23.216939 sshd[4063]: Connection closed by 172.24.4.1 port 39912
Jan 13 20:34:23.218089 sshd-session[4061]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:23.223863 systemd[1]: sshd@17-172.24.4.69:22-172.24.4.1:39912.service: Deactivated successfully.
Jan 13 20:34:23.227596 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 20:34:23.244444 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit.
Jan 13 20:34:23.247207 systemd-logind[1457]: Removed session 20.
Jan 13 20:34:28.244348 systemd[1]: Started sshd@18-172.24.4.69:22-172.24.4.1:54594.service - OpenSSH per-connection server daemon (172.24.4.1:54594).
Jan 13 20:34:29.476945 sshd[4118]: Accepted publickey for core from 172.24.4.1 port 54594 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:34:29.480032 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:29.490999 systemd-logind[1457]: New session 21 of user core.
Jan 13 20:34:29.498251 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 20:34:30.110601 sshd[4120]: Connection closed by 172.24.4.1 port 54594
Jan 13 20:34:30.111779 sshd-session[4118]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:30.119371 systemd[1]: sshd@18-172.24.4.69:22-172.24.4.1:54594.service: Deactivated successfully.
Jan 13 20:34:30.125859 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 20:34:30.128002 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit.
Jan 13 20:34:30.130933 systemd-logind[1457]: Removed session 21.
Jan 13 20:34:35.142349 systemd[1]: Started sshd@19-172.24.4.69:22-172.24.4.1:47100.service - OpenSSH per-connection server daemon (172.24.4.1:47100).
Jan 13 20:34:36.426341 sshd[4152]: Accepted publickey for core from 172.24.4.1 port 47100 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:34:36.429521 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:36.439761 systemd-logind[1457]: New session 22 of user core.
Jan 13 20:34:36.448152 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 20:34:37.217045 sshd[4154]: Connection closed by 172.24.4.1 port 47100
Jan 13 20:34:37.218344 sshd-session[4152]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:37.225871 systemd[1]: sshd@19-172.24.4.69:22-172.24.4.1:47100.service: Deactivated successfully.
Jan 13 20:34:37.231112 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 20:34:37.234209 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit.
Jan 13 20:34:37.236910 systemd-logind[1457]: Removed session 22.