Jan 13 21:59:18.077585 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:59:18.077612 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:59:18.077622 kernel: BIOS-provided physical RAM map:
Jan 13 21:59:18.077630 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:59:18.077638 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:59:18.077648 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:59:18.077658 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 13 21:59:18.077666 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 13 21:59:18.077692 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 21:59:18.077700 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:59:18.077721 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 13 21:59:18.077730 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 21:59:18.077738 kernel: NX (Execute Disable) protection: active
Jan 13 21:59:18.077746 kernel: APIC: Static calls initialized
Jan 13 21:59:18.077758 kernel: SMBIOS 3.0.0 present.
Jan 13 21:59:18.077767 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 13 21:59:18.077775 kernel: Hypervisor detected: KVM
Jan 13 21:59:18.077783 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:59:18.077791 kernel: kvm-clock: using sched offset of 3453244404 cycles
Jan 13 21:59:18.077802 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:59:18.077811 kernel: tsc: Detected 1996.249 MHz processor
Jan 13 21:59:18.077819 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:59:18.077828 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:59:18.077837 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 13 21:59:18.077845 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:59:18.077853 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:59:18.077862 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 13 21:59:18.077870 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:59:18.077880 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 13 21:59:18.077889 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:59:18.077897 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:59:18.077905 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:59:18.077914 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 13 21:59:18.077922 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:59:18.077930 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:59:18.077938 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 13 21:59:18.077947 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 13 21:59:18.077957 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 13 21:59:18.077965 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 13 21:59:18.077974 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 13 21:59:18.077985 kernel: No NUMA configuration found
Jan 13 21:59:18.077994 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 13 21:59:18.078003 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff]
Jan 13 21:59:18.078013 kernel: Zone ranges:
Jan 13 21:59:18.078022 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:59:18.078031 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 21:59:18.078039 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 21:59:18.078048 kernel: Movable zone start for each node
Jan 13 21:59:18.078056 kernel: Early memory node ranges
Jan 13 21:59:18.078065 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:59:18.078074 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 13 21:59:18.078084 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 21:59:18.078093 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 13 21:59:18.078102 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:59:18.078110 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:59:18.078119 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 13 21:59:18.078128 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:59:18.078137 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:59:18.078145 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:59:18.078154 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:59:18.078164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:59:18.078173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:59:18.078182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:59:18.078190 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:59:18.078199 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:59:18.078208 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 21:59:18.078217 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:59:18.078225 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 13 21:59:18.078234 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:59:18.078245 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:59:18.078254 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 21:59:18.078262 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 21:59:18.078271 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 21:59:18.078279 kernel: pcpu-alloc: [0] 0 1
Jan 13 21:59:18.078288 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 13 21:59:18.078298 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:59:18.078308 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:59:18.078318 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:59:18.078327 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:59:18.078336 kernel: Fallback order for Node 0: 0
Jan 13 21:59:18.078345 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 13 21:59:18.078353 kernel: Policy zone: Normal
Jan 13 21:59:18.078362 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:59:18.078370 kernel: software IO TLB: area num 2.
Jan 13 21:59:18.078380 kernel: Memory: 3966216K/4193772K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 227296K reserved, 0K cma-reserved)
Jan 13 21:59:18.078388 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:59:18.078399 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:59:18.078407 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:59:18.078416 kernel: Dynamic Preempt: voluntary
Jan 13 21:59:18.078425 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:59:18.078434 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:59:18.078443 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:59:18.078452 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:59:18.078460 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:59:18.078469 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:59:18.078480 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:59:18.078489 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:59:18.078497 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 21:59:18.078506 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:59:18.078515 kernel: Console: colour VGA+ 80x25
Jan 13 21:59:18.078523 kernel: printk: console [tty0] enabled
Jan 13 21:59:18.078532 kernel: printk: console [ttyS0] enabled
Jan 13 21:59:18.078541 kernel: ACPI: Core revision 20230628
Jan 13 21:59:18.078550 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:59:18.078560 kernel: x2apic enabled
Jan 13 21:59:18.078569 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:59:18.078578 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:59:18.078586 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:59:18.078595 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 13 21:59:18.078604 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 13 21:59:18.078613 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 13 21:59:18.078621 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:59:18.078630 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:59:18.078641 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:59:18.078649 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:59:18.078658 kernel: Speculative Store Bypass: Vulnerable
Jan 13 21:59:18.078666 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 13 21:59:18.079723 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:59:18.079743 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:59:18.079753 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:59:18.079762 kernel: landlock: Up and running.
Jan 13 21:59:18.079770 kernel: SELinux: Initializing.
Jan 13 21:59:18.079779 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:59:18.079788 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:59:18.079797 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 13 21:59:18.079808 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:59:18.079817 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:59:18.079826 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:59:18.079834 kernel: Performance Events: AMD PMU driver.
Jan 13 21:59:18.079843 kernel: ... version: 0
Jan 13 21:59:18.079853 kernel: ... bit width: 48
Jan 13 21:59:18.079862 kernel: ... generic registers: 4
Jan 13 21:59:18.079870 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:59:18.079879 kernel: ... max period: 00007fffffffffff
Jan 13 21:59:18.079887 kernel: ... fixed-purpose events: 0
Jan 13 21:59:18.079896 kernel: ... event mask: 000000000000000f
Jan 13 21:59:18.079905 kernel: signal: max sigframe size: 1440
Jan 13 21:59:18.079913 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:59:18.079923 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:59:18.079933 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:59:18.079941 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:59:18.079950 kernel: .... node #0, CPUs: #1
Jan 13 21:59:18.079959 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:59:18.079967 kernel: smpboot: Max logical packages: 2
Jan 13 21:59:18.079976 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 13 21:59:18.079985 kernel: devtmpfs: initialized
Jan 13 21:59:18.079993 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:59:18.080002 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:59:18.080012 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:59:18.080021 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:59:18.080030 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:59:18.080038 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:59:18.080047 kernel: audit: type=2000 audit(1736805556.911:1): state=initialized audit_enabled=0 res=1
Jan 13 21:59:18.080055 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:59:18.080064 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:59:18.080073 kernel: cpuidle: using governor menu
Jan 13 21:59:18.080081 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:59:18.080092 kernel: dca service started, version 1.12.1
Jan 13 21:59:18.080100 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:59:18.080109 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:59:18.080118 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:59:18.080126 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:59:18.080135 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:59:18.080144 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:59:18.080152 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:59:18.080161 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:59:18.080171 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:59:18.080180 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:59:18.080188 kernel: ACPI: Interpreter enabled
Jan 13 21:59:18.080197 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:59:18.080206 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:59:18.080214 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:59:18.080223 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:59:18.080232 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 13 21:59:18.080240 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:59:18.080376 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:59:18.080477 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 21:59:18.080570 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 21:59:18.080583 kernel: acpiphp: Slot [3] registered
Jan 13 21:59:18.080592 kernel: acpiphp: Slot [4] registered
Jan 13 21:59:18.080601 kernel: acpiphp: Slot [5] registered
Jan 13 21:59:18.080609 kernel: acpiphp: Slot [6] registered
Jan 13 21:59:18.080621 kernel: acpiphp: Slot [7] registered
Jan 13 21:59:18.080630 kernel: acpiphp: Slot [8] registered
Jan 13 21:59:18.080638 kernel: acpiphp: Slot [9] registered
Jan 13 21:59:18.080646 kernel: acpiphp: Slot [10] registered
Jan 13 21:59:18.080655 kernel: acpiphp: Slot [11] registered
Jan 13 21:59:18.080663 kernel: acpiphp: Slot [12] registered
Jan 13 21:59:18.081721 kernel: acpiphp: Slot [13] registered
Jan 13 21:59:18.081734 kernel: acpiphp: Slot [14] registered
Jan 13 21:59:18.081743 kernel: acpiphp: Slot [15] registered
Jan 13 21:59:18.081752 kernel: acpiphp: Slot [16] registered
Jan 13 21:59:18.081764 kernel: acpiphp: Slot [17] registered
Jan 13 21:59:18.081772 kernel: acpiphp: Slot [18] registered
Jan 13 21:59:18.081781 kernel: acpiphp: Slot [19] registered
Jan 13 21:59:18.081789 kernel: acpiphp: Slot [20] registered
Jan 13 21:59:18.081797 kernel: acpiphp: Slot [21] registered
Jan 13 21:59:18.081806 kernel: acpiphp: Slot [22] registered
Jan 13 21:59:18.081814 kernel: acpiphp: Slot [23] registered
Jan 13 21:59:18.081823 kernel: acpiphp: Slot [24] registered
Jan 13 21:59:18.081831 kernel: acpiphp: Slot [25] registered
Jan 13 21:59:18.081841 kernel: acpiphp: Slot [26] registered
Jan 13 21:59:18.081850 kernel: acpiphp: Slot [27] registered
Jan 13 21:59:18.081858 kernel: acpiphp: Slot [28] registered
Jan 13 21:59:18.081867 kernel: acpiphp: Slot [29] registered
Jan 13 21:59:18.081875 kernel: acpiphp: Slot [30] registered
Jan 13 21:59:18.081884 kernel: acpiphp: Slot [31] registered
Jan 13 21:59:18.081892 kernel: PCI host bridge to bus 0000:00
Jan 13 21:59:18.081997 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:59:18.082083 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:59:18.082171 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:59:18.082252 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:59:18.082333 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 13 21:59:18.082413 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:59:18.082522 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 21:59:18.082624 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 21:59:18.082765 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 13 21:59:18.082861 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 13 21:59:18.082953 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 13 21:59:18.083048 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 13 21:59:18.083139 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 13 21:59:18.083229 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 13 21:59:18.083327 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 13 21:59:18.083425 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 13 21:59:18.083516 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 13 21:59:18.083614 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 13 21:59:18.084766 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 13 21:59:18.084865 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 13 21:59:18.084958 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 13 21:59:18.085104 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 13 21:59:18.085199 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:59:18.085301 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:59:18.085395 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 13 21:59:18.085488 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 13 21:59:18.085580 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 13 21:59:18.085712 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 13 21:59:18.085838 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:59:18.085930 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 21:59:18.086020 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 13 21:59:18.086111 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 13 21:59:18.086209 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 13 21:59:18.086301 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 13 21:59:18.086392 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 13 21:59:18.086499 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:59:18.086591 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 13 21:59:18.090727 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 13 21:59:18.090840 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 13 21:59:18.090855 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:59:18.090864 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:59:18.090874 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:59:18.090883 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:59:18.090897 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 21:59:18.090906 kernel: iommu: Default domain type: Translated
Jan 13 21:59:18.090917 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:59:18.090926 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:59:18.090935 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:59:18.090943 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:59:18.090952 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 13 21:59:18.091042 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 13 21:59:18.091132 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 13 21:59:18.091226 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:59:18.091239 kernel: vgaarb: loaded
Jan 13 21:59:18.091248 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:59:18.091257 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:59:18.091265 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:59:18.091274 kernel: pnp: PnP ACPI init
Jan 13 21:59:18.091362 kernel: pnp 00:03: [dma 2]
Jan 13 21:59:18.091376 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 21:59:18.091388 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:59:18.091397 kernel: NET: Registered PF_INET protocol family
Jan 13 21:59:18.091406 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:59:18.091414 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:59:18.091423 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:59:18.091432 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:59:18.091441 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:59:18.091449 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:59:18.091458 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:59:18.091469 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:59:18.091477 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:59:18.091486 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:59:18.091566 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:59:18.091644 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:59:18.091742 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:59:18.091821 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:59:18.091899 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 13 21:59:18.091990 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 13 21:59:18.092087 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 21:59:18.092100 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:59:18.092109 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 21:59:18.092118 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 13 21:59:18.092127 kernel: Initialise system trusted keyrings
Jan 13 21:59:18.092135 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:59:18.092144 kernel: Key type asymmetric registered
Jan 13 21:59:18.092153 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:59:18.092164 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:59:18.092173 kernel: io scheduler mq-deadline registered
Jan 13 21:59:18.092181 kernel: io scheduler kyber registered
Jan 13 21:59:18.092190 kernel: io scheduler bfq registered
Jan 13 21:59:18.092198 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:59:18.092208 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 13 21:59:18.092216 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 13 21:59:18.092225 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 21:59:18.092234 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 13 21:59:18.092244 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:59:18.092253 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:59:18.092261 kernel: random: crng init done
Jan 13 21:59:18.092270 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:59:18.092279 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:59:18.092287 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:59:18.092376 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:59:18.092390 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:59:18.092475 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:59:18.092556 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:59:17 UTC (1736805557)
Jan 13 21:59:18.092638 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 13 21:59:18.092651 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:59:18.092660 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:59:18.092669 kernel: Segment Routing with IPv6
Jan 13 21:59:18.092701 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:59:18.092709 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:59:18.092718 kernel: Key type dns_resolver registered
Jan 13 21:59:18.092730 kernel: IPI shorthand broadcast: enabled
Jan 13 21:59:18.092739 kernel: sched_clock: Marking stable (992007714, 166970010)->(1194703346, -35725622)
Jan 13 21:59:18.092747 kernel: registered taskstats version 1
Jan 13 21:59:18.092756 kernel: Loading compiled-in X.509 certificates
Jan 13 21:59:18.092765 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:59:18.092774 kernel: Key type .fscrypt registered
Jan 13 21:59:18.092782 kernel: Key type fscrypt-provisioning registered
Jan 13 21:59:18.092791 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:59:18.092801 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:59:18.092810 kernel: ima: No architecture policies found
Jan 13 21:59:18.092818 kernel: clk: Disabling unused clocks
Jan 13 21:59:18.092827 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:59:18.092835 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:59:18.092844 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:59:18.092853 kernel: Run /init as init process
Jan 13 21:59:18.092861 kernel: with arguments:
Jan 13 21:59:18.092870 kernel: /init
Jan 13 21:59:18.092878 kernel: with environment:
Jan 13 21:59:18.092888 kernel: HOME=/
Jan 13 21:59:18.092897 kernel: TERM=linux
Jan 13 21:59:18.092905 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:59:18.092916 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:59:18.092928 systemd[1]: Detected virtualization kvm.
Jan 13 21:59:18.092938 systemd[1]: Detected architecture x86-64.
Jan 13 21:59:18.092947 systemd[1]: Running in initrd.
Jan 13 21:59:18.092957 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:59:18.092966 systemd[1]: Hostname set to .
Jan 13 21:59:18.092976 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:59:18.092985 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:59:18.092994 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:59:18.093004 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:59:18.093014 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:59:18.093031 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:59:18.093062 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:59:18.093072 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:59:18.093083 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:59:18.093093 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:59:18.093105 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:59:18.093115 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:59:18.093124 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:59:18.093134 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:59:18.093143 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:59:18.093153 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:59:18.093162 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:59:18.093172 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:59:18.093181 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:59:18.093193 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:59:18.093202 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:59:18.093212 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:59:18.093221 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:59:18.093231 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:59:18.093240 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:59:18.093250 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:59:18.093259 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:59:18.093270 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:59:18.093280 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:59:18.093289 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:59:18.093299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:59:18.093309 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:59:18.093318 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:59:18.093346 systemd-journald[184]: Collecting audit messages is disabled.
Jan 13 21:59:18.093370 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:59:18.093384 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:59:18.093394 systemd-journald[184]: Journal started
Jan 13 21:59:18.093416 systemd-journald[184]: Runtime Journal (/run/log/journal/bafbbb0b43b24983886138b061564a77) is 8.0M, max 78.3M, 70.3M free.
Jan 13 21:59:18.108707 systemd-modules-load[185]: Inserted module 'overlay'
Jan 13 21:59:18.151059 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:59:18.151080 kernel: Bridge firewalling registered
Jan 13 21:59:18.151092 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:59:18.138549 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 13 21:59:18.152625 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:59:18.153368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:59:18.154641 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:59:18.159795 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:59:18.162807 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:59:18.163877 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:59:18.168743 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:59:18.181257 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:59:18.182721 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:59:18.188210 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:59:18.189005 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:59:18.193796 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:59:18.195789 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:59:18.209968 dracut-cmdline[216]: dracut-dracut-053
Jan 13 21:59:18.214297 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:59:18.230616 systemd-resolved[218]: Positive Trust Anchors:
Jan 13 21:59:18.230633 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:59:18.231152 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:59:18.234266 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jan 13 21:59:18.235146 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:59:18.237630 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:59:18.294766 kernel: SCSI subsystem initialized
Jan 13 21:59:18.305731 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:59:18.318050 kernel: iscsi: registered transport (tcp)
Jan 13 21:59:18.341201 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:59:18.341270 kernel: QLogic iSCSI HBA Driver
Jan 13 21:59:18.394351 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:59:18.399956 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:59:18.449908 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:59:18.450013 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:59:18.450731 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:59:18.496732 kernel: raid6: sse2x4 gen() 12988 MB/s
Jan 13 21:59:18.514730 kernel: raid6: sse2x2 gen() 15112 MB/s
Jan 13 21:59:18.533029 kernel: raid6: sse2x1 gen() 10151 MB/s
Jan 13 21:59:18.533116 kernel: raid6: using algorithm sse2x2 gen() 15112 MB/s
Jan 13 21:59:18.552150 kernel: raid6: .... xor() 9381 MB/s, rmw enabled
Jan 13 21:59:18.552210 kernel: raid6: using ssse3x2 recovery algorithm
Jan 13 21:59:18.574746 kernel: xor: measuring software checksum speed
Jan 13 21:59:18.574808 kernel: prefetch64-sse : 18483 MB/sec
Jan 13 21:59:18.576034 kernel: generic_sse : 16799 MB/sec
Jan 13 21:59:18.578829 kernel: xor: using function: prefetch64-sse (18483 MB/sec)
Jan 13 21:59:18.757738 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:59:18.772686 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:59:18.778986 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:59:18.791541 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jan 13 21:59:18.795826 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:59:18.807991 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:59:18.827076 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Jan 13 21:59:18.865951 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:59:18.875003 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:59:18.938189 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:59:18.943854 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:59:18.976731 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:59:18.978162 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:59:18.981327 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:59:18.984007 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:59:18.992965 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:59:19.021104 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:59:19.034783 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 13 21:59:19.071840 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 13 21:59:19.071955 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:59:19.071970 kernel: GPT:17805311 != 20971519
Jan 13 21:59:19.071981 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:59:19.071993 kernel: GPT:17805311 != 20971519
Jan 13 21:59:19.072009 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:59:19.072020 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:59:19.072031 kernel: libata version 3.00 loaded.
Jan 13 21:59:19.049319 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:59:19.049446 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:59:19.073876 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 13 21:59:19.080946 kernel: scsi host0: ata_piix
Jan 13 21:59:19.081109 kernel: scsi host1: ata_piix
Jan 13 21:59:19.081225 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 13 21:59:19.081244 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 13 21:59:19.050128 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:59:19.050706 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:59:19.050826 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:59:19.051349 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:59:19.067991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:59:19.126303 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462)
Jan 13 21:59:19.136144 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:59:19.145370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:59:19.151814 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:59:19.162171 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (456)
Jan 13 21:59:19.165485 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:59:19.171119 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:59:19.171728 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:59:19.181802 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:59:19.271185 disk-uuid[515]: Primary Header is updated.
Jan 13 21:59:19.271185 disk-uuid[515]: Secondary Entries is updated.
Jan 13 21:59:19.271185 disk-uuid[515]: Secondary Header is updated.
Jan 13 21:59:19.294717 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:59:19.295640 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:59:19.313565 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:59:20.293780 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:59:20.294356 disk-uuid[516]: The operation has completed successfully.
Jan 13 21:59:20.373242 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:59:20.374466 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:59:20.397788 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:59:20.414939 sh[528]: Success
Jan 13 21:59:20.446297 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 13 21:59:20.510071 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:59:20.510864 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:59:20.512986 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:59:20.533794 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:59:20.533827 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:59:20.533840 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:59:20.537915 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:59:20.537936 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:59:20.556322 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:59:20.558599 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:59:20.566027 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:59:20.576930 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:59:20.600044 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:59:20.600119 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:59:20.604260 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:59:20.616766 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:59:20.634090 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:59:20.640755 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:59:20.652566 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:59:20.661447 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:59:20.713229 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:59:20.719853 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:59:20.742295 systemd-networkd[710]: lo: Link UP
Jan 13 21:59:20.742305 systemd-networkd[710]: lo: Gained carrier
Jan 13 21:59:20.743715 systemd-networkd[710]: Enumeration completed
Jan 13 21:59:20.743787 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:59:20.744783 systemd[1]: Reached target network.target - Network.
Jan 13 21:59:20.745420 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:59:20.745424 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:59:20.746871 systemd-networkd[710]: eth0: Link UP
Jan 13 21:59:20.746874 systemd-networkd[710]: eth0: Gained carrier
Jan 13 21:59:20.746882 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:59:20.767119 systemd-networkd[710]: eth0: DHCPv4 address 172.24.4.53/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 21:59:20.799272 ignition[655]: Ignition 2.19.0
Jan 13 21:59:20.799284 ignition[655]: Stage: fetch-offline
Jan 13 21:59:20.800909 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:59:20.799321 ignition[655]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:59:20.799332 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:59:20.799425 ignition[655]: parsed url from cmdline: ""
Jan 13 21:59:20.799429 ignition[655]: no config URL provided
Jan 13 21:59:20.799435 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:59:20.799444 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:59:20.799449 ignition[655]: failed to fetch config: resource requires networking
Jan 13 21:59:20.799634 ignition[655]: Ignition finished successfully
Jan 13 21:59:20.807859 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:59:20.821206 ignition[720]: Ignition 2.19.0
Jan 13 21:59:20.821217 ignition[720]: Stage: fetch
Jan 13 21:59:20.821388 ignition[720]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:59:20.821400 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:59:20.821494 ignition[720]: parsed url from cmdline: ""
Jan 13 21:59:20.821498 ignition[720]: no config URL provided
Jan 13 21:59:20.821504 ignition[720]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:59:20.821512 ignition[720]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:59:20.821624 ignition[720]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 13 21:59:20.821756 ignition[720]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 13 21:59:20.821790 ignition[720]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 13 21:59:21.107474 ignition[720]: GET result: OK
Jan 13 21:59:21.107622 ignition[720]: parsing config with SHA512: 30acffe171a1d64c1ec0cbab1fc1f37e146a4a3d64d15a49485ab71a151ef027da35182048869916b8de1206b18804372ea130fe06ff9a0f407fef683d232353
Jan 13 21:59:21.116924 unknown[720]: fetched base config from "system"
Jan 13 21:59:21.116943 unknown[720]: fetched base config from "system"
Jan 13 21:59:21.117782 ignition[720]: fetch: fetch complete
Jan 13 21:59:21.116956 unknown[720]: fetched user config from "openstack"
Jan 13 21:59:21.117792 ignition[720]: fetch: fetch passed
Jan 13 21:59:21.121974 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:59:21.117871 ignition[720]: Ignition finished successfully
Jan 13 21:59:21.131010 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:59:21.164913 ignition[726]: Ignition 2.19.0
Jan 13 21:59:21.164939 ignition[726]: Stage: kargs
Jan 13 21:59:21.165375 ignition[726]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:59:21.165402 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:59:21.169723 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:59:21.167617 ignition[726]: kargs: kargs passed
Jan 13 21:59:21.167766 ignition[726]: Ignition finished successfully
Jan 13 21:59:21.179023 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:59:21.220191 ignition[732]: Ignition 2.19.0
Jan 13 21:59:21.221767 ignition[732]: Stage: disks
Jan 13 21:59:21.222174 ignition[732]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:59:21.222201 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:59:21.229152 ignition[732]: disks: disks passed
Jan 13 21:59:21.230524 ignition[732]: Ignition finished successfully
Jan 13 21:59:21.233648 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:59:21.235989 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:59:21.238092 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:59:21.241138 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:59:21.244127 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:59:21.246787 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:59:21.254969 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:59:21.297002 systemd-fsck[740]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 21:59:21.310971 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:59:21.317866 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:59:21.481722 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:59:21.481936 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:59:21.483485 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:59:21.495780 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:59:21.498763 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:59:21.499449 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:59:21.502910 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 13 21:59:21.506206 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:59:21.506237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:59:21.511825 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:59:21.517733 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (748)
Jan 13 21:59:21.523105 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:59:21.523130 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:59:21.523143 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:59:21.523355 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:59:21.543707 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:59:21.548065 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:59:21.660565 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:59:21.671664 initrd-setup-root[785]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:59:21.681159 initrd-setup-root[792]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:59:21.687986 initrd-setup-root[799]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:59:21.834871 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:59:21.840859 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:59:21.844932 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:59:21.868077 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:59:21.875818 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:59:21.907414 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:59:21.915761 ignition[867]: INFO : Ignition 2.19.0
Jan 13 21:59:21.915761 ignition[867]: INFO : Stage: mount
Jan 13 21:59:21.918005 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:59:21.918005 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:59:21.920431 ignition[867]: INFO : mount: mount passed
Jan 13 21:59:21.921118 ignition[867]: INFO : Ignition finished successfully
Jan 13 21:59:21.922972 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:59:22.328609 systemd-networkd[710]: eth0: Gained IPv6LL
Jan 13 21:59:28.743990 coreos-metadata[750]: Jan 13 21:59:28.743 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 21:59:28.768863 coreos-metadata[750]: Jan 13 21:59:28.768 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 21:59:28.784699 coreos-metadata[750]: Jan 13 21:59:28.784 INFO Fetch successful
Jan 13 21:59:28.786290 coreos-metadata[750]: Jan 13 21:59:28.784 INFO wrote hostname ci-4081-3-0-2-f00902ecfa.novalocal to /sysroot/etc/hostname
Jan 13 21:59:28.787712 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 13 21:59:28.787855 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 13 21:59:28.800755 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:59:28.806508 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:59:28.834742 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (883)
Jan 13 21:59:28.840831 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:59:28.840898 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:59:28.843065 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:59:28.850780 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:59:28.854479 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:59:28.889324 ignition[901]: INFO : Ignition 2.19.0
Jan 13 21:59:28.889324 ignition[901]: INFO : Stage: files
Jan 13 21:59:28.891030 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:59:28.891030 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:59:28.892866 ignition[901]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:59:28.894227 ignition[901]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:59:28.894227 ignition[901]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:59:28.899706 ignition[901]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:59:28.900815 ignition[901]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:59:28.900815 ignition[901]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:59:28.900516 unknown[901]: wrote ssh authorized keys file for user: core
Jan 13 21:59:28.903879 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:59:28.903879 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 21:59:28.962118 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:59:29.388083 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:59:29.388083 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:59:29.392772 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:59:29.392772 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:59:29.392772 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:59:29.392772 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:59:29.392772 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:59:29.392772 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:59:29.392772 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:59:29.392772 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:59:29.392772 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:59:29.392772 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:59:29.392772 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:59:29.392772 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:59:29.392772 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 13 21:59:29.922482 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 21:59:31.638466 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:59:31.638466 ignition[901]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 21:59:31.642735 ignition[901]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:59:31.642735 ignition[901]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:59:31.642735 ignition[901]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 21:59:31.642735 ignition[901]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:59:31.642735 ignition[901]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:59:31.642735 ignition[901]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:59:31.642735 ignition[901]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:59:31.642735 ignition[901]: INFO : files: files passed
Jan 13 21:59:31.642735 ignition[901]: INFO : Ignition finished successfully
Jan 13 21:59:31.642746 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:59:31.655028 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:59:31.660447 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:59:31.663120 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:59:31.663731 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:59:31.687577 initrd-setup-root-after-ignition[929]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:59:31.687577 initrd-setup-root-after-ignition[929]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:59:31.689446 initrd-setup-root-after-ignition[933]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:59:31.691389 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:59:31.694169 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:59:31.700952 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:59:31.731990 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:59:31.732199 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:59:31.735364 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:59:31.737519 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:59:31.746891 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:59:31.753909 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:59:31.783079 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:59:31.793992 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:59:31.818168 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:59:31.819940 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:59:31.823022 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:59:31.825913 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:59:31.826201 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:59:31.829316 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:59:31.831215 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:59:31.834114 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:59:31.836555 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:59:31.839122 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:59:31.842045 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:59:31.844962 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:59:31.848168 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:59:31.851011 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:59:31.853961 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:59:31.856594 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:59:31.856925 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:59:31.860039 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:59:31.862003 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:59:31.864355 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:59:31.865768 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:59:31.867444 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:59:31.867773 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:59:31.871483 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:59:31.871946 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:59:31.874951 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:59:31.875214 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:59:31.886239 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:59:31.896775 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:59:31.897407 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:59:31.897599 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:59:31.902881 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:59:31.904787 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:59:31.913227 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:59:31.914036 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:59:31.917997 ignition[953]: INFO : Ignition 2.19.0
Jan 13 21:59:31.917997 ignition[953]: INFO : Stage: umount
Jan 13 21:59:31.920346 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:59:31.920346 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:59:31.920346 ignition[953]: INFO : umount: umount passed
Jan 13 21:59:31.920346 ignition[953]: INFO : Ignition finished successfully
Jan 13 21:59:31.920994 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:59:31.921097 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:59:31.922348 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:59:31.922423 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:59:31.923795 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:59:31.923840 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:59:31.924483 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 21:59:31.924524 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 21:59:31.927079 systemd[1]: Stopped target network.target - Network.
Jan 13 21:59:31.928249 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:59:31.928299 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:59:31.930779 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:59:31.931280 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:59:31.934723 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:59:31.935312 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:59:31.936020 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:59:31.936523 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:59:31.936559 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:59:31.938799 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:59:31.938831 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:59:31.939581 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:59:31.939621 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:59:31.940138 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:59:31.940179 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:59:31.941859 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:59:31.943205 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:59:31.945204 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:59:31.947717 systemd-networkd[710]: eth0: DHCPv6 lease lost
Jan 13 21:59:31.948948 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:59:31.949080 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:59:31.951480 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:59:31.951568 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:59:31.953775 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:59:31.953828 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:59:31.960796 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:59:31.961348 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:59:31.961403 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:59:31.963399 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:59:31.963441 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:59:31.964569 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:59:31.964612 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:59:31.966537 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:59:31.966581 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:59:31.967782 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:59:31.983782 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:59:31.983892 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:59:31.985057 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:59:31.985175 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:59:31.986489 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:59:31.986541 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:59:31.987903 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:59:31.987935 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:59:31.989075 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:59:31.989117 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:59:31.990660 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:59:31.990717 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:59:31.991811 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:59:31.991852 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:59:32.001808 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:59:32.004017 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:59:32.004075 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:59:32.005330 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 21:59:32.005371 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:59:32.007022 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:59:32.007063 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:59:32.007588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:59:32.007626 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:59:32.008404 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:59:32.008478 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:59:32.195064 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:59:32.195303 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:59:32.199037 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:59:32.200512 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:59:32.200635 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:59:32.215976 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:59:32.236615 systemd[1]: Switching root.
Jan 13 21:59:32.285964 systemd-journald[184]: Journal stopped
Jan 13 21:59:34.010611 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:59:34.013067 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:59:34.013111 kernel: SELinux: policy capability open_perms=1
Jan 13 21:59:34.013124 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:59:34.013136 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:59:34.013148 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:59:34.013161 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:59:34.013172 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:59:34.013183 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:59:34.013195 kernel: audit: type=1403 audit(1736805572.700:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:59:34.013211 systemd[1]: Successfully loaded SELinux policy in 74.870ms.
Jan 13 21:59:34.013229 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.005ms.
Jan 13 21:59:34.013244 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:59:34.013258 systemd[1]: Detected virtualization kvm.
Jan 13 21:59:34.013271 systemd[1]: Detected architecture x86-64.
Jan 13 21:59:34.013284 systemd[1]: Detected first boot.
Jan 13 21:59:34.013297 systemd[1]: Hostname set to .
Jan 13 21:59:34.013310 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:59:34.013322 zram_generator::config[995]: No configuration found.
Jan 13 21:59:34.013338 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:59:34.013350 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:59:34.013363 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:59:34.013375 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:59:34.013388 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:59:34.013401 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:59:34.013414 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:59:34.013427 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:59:34.013441 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:59:34.013454 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:59:34.013467 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:59:34.013480 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:59:34.013492 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:59:34.013505 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:59:34.013518 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:59:34.013531 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:59:34.013546 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:59:34.013561 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:59:34.013573 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:59:34.013586 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:59:34.013598 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:59:34.013611 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:59:34.013626 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:59:34.013638 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:59:34.013651 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:59:34.013663 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:59:34.013691 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:59:34.013705 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:59:34.013718 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:59:34.013731 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:59:34.013748 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:59:34.013761 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:59:34.013776 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:59:34.013788 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:59:34.013804 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:59:34.013816 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:59:34.013829 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:59:34.013842 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:59:34.013856 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:59:34.013869 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:59:34.013882 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:59:34.013897 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:59:34.013910 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:59:34.013923 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:59:34.013935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:59:34.013948 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:59:34.013961 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:59:34.013973 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:59:34.013986 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:59:34.014002 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:59:34.014014 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:59:34.014027 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:59:34.014040 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:59:34.014052 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:59:34.014065 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:59:34.014077 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:59:34.014090 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:59:34.014104 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:59:34.014116 kernel: fuse: init (API version 7.39)
Jan 13 21:59:34.014133 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:59:34.014145 kernel: loop: module loaded
Jan 13 21:59:34.014157 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:59:34.014169 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:59:34.014182 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:59:34.014194 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:59:34.014207 systemd[1]: Stopped verity-setup.service.
Jan 13 21:59:34.014220 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:59:34.014257 systemd-journald[1084]: Collecting audit messages is disabled.
Jan 13 21:59:34.014285 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:59:34.014298 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:59:34.014311 systemd-journald[1084]: Journal started
Jan 13 21:59:34.014336 systemd-journald[1084]: Runtime Journal (/run/log/journal/bafbbb0b43b24983886138b061564a77) is 8.0M, max 78.3M, 70.3M free.
Jan 13 21:59:33.665615 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:59:33.684310 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 21:59:33.684724 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:59:34.019701 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:59:34.020757 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:59:34.021327 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:59:34.022871 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:59:34.023451 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:59:34.024734 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:59:34.025543 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:59:34.025752 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:59:34.027062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:59:34.027726 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:59:34.028441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:59:34.028560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:59:34.034410 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:59:34.034534 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:59:34.035273 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:59:34.035389 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:59:34.036132 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:59:34.036859 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:59:34.037589 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:59:34.052135 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:59:34.064994 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:59:34.066752 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:59:34.067294 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:59:34.067325 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:59:34.071055 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:59:34.096017 kernel: ACPI: bus type drm_connector registered
Jan 13 21:59:34.093966 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:59:34.096857 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:59:34.098464 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:59:34.102103 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:59:34.103613 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:59:34.104268 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:59:34.107771 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:59:34.112145 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:59:34.127391 systemd-journald[1084]: Time spent on flushing to /var/log/journal/bafbbb0b43b24983886138b061564a77 is 64.224ms for 936 entries.
Jan 13 21:59:34.127391 systemd-journald[1084]: System Journal (/var/log/journal/bafbbb0b43b24983886138b061564a77) is 8.0M, max 584.8M, 576.8M free.
Jan 13 21:59:34.204186 systemd-journald[1084]: Received client request to flush runtime journal.
Jan 13 21:59:34.204232 kernel: loop0: detected capacity change from 0 to 140768
Jan 13 21:59:34.117901 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:59:34.119859 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:59:34.122807 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:59:34.125368 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:59:34.126388 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:59:34.126509 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:59:34.127373 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:59:34.132925 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:59:34.133780 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:59:34.134879 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:59:34.146593 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:59:34.157792 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:59:34.158461 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:59:34.171288 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:59:34.185667 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:59:34.196310 udevadm[1135]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 21:59:34.210467 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:59:34.296836 systemd-tmpfiles[1127]: ACLs are not supported, ignoring.
Jan 13 21:59:34.296879 systemd-tmpfiles[1127]: ACLs are not supported, ignoring.
Jan 13 21:59:34.307645 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:59:34.316963 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:59:34.325657 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:59:34.329604 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:59:34.362296 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:59:34.393709 kernel: loop1: detected capacity change from 0 to 211296
Jan 13 21:59:34.651855 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:59:34.661053 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:59:34.708879 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Jan 13 21:59:34.708923 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Jan 13 21:59:34.717846 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:59:34.820572 kernel: loop2: detected capacity change from 0 to 142488
Jan 13 21:59:34.914174 kernel: loop3: detected capacity change from 0 to 8
Jan 13 21:59:34.944747 kernel: loop4: detected capacity change from 0 to 140768
Jan 13 21:59:35.011731 kernel: loop5: detected capacity change from 0 to 211296
Jan 13 21:59:35.059697 kernel: loop6: detected capacity change from 0 to 142488
Jan 13 21:59:35.126620 kernel: loop7: detected capacity change from 0 to 8
Jan 13 21:59:35.122321 (sd-merge)[1156]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 13 21:59:35.122744 (sd-merge)[1156]: Merged extensions into '/usr'.
Jan 13 21:59:35.133489 systemd[1]: Reloading requested from client PID 1126 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:59:35.133507 systemd[1]: Reloading...
Jan 13 21:59:35.235700 zram_generator::config[1178]: No configuration found.
Jan 13 21:59:35.402617 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:59:35.458942 systemd[1]: Reloading finished in 324 ms.
Jan 13 21:59:35.494448 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:59:35.496279 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:59:35.509969 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:59:35.512009 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:59:35.515448 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:59:35.539760 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:59:35.539780 systemd[1]: Reloading...
Jan 13 21:59:35.544482 ldconfig[1121]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:59:35.573471 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:59:35.574919 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:59:35.575242 systemd-udevd[1240]: Using default interface naming scheme 'v255'.
Jan 13 21:59:35.576010 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:59:35.576323 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jan 13 21:59:35.576384 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jan 13 21:59:35.585279 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:59:35.585291 systemd-tmpfiles[1239]: Skipping /boot
Jan 13 21:59:35.605665 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:59:35.605702 systemd-tmpfiles[1239]: Skipping /boot
Jan 13 21:59:35.616133 zram_generator::config[1264]: No configuration found.
Jan 13 21:59:35.754713 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1284)
Jan 13 21:59:35.769698 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 13 21:59:35.812785 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 21:59:35.812813 kernel: ACPI: button: Power Button [PWRF]
Jan 13 21:59:35.835096 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:59:35.873725 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 21:59:35.889702 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 21:59:35.907461 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 13 21:59:35.907520 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 13 21:59:35.913312 kernel: Console: switching to colour dummy device 80x25
Jan 13 21:59:35.913387 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 13 21:59:35.913406 kernel: [drm] features: -context_init
Jan 13 21:59:35.915201 kernel: [drm] number of scanouts: 1
Jan 13 21:59:35.915236 kernel: [drm] number of cap sets: 0
Jan 13 21:59:35.918694 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 13 21:59:35.920480 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:59:35.920761 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 21:59:35.920977 systemd[1]: Reloading finished in 380 ms.
Jan 13 21:59:35.922728 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 13 21:59:35.930214 kernel: Console: switching to colour frame buffer device 160x50
Jan 13 21:59:35.937578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:59:35.938705 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 13 21:59:35.941964 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:59:35.948130 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:59:35.974006 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:59:35.981109 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:59:35.993740 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:59:36.000803 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:59:36.005802 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:59:36.007857 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:59:36.009870 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:59:36.019851 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:59:36.022373 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:59:36.025812 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:59:36.029973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:59:36.030173 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:59:36.032140 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:59:36.046900 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:59:36.049643 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:59:36.057901 lvm[1362]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:59:36.057861 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:59:36.062849 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:59:36.066873 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:59:36.069837 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:59:36.069918 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:59:36.070584 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:59:36.071773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:59:36.072078 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:59:36.072194 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:59:36.072455 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:59:36.072569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:59:36.073045 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:59:36.073191 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:59:36.084286 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:59:36.089156 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:59:36.089230 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:59:36.103872 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:59:36.119989 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:59:36.126318 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:59:36.127447 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 13 21:59:36.136836 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:59:36.139906 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:59:36.146824 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:59:36.153927 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:59:36.164470 augenrules[1403]: No rules Jan 13 21:59:36.165328 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:59:36.168758 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:59:36.171622 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:59:36.177821 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:59:36.181198 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:59:36.194378 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:59:36.259171 systemd-networkd[1375]: lo: Link UP Jan 13 21:59:36.259179 systemd-networkd[1375]: lo: Gained carrier Jan 13 21:59:36.261408 systemd-networkd[1375]: Enumeration completed Jan 13 21:59:36.261813 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:59:36.261821 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:59:36.262112 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 13 21:59:36.262476 systemd-networkd[1375]: eth0: Link UP Jan 13 21:59:36.262480 systemd-networkd[1375]: eth0: Gained carrier Jan 13 21:59:36.262493 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:59:36.263778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:59:36.268819 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:59:36.280723 systemd-networkd[1375]: eth0: DHCPv4 address 172.24.4.53/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 13 21:59:36.295823 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:59:36.296643 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:59:36.307482 systemd-resolved[1376]: Positive Trust Anchors: Jan 13 21:59:36.307502 systemd-resolved[1376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:59:36.307544 systemd-resolved[1376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:59:36.313151 systemd-resolved[1376]: Using system hostname 'ci-4081-3-0-2-f00902ecfa.novalocal'. Jan 13 21:59:36.314572 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:59:36.315540 systemd[1]: Reached target network.target - Network. Jan 13 21:59:36.316068 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 13 21:59:36.316540 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:59:36.318829 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:59:36.320253 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:59:36.321793 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:59:36.323376 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:59:36.324610 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:59:36.325509 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:59:36.325614 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:59:36.327084 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:59:36.327200 systemd-timesyncd[1378]: Contacted time server 82.66.40.79:123 (0.flatcar.pool.ntp.org). Jan 13 21:59:36.327256 systemd-timesyncd[1378]: Initial clock synchronization to Mon 2025-01-13 21:59:36.647611 UTC. Jan 13 21:59:36.328485 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:59:36.331275 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:59:36.336799 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:59:36.338622 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:59:36.341461 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:59:36.341974 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:59:36.342482 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jan 13 21:59:36.342513 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:59:36.348759 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:59:36.352695 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:59:36.358943 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:59:36.368776 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:59:36.372948 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:59:36.373476 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:59:36.377953 jq[1432]: false Jan 13 21:59:36.378891 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:59:36.384833 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:59:36.394985 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:59:36.398903 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 13 21:59:36.402759 extend-filesystems[1433]: Found loop4 Jan 13 21:59:36.412493 extend-filesystems[1433]: Found loop5 Jan 13 21:59:36.412493 extend-filesystems[1433]: Found loop6 Jan 13 21:59:36.412493 extend-filesystems[1433]: Found loop7 Jan 13 21:59:36.412493 extend-filesystems[1433]: Found vda Jan 13 21:59:36.412493 extend-filesystems[1433]: Found vda1 Jan 13 21:59:36.412493 extend-filesystems[1433]: Found vda2 Jan 13 21:59:36.412493 extend-filesystems[1433]: Found vda3 Jan 13 21:59:36.412493 extend-filesystems[1433]: Found usr Jan 13 21:59:36.412493 extend-filesystems[1433]: Found vda4 Jan 13 21:59:36.412493 extend-filesystems[1433]: Found vda6 Jan 13 21:59:36.412493 extend-filesystems[1433]: Found vda7 Jan 13 21:59:36.412493 extend-filesystems[1433]: Found vda9 Jan 13 21:59:36.412493 extend-filesystems[1433]: Checking size of /dev/vda9 Jan 13 21:59:36.405812 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:59:36.441609 dbus-daemon[1431]: [system] SELinux support is enabled Jan 13 21:59:36.415190 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:59:36.416850 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:59:36.421878 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:59:36.439389 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:59:36.483398 update_engine[1441]: I20250113 21:59:36.478408 1441 main.cc:92] Flatcar Update Engine starting Jan 13 21:59:36.444507 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:59:36.483651 jq[1443]: true Jan 13 21:59:36.464178 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 13 21:59:36.464334 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:59:36.469040 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:59:36.469187 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:59:36.485769 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:59:36.485799 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:59:36.488167 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:59:36.488185 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:59:36.493727 extend-filesystems[1433]: Resized partition /dev/vda9 Jan 13 21:59:36.501385 update_engine[1441]: I20250113 21:59:36.496068 1441 update_check_scheduler.cc:74] Next update check in 7m59s Jan 13 21:59:36.503449 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:59:36.508887 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:59:36.528169 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1274) Jan 13 21:59:36.528352 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 13 21:59:36.520834 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:59:36.527965 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:59:36.528747 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 13 21:59:36.543608 jq[1454]: true Jan 13 21:59:36.549749 tar[1452]: linux-amd64/helm Jan 13 21:59:36.562003 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:59:36.563728 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 13 21:59:36.592368 systemd-logind[1439]: New seat seat0. Jan 13 21:59:36.637168 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:59:36.637187 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:59:36.639741 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:59:36.659128 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:59:36.659128 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:59:36.659128 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 13 21:59:36.677521 extend-filesystems[1433]: Resized filesystem in /dev/vda9 Jan 13 21:59:36.659519 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:59:36.659769 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:59:36.696043 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:59:36.697982 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:59:36.713158 systemd[1]: Starting sshkeys.service... Jan 13 21:59:36.731232 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:59:36.747062 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 13 21:59:36.763930 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:59:37.003263 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:59:37.033374 containerd[1461]: time="2025-01-13T21:59:37.033299784Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:59:37.058889 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:59:37.070847 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:59:37.071157 containerd[1461]: time="2025-01-13T21:59:37.071121335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:59:37.072589 containerd[1461]: time="2025-01-13T21:59:37.072559479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:59:37.072658 containerd[1461]: time="2025-01-13T21:59:37.072643621Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:59:37.072784 containerd[1461]: time="2025-01-13T21:59:37.072766684Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:59:37.073000 containerd[1461]: time="2025-01-13T21:59:37.072980893Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:59:37.073214 containerd[1461]: time="2025-01-13T21:59:37.073053333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:59:37.073214 containerd[1461]: time="2025-01-13T21:59:37.073128661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:59:37.073214 containerd[1461]: time="2025-01-13T21:59:37.073145416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:59:37.073428 containerd[1461]: time="2025-01-13T21:59:37.073406183Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:59:37.073487 containerd[1461]: time="2025-01-13T21:59:37.073473757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:59:37.073553 containerd[1461]: time="2025-01-13T21:59:37.073536810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:59:37.074282 containerd[1461]: time="2025-01-13T21:59:37.073596830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:59:37.074282 containerd[1461]: time="2025-01-13T21:59:37.073682339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:59:37.074282 containerd[1461]: time="2025-01-13T21:59:37.073911948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:59:37.074282 containerd[1461]: time="2025-01-13T21:59:37.074009116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:59:37.074282 containerd[1461]: time="2025-01-13T21:59:37.074025632Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:59:37.074282 containerd[1461]: time="2025-01-13T21:59:37.074105356Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:59:37.074282 containerd[1461]: time="2025-01-13T21:59:37.074156384Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:59:37.085272 containerd[1461]: time="2025-01-13T21:59:37.085247611Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:59:37.086733 containerd[1461]: time="2025-01-13T21:59:37.085369152Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:59:37.086733 containerd[1461]: time="2025-01-13T21:59:37.085392701Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:59:37.086876 containerd[1461]: time="2025-01-13T21:59:37.086857541Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:59:37.086966 containerd[1461]: time="2025-01-13T21:59:37.086951083Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:59:37.087183 containerd[1461]: time="2025-01-13T21:59:37.087161947Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:59:37.087601 containerd[1461]: time="2025-01-13T21:59:37.087566730Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 13 21:59:37.087770 containerd[1461]: time="2025-01-13T21:59:37.087747967Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:59:37.087810 containerd[1461]: time="2025-01-13T21:59:37.087774144Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:59:37.087810 containerd[1461]: time="2025-01-13T21:59:37.087790524Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:59:37.087810 containerd[1461]: time="2025-01-13T21:59:37.087806050Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:59:37.087899 containerd[1461]: time="2025-01-13T21:59:37.087820596Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:59:37.087899 containerd[1461]: time="2025-01-13T21:59:37.087834413Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:59:37.088741 containerd[1461]: time="2025-01-13T21:59:37.088035305Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:59:37.088741 containerd[1461]: time="2025-01-13T21:59:37.088072931Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:59:37.088741 containerd[1461]: time="2025-01-13T21:59:37.088092699Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:59:37.088741 containerd[1461]: time="2025-01-13T21:59:37.088139319Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 13 21:59:37.088741 containerd[1461]: time="2025-01-13T21:59:37.088174841Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:59:37.088741 containerd[1461]: time="2025-01-13T21:59:37.088236279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.088741 containerd[1461]: time="2025-01-13T21:59:37.088256713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.088741 containerd[1461]: time="2025-01-13T21:59:37.088308929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.088741 containerd[1461]: time="2025-01-13T21:59:37.088453321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.088741 containerd[1461]: time="2025-01-13T21:59:37.088488687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.088741 containerd[1461]: time="2025-01-13T21:59:37.088513758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.088741 containerd[1461]: time="2025-01-13T21:59:37.088537183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.088741 containerd[1461]: time="2025-01-13T21:59:37.088554001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.088741 containerd[1461]: time="2025-01-13T21:59:37.088578395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.089071 containerd[1461]: time="2025-01-13T21:59:37.088603559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jan 13 21:59:37.089071 containerd[1461]: time="2025-01-13T21:59:37.088624442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.089071 containerd[1461]: time="2025-01-13T21:59:37.088644511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.089071 containerd[1461]: time="2025-01-13T21:59:37.088665487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.089071 containerd[1461]: time="2025-01-13T21:59:37.088692256Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:59:37.090944 containerd[1461]: time="2025-01-13T21:59:37.090746108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.090944 containerd[1461]: time="2025-01-13T21:59:37.090796219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.090944 containerd[1461]: time="2025-01-13T21:59:37.090819101Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:59:37.090944 containerd[1461]: time="2025-01-13T21:59:37.090898994Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:59:37.091197 containerd[1461]: time="2025-01-13T21:59:37.090928649Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:59:37.092015 containerd[1461]: time="2025-01-13T21:59:37.091760348Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jan 13 21:59:37.092015 containerd[1461]: time="2025-01-13T21:59:37.091793192Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:59:37.092015 containerd[1461]: time="2025-01-13T21:59:37.091811187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.092015 containerd[1461]: time="2025-01-13T21:59:37.091848252Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:59:37.092015 containerd[1461]: time="2025-01-13T21:59:37.091866654Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:59:37.092015 containerd[1461]: time="2025-01-13T21:59:37.091884254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 21:59:37.094419 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 13 21:59:37.095385 containerd[1461]: time="2025-01-13T21:59:37.093069270Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:59:37.095385 containerd[1461]: time="2025-01-13T21:59:37.093150590Z" level=info msg="Connect containerd service" Jan 13 21:59:37.095385 containerd[1461]: time="2025-01-13T21:59:37.093193573Z" level=info msg="using legacy CRI server" Jan 13 21:59:37.095385 containerd[1461]: time="2025-01-13T21:59:37.093202691Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:59:37.095385 containerd[1461]: time="2025-01-13T21:59:37.093296149Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:59:37.095385 containerd[1461]: time="2025-01-13T21:59:37.093857090Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:59:37.095385 containerd[1461]: time="2025-01-13T21:59:37.094095149Z" level=info msg="Start subscribing containerd event" Jan 13 21:59:37.095385 containerd[1461]: time="2025-01-13T21:59:37.094141915Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 13 21:59:37.095385 containerd[1461]: time="2025-01-13T21:59:37.094146177Z" level=info msg="Start recovering state" Jan 13 21:59:37.095385 containerd[1461]: time="2025-01-13T21:59:37.094198486Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:59:37.095385 containerd[1461]: time="2025-01-13T21:59:37.094244595Z" level=info msg="Start event monitor" Jan 13 21:59:37.095385 containerd[1461]: time="2025-01-13T21:59:37.094266801Z" level=info msg="Start snapshots syncer" Jan 13 21:59:37.095385 containerd[1461]: time="2025-01-13T21:59:37.094291132Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:59:37.095385 containerd[1461]: time="2025-01-13T21:59:37.094301562Z" level=info msg="Start streaming server" Jan 13 21:59:37.098358 containerd[1461]: time="2025-01-13T21:59:37.098313641Z" level=info msg="containerd successfully booted in 0.066335s" Jan 13 21:59:37.099790 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:59:37.099965 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:59:37.112603 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:59:37.129405 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:59:37.141191 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:59:37.155119 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:59:37.158248 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:59:37.170613 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:59:37.180088 systemd[1]: Started sshd@0-172.24.4.53:22-172.24.4.1:33924.service - OpenSSH per-connection server daemon (172.24.4.1:33924). Jan 13 21:59:37.272222 tar[1452]: linux-amd64/LICENSE Jan 13 21:59:37.272410 tar[1452]: linux-amd64/README.md Jan 13 21:59:37.284231 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 13 21:59:37.754339 systemd-networkd[1375]: eth0: Gained IPv6LL
Jan 13 21:59:37.762930 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 21:59:37.771308 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 21:59:37.786392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:59:37.794955 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 21:59:37.856998 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 21:59:38.358588 sshd[1523]: Accepted publickey for core from 172.24.4.1 port 33924 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 21:59:38.363563 sshd[1523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:59:38.383798 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 21:59:38.394043 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 21:59:38.401388 systemd-logind[1439]: New session 1 of user core.
Jan 13 21:59:38.416502 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 21:59:38.426503 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 21:59:38.444502 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 21:59:38.561470 systemd[1543]: Queued start job for default target default.target.
Jan 13 21:59:38.575563 systemd[1543]: Created slice app.slice - User Application Slice.
Jan 13 21:59:38.575675 systemd[1543]: Reached target paths.target - Paths.
Jan 13 21:59:38.575694 systemd[1543]: Reached target timers.target - Timers.
Jan 13 21:59:38.578814 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 21:59:38.589209 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 21:59:38.589266 systemd[1543]: Reached target sockets.target - Sockets.
Jan 13 21:59:38.589281 systemd[1543]: Reached target basic.target - Basic System.
Jan 13 21:59:38.589316 systemd[1543]: Reached target default.target - Main User Target.
Jan 13 21:59:38.589342 systemd[1543]: Startup finished in 138ms.
Jan 13 21:59:38.590212 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 21:59:38.601463 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 21:59:39.004121 systemd[1]: Started sshd@1-172.24.4.53:22-172.24.4.1:59022.service - OpenSSH per-connection server daemon (172.24.4.1:59022).
Jan 13 21:59:39.602022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:59:39.605155 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:59:40.995199 sshd[1554]: Accepted publickey for core from 172.24.4.1 port 59022 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 21:59:40.998979 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:59:41.012851 systemd-logind[1439]: New session 2 of user core.
Jan 13 21:59:41.022252 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 21:59:41.645082 sshd[1554]: pam_unix(sshd:session): session closed for user core
Jan 13 21:59:41.656969 systemd[1]: sshd@1-172.24.4.53:22-172.24.4.1:59022.service: Deactivated successfully.
Jan 13 21:59:41.661275 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 21:59:41.664895 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit.
Jan 13 21:59:41.675585 systemd[1]: Started sshd@2-172.24.4.53:22-172.24.4.1:59030.service - OpenSSH per-connection server daemon (172.24.4.1:59030).
Jan 13 21:59:41.689800 systemd-logind[1439]: Removed session 2.
Jan 13 21:59:42.156132 kubelet[1561]: E0113 21:59:42.155956 1561 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:59:42.163887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:59:42.164198 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:59:42.164762 systemd[1]: kubelet.service: Consumed 2.297s CPU time.
Jan 13 21:59:42.208567 login[1521]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 21:59:42.218955 login[1520]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 21:59:42.220558 systemd-logind[1439]: New session 3 of user core.
Jan 13 21:59:42.226890 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 21:59:42.231256 systemd-logind[1439]: New session 4 of user core.
Jan 13 21:59:42.236896 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 21:59:43.039157 sshd[1575]: Accepted publickey for core from 172.24.4.1 port 59030 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 21:59:43.041870 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:59:43.051317 systemd-logind[1439]: New session 5 of user core.
Jan 13 21:59:43.063103 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 21:59:43.422745 coreos-metadata[1428]: Jan 13 21:59:43.422 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 21:59:43.484587 coreos-metadata[1428]: Jan 13 21:59:43.484 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 13 21:59:43.677428 coreos-metadata[1428]: Jan 13 21:59:43.677 INFO Fetch successful
Jan 13 21:59:43.677428 coreos-metadata[1428]: Jan 13 21:59:43.677 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 21:59:43.690448 coreos-metadata[1428]: Jan 13 21:59:43.690 INFO Fetch successful
Jan 13 21:59:43.690448 coreos-metadata[1428]: Jan 13 21:59:43.690 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 13 21:59:43.703643 coreos-metadata[1428]: Jan 13 21:59:43.703 INFO Fetch successful
Jan 13 21:59:43.703643 coreos-metadata[1428]: Jan 13 21:59:43.703 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 13 21:59:43.715262 coreos-metadata[1428]: Jan 13 21:59:43.715 INFO Fetch successful
Jan 13 21:59:43.715262 coreos-metadata[1428]: Jan 13 21:59:43.715 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 13 21:59:43.725918 coreos-metadata[1428]: Jan 13 21:59:43.725 INFO Fetch successful
Jan 13 21:59:43.725918 coreos-metadata[1428]: Jan 13 21:59:43.725 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 13 21:59:43.736652 coreos-metadata[1428]: Jan 13 21:59:43.736 INFO Fetch successful
Jan 13 21:59:43.756764 sshd[1575]: pam_unix(sshd:session): session closed for user core
Jan 13 21:59:43.765515 systemd[1]: sshd@2-172.24.4.53:22-172.24.4.1:59030.service: Deactivated successfully.
Jan 13 21:59:43.778168 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 21:59:43.785230 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit.
Jan 13 21:59:43.790249 systemd-logind[1439]: Removed session 5.
Jan 13 21:59:43.802063 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 21:59:43.804327 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 21:59:43.843319 coreos-metadata[1494]: Jan 13 21:59:43.842 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 21:59:43.886537 coreos-metadata[1494]: Jan 13 21:59:43.886 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 13 21:59:43.898588 coreos-metadata[1494]: Jan 13 21:59:43.898 INFO Fetch successful
Jan 13 21:59:43.898588 coreos-metadata[1494]: Jan 13 21:59:43.898 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 13 21:59:43.908563 coreos-metadata[1494]: Jan 13 21:59:43.908 INFO Fetch successful
Jan 13 21:59:43.916305 unknown[1494]: wrote ssh authorized keys file for user: core
Jan 13 21:59:43.971987 update-ssh-keys[1615]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:59:43.974183 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 21:59:43.978654 systemd[1]: Finished sshkeys.service.
Jan 13 21:59:43.985012 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 21:59:43.985497 systemd[1]: Startup finished in 1.220s (kernel) + 14.873s (initrd) + 11.359s (userspace) = 27.453s.
Jan 13 21:59:52.411574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:59:52.421270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:59:52.771379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:59:52.784118 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:59:52.982753 kubelet[1627]: E0113 21:59:52.982612 1627 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:59:52.993142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:59:52.993673 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:59:53.878476 systemd[1]: Started sshd@3-172.24.4.53:22-172.24.4.1:44238.service - OpenSSH per-connection server daemon (172.24.4.1:44238).
Jan 13 21:59:55.089913 sshd[1636]: Accepted publickey for core from 172.24.4.1 port 44238 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 21:59:55.093120 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:59:55.105825 systemd-logind[1439]: New session 6 of user core.
Jan 13 21:59:55.117048 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 21:59:55.734295 sshd[1636]: pam_unix(sshd:session): session closed for user core
Jan 13 21:59:55.749560 systemd[1]: sshd@3-172.24.4.53:22-172.24.4.1:44238.service: Deactivated successfully.
Jan 13 21:59:55.753850 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 21:59:55.757953 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit.
Jan 13 21:59:55.768417 systemd[1]: Started sshd@4-172.24.4.53:22-172.24.4.1:44254.service - OpenSSH per-connection server daemon (172.24.4.1:44254).
Jan 13 21:59:55.770566 systemd-logind[1439]: Removed session 6.
Jan 13 21:59:57.009426 sshd[1643]: Accepted publickey for core from 172.24.4.1 port 44254 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 21:59:57.012045 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:59:57.024067 systemd-logind[1439]: New session 7 of user core.
Jan 13 21:59:57.033956 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 21:59:57.654591 sshd[1643]: pam_unix(sshd:session): session closed for user core
Jan 13 21:59:57.666970 systemd[1]: sshd@4-172.24.4.53:22-172.24.4.1:44254.service: Deactivated successfully.
Jan 13 21:59:57.670052 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 21:59:57.674144 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit.
Jan 13 21:59:57.679354 systemd[1]: Started sshd@5-172.24.4.53:22-172.24.4.1:44270.service - OpenSSH per-connection server daemon (172.24.4.1:44270).
Jan 13 21:59:57.682635 systemd-logind[1439]: Removed session 7.
Jan 13 21:59:59.024759 sshd[1650]: Accepted publickey for core from 172.24.4.1 port 44270 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 21:59:59.027811 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:59:59.038982 systemd-logind[1439]: New session 8 of user core.
Jan 13 21:59:59.050808 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 21:59:59.625381 sshd[1650]: pam_unix(sshd:session): session closed for user core
Jan 13 21:59:59.635428 systemd[1]: sshd@5-172.24.4.53:22-172.24.4.1:44270.service: Deactivated successfully.
Jan 13 21:59:59.638650 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 21:59:59.641440 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit.
Jan 13 21:59:59.650256 systemd[1]: Started sshd@6-172.24.4.53:22-172.24.4.1:44274.service - OpenSSH per-connection server daemon (172.24.4.1:44274).
Jan 13 21:59:59.653376 systemd-logind[1439]: Removed session 8.
Jan 13 22:00:01.128953 sshd[1657]: Accepted publickey for core from 172.24.4.1 port 44274 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:00:01.131631 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:00:01.140536 systemd-logind[1439]: New session 9 of user core.
Jan 13 22:00:01.153120 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 22:00:01.660869 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 22:00:01.661531 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 22:00:02.386000 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 22:00:02.391468 (dockerd)[1677]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 22:00:03.100715 dockerd[1677]: time="2025-01-13T22:00:03.100588067Z" level=info msg="Starting up"
Jan 13 22:00:03.106181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 22:00:03.116956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:00:03.476925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:00:03.487125 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 22:00:03.523119 systemd[1]: var-lib-docker-metacopy\x2dcheck964334081-merged.mount: Deactivated successfully.
Jan 13 22:00:03.554202 dockerd[1677]: time="2025-01-13T22:00:03.554003051Z" level=info msg="Loading containers: start."
Jan 13 22:00:03.586456 kubelet[1704]: E0113 22:00:03.586409 1704 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 22:00:03.588770 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 22:00:03.588897 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 22:00:03.673718 kernel: Initializing XFRM netlink socket
Jan 13 22:00:03.764025 systemd-networkd[1375]: docker0: Link UP
Jan 13 22:00:03.793467 dockerd[1677]: time="2025-01-13T22:00:03.793280327Z" level=info msg="Loading containers: done."
Jan 13 22:00:03.824892 dockerd[1677]: time="2025-01-13T22:00:03.824795742Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 22:00:03.825145 dockerd[1677]: time="2025-01-13T22:00:03.825013520Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 13 22:00:03.825282 dockerd[1677]: time="2025-01-13T22:00:03.825220806Z" level=info msg="Daemon has completed initialization"
Jan 13 22:00:03.888267 dockerd[1677]: time="2025-01-13T22:00:03.888148082Z" level=info msg="API listen on /run/docker.sock"
Jan 13 22:00:03.888667 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 22:00:05.586433 containerd[1461]: time="2025-01-13T22:00:05.586380320Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Jan 13 22:00:06.350154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1265088272.mount: Deactivated successfully.
Jan 13 22:00:08.425251 containerd[1461]: time="2025-01-13T22:00:08.425183359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:08.426640 containerd[1461]: time="2025-01-13T22:00:08.426423495Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262"
Jan 13 22:00:08.427972 containerd[1461]: time="2025-01-13T22:00:08.427916296Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:08.431139 containerd[1461]: time="2025-01-13T22:00:08.431095663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:08.432621 containerd[1461]: time="2025-01-13T22:00:08.432263075Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.845842605s"
Jan 13 22:00:08.432621 containerd[1461]: time="2025-01-13T22:00:08.432309031Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Jan 13 22:00:08.454782 containerd[1461]: time="2025-01-13T22:00:08.454752753Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Jan 13 22:00:10.748372 containerd[1461]: time="2025-01-13T22:00:10.748107158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:10.749769 containerd[1461]: time="2025-01-13T22:00:10.749716783Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740"
Jan 13 22:00:10.751253 containerd[1461]: time="2025-01-13T22:00:10.751208386Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:10.754771 containerd[1461]: time="2025-01-13T22:00:10.754729715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:10.755981 containerd[1461]: time="2025-01-13T22:00:10.755867633Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.300972731s"
Jan 13 22:00:10.755981 containerd[1461]: time="2025-01-13T22:00:10.755899247Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Jan 13 22:00:10.780532 containerd[1461]: time="2025-01-13T22:00:10.780327804Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Jan 13 22:00:12.323576 containerd[1461]: time="2025-01-13T22:00:12.323342578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:12.324917 containerd[1461]: time="2025-01-13T22:00:12.324859393Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830"
Jan 13 22:00:12.326325 containerd[1461]: time="2025-01-13T22:00:12.326265024Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:12.329834 containerd[1461]: time="2025-01-13T22:00:12.329765932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:12.331623 containerd[1461]: time="2025-01-13T22:00:12.330998940Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.550634811s"
Jan 13 22:00:12.331623 containerd[1461]: time="2025-01-13T22:00:12.331043124Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Jan 13 22:00:12.355542 containerd[1461]: time="2025-01-13T22:00:12.355486436Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Jan 13 22:00:13.689999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2524440394.mount: Deactivated successfully.
Jan 13 22:00:13.691900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 13 22:00:13.699230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:00:13.802880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:00:13.806888 (kubelet)[1924]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 22:00:14.057624 kubelet[1924]: E0113 22:00:14.057545 1924 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 22:00:14.063073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 22:00:14.063403 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 22:00:14.735772 containerd[1461]: time="2025-01-13T22:00:14.735625752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:15.166454 containerd[1461]: time="2025-01-13T22:00:15.166282688Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966"
Jan 13 22:00:15.169011 containerd[1461]: time="2025-01-13T22:00:15.168858607Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:15.174918 containerd[1461]: time="2025-01-13T22:00:15.174802499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:15.176775 containerd[1461]: time="2025-01-13T22:00:15.176460502Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.82092384s"
Jan 13 22:00:15.176775 containerd[1461]: time="2025-01-13T22:00:15.176545601Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Jan 13 22:00:15.229429 containerd[1461]: time="2025-01-13T22:00:15.229265770Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 22:00:15.883729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1709345042.mount: Deactivated successfully.
Jan 13 22:00:17.013945 containerd[1461]: time="2025-01-13T22:00:17.013906033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:17.015666 containerd[1461]: time="2025-01-13T22:00:17.015630406Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jan 13 22:00:17.017251 containerd[1461]: time="2025-01-13T22:00:17.017208259Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:17.020929 containerd[1461]: time="2025-01-13T22:00:17.020881916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:17.022167 containerd[1461]: time="2025-01-13T22:00:17.022124996Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.79279589s"
Jan 13 22:00:17.022215 containerd[1461]: time="2025-01-13T22:00:17.022167488Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 13 22:00:17.046195 containerd[1461]: time="2025-01-13T22:00:17.046148688Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 13 22:00:17.622252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount240293215.mount: Deactivated successfully.
Jan 13 22:00:17.636662 containerd[1461]: time="2025-01-13T22:00:17.636357399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:17.638915 containerd[1461]: time="2025-01-13T22:00:17.638835389Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jan 13 22:00:17.640855 containerd[1461]: time="2025-01-13T22:00:17.640655773Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:17.647984 containerd[1461]: time="2025-01-13T22:00:17.647850612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:17.650319 containerd[1461]: time="2025-01-13T22:00:17.650041504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 603.846585ms"
Jan 13 22:00:17.650319 containerd[1461]: time="2025-01-13T22:00:17.650114899Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 13 22:00:17.705027 containerd[1461]: time="2025-01-13T22:00:17.704519565Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 13 22:00:18.411149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3039679270.mount: Deactivated successfully.
Jan 13 22:00:21.053818 containerd[1461]: time="2025-01-13T22:00:21.053734199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:21.055528 containerd[1461]: time="2025-01-13T22:00:21.055447559Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Jan 13 22:00:21.057631 containerd[1461]: time="2025-01-13T22:00:21.057578939Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:21.067488 containerd[1461]: time="2025-01-13T22:00:21.067426410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:00:21.069492 containerd[1461]: time="2025-01-13T22:00:21.069425477Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.364832007s"
Jan 13 22:00:21.069727 containerd[1461]: time="2025-01-13T22:00:21.069628533Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jan 13 22:00:21.550764 update_engine[1441]: I20250113 22:00:21.550196 1441 update_attempter.cc:509] Updating boot flags...
Jan 13 22:00:21.600904 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2059)
Jan 13 22:00:21.671732 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2059)
Jan 13 22:00:21.725699 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2059)
Jan 13 22:00:24.243421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 13 22:00:24.253126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:00:24.543851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:00:24.548489 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 22:00:24.600694 kubelet[2123]: E0113 22:00:24.598111 2123 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 22:00:24.600517 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 22:00:24.600647 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 22:00:25.343232 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:00:25.348942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:00:25.377996 systemd[1]: Reloading requested from client PID 2137 ('systemctl') (unit session-9.scope)...
Jan 13 22:00:25.378129 systemd[1]: Reloading...
Jan 13 22:00:25.488765 zram_generator::config[2176]: No configuration found. Jan 13 22:00:25.770814 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 22:00:25.857457 systemd[1]: Reloading finished in 478 ms. Jan 13 22:00:26.329154 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 22:00:26.329336 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 22:00:26.329873 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:00:26.340318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:00:26.509003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:00:26.509012 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 22:00:26.706030 kubelet[2242]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 22:00:26.706030 kubelet[2242]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 22:00:26.706030 kubelet[2242]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 22:00:26.706030 kubelet[2242]: I0113 22:00:26.704552 2242 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 22:00:27.301936 kubelet[2242]: I0113 22:00:27.301866 2242 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 22:00:27.301936 kubelet[2242]: I0113 22:00:27.301896 2242 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 22:00:27.302198 kubelet[2242]: I0113 22:00:27.302098 2242 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 22:00:27.323192 kubelet[2242]: E0113 22:00:27.323139 2242 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:27.326528 kubelet[2242]: I0113 22:00:27.326407 2242 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 22:00:27.357442 kubelet[2242]: I0113 22:00:27.357376 2242 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 22:00:27.359397 kubelet[2242]: I0113 22:00:27.359352 2242 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 22:00:27.359893 kubelet[2242]: I0113 22:00:27.359845 2242 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 22:00:27.359992 kubelet[2242]: I0113 22:00:27.359909 2242 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 22:00:27.359992 kubelet[2242]: I0113 22:00:27.359937 2242 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 22:00:27.360179 kubelet[2242]: I0113 
22:00:27.360137 2242 state_mem.go:36] "Initialized new in-memory state store" Jan 13 22:00:27.360385 kubelet[2242]: I0113 22:00:27.360355 2242 kubelet.go:396] "Attempting to sync node with API server" Jan 13 22:00:27.360436 kubelet[2242]: I0113 22:00:27.360398 2242 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 22:00:27.362694 kubelet[2242]: I0113 22:00:27.360458 2242 kubelet.go:312] "Adding apiserver pod source" Jan 13 22:00:27.362694 kubelet[2242]: I0113 22:00:27.360496 2242 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 22:00:27.363578 kubelet[2242]: W0113 22:00:27.363496 2242 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-f00902ecfa.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:27.363628 kubelet[2242]: E0113 22:00:27.363612 2242 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-f00902ecfa.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:27.363877 kubelet[2242]: W0113 22:00:27.363811 2242 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:27.363942 kubelet[2242]: E0113 22:00:27.363896 2242 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:27.364060 kubelet[2242]: I0113 22:00:27.364032 2242 
kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 22:00:27.374206 kubelet[2242]: I0113 22:00:27.374159 2242 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 22:00:27.377116 kubelet[2242]: W0113 22:00:27.377073 2242 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 22:00:27.378356 kubelet[2242]: I0113 22:00:27.378239 2242 server.go:1256] "Started kubelet" Jan 13 22:00:27.381065 kubelet[2242]: I0113 22:00:27.380951 2242 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 22:00:27.388399 kubelet[2242]: E0113 22:00:27.388107 2242 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.53:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-2-f00902ecfa.novalocal.181a5f841fc6a278 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-2-f00902ecfa.novalocal,UID:ci-4081-3-0-2-f00902ecfa.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-f00902ecfa.novalocal,},FirstTimestamp:2025-01-13 22:00:27.378180728 +0000 UTC m=+0.865662164,LastTimestamp:2025-01-13 22:00:27.378180728 +0000 UTC m=+0.865662164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-f00902ecfa.novalocal,}" Jan 13 22:00:27.388399 kubelet[2242]: I0113 22:00:27.388233 2242 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 22:00:27.390731 kubelet[2242]: I0113 22:00:27.389515 2242 server.go:461] "Adding debug handlers to kubelet server" Jan 13 22:00:27.390731 kubelet[2242]: 
I0113 22:00:27.390470 2242 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 22:00:27.390731 kubelet[2242]: I0113 22:00:27.390621 2242 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 22:00:27.391487 kubelet[2242]: I0113 22:00:27.391429 2242 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 22:00:27.393552 kubelet[2242]: E0113 22:00:27.393438 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-f00902ecfa.novalocal?timeout=10s\": dial tcp 172.24.4.53:6443: connect: connection refused" interval="200ms" Jan 13 22:00:27.394503 kubelet[2242]: I0113 22:00:27.393830 2242 factory.go:221] Registration of the systemd container factory successfully Jan 13 22:00:27.394503 kubelet[2242]: I0113 22:00:27.393892 2242 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 22:00:27.394503 kubelet[2242]: I0113 22:00:27.394166 2242 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 22:00:27.394503 kubelet[2242]: W0113 22:00:27.394423 2242 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:27.394503 kubelet[2242]: E0113 22:00:27.394459 2242 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:27.394920 
kubelet[2242]: I0113 22:00:27.394790 2242 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 22:00:27.395495 kubelet[2242]: E0113 22:00:27.395461 2242 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 22:00:27.395612 kubelet[2242]: I0113 22:00:27.395562 2242 factory.go:221] Registration of the containerd container factory successfully Jan 13 22:00:27.410759 kubelet[2242]: I0113 22:00:27.410667 2242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 22:00:27.411540 kubelet[2242]: I0113 22:00:27.411493 2242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 22:00:27.411540 kubelet[2242]: I0113 22:00:27.411520 2242 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 22:00:27.411540 kubelet[2242]: I0113 22:00:27.411540 2242 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 22:00:27.411788 kubelet[2242]: E0113 22:00:27.411581 2242 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 22:00:27.418485 kubelet[2242]: W0113 22:00:27.418116 2242 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:27.418485 kubelet[2242]: E0113 22:00:27.418363 2242 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:27.438056 kubelet[2242]: I0113 22:00:27.437974 2242 cpu_manager.go:214] "Starting CPU manager" 
policy="none" Jan 13 22:00:27.438056 kubelet[2242]: I0113 22:00:27.437998 2242 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 22:00:27.438056 kubelet[2242]: I0113 22:00:27.438013 2242 state_mem.go:36] "Initialized new in-memory state store" Jan 13 22:00:27.443038 kubelet[2242]: I0113 22:00:27.442975 2242 policy_none.go:49] "None policy: Start" Jan 13 22:00:27.443754 kubelet[2242]: I0113 22:00:27.443721 2242 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 22:00:27.443754 kubelet[2242]: I0113 22:00:27.443755 2242 state_mem.go:35] "Initializing new in-memory state store" Jan 13 22:00:27.451630 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 22:00:27.460554 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 22:00:27.465064 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 22:00:27.474029 kubelet[2242]: I0113 22:00:27.473975 2242 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 22:00:27.474224 kubelet[2242]: I0113 22:00:27.474184 2242 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 22:00:27.477331 kubelet[2242]: E0113 22:00:27.477295 2242 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-2-f00902ecfa.novalocal\" not found" Jan 13 22:00:27.494702 kubelet[2242]: I0113 22:00:27.494647 2242 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.495313 kubelet[2242]: E0113 22:00:27.495287 2242 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.53:6443/api/v1/nodes\": dial tcp 172.24.4.53:6443: connect: connection refused" node="ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.512653 kubelet[2242]: I0113 
22:00:27.512456 2242 topology_manager.go:215] "Topology Admit Handler" podUID="0943fb0bac39f1dbc2c784259dd2055c" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.514394 kubelet[2242]: I0113 22:00:27.514382 2242 topology_manager.go:215] "Topology Admit Handler" podUID="7c6d4a0afeabc6766088ed4902ca4fec" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.515804 kubelet[2242]: I0113 22:00:27.515790 2242 topology_manager.go:215] "Topology Admit Handler" podUID="6510a4351eb65e275d7578985c807352" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.525904 systemd[1]: Created slice kubepods-burstable-pod0943fb0bac39f1dbc2c784259dd2055c.slice - libcontainer container kubepods-burstable-pod0943fb0bac39f1dbc2c784259dd2055c.slice. Jan 13 22:00:27.541393 systemd[1]: Created slice kubepods-burstable-pod7c6d4a0afeabc6766088ed4902ca4fec.slice - libcontainer container kubepods-burstable-pod7c6d4a0afeabc6766088ed4902ca4fec.slice. Jan 13 22:00:27.547024 systemd[1]: Created slice kubepods-burstable-pod6510a4351eb65e275d7578985c807352.slice - libcontainer container kubepods-burstable-pod6510a4351eb65e275d7578985c807352.slice. 
Jan 13 22:00:27.594776 kubelet[2242]: E0113 22:00:27.594607 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-f00902ecfa.novalocal?timeout=10s\": dial tcp 172.24.4.53:6443: connect: connection refused" interval="400ms" Jan 13 22:00:27.596784 kubelet[2242]: I0113 22:00:27.596480 2242 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0943fb0bac39f1dbc2c784259dd2055c-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"0943fb0bac39f1dbc2c784259dd2055c\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.597860 kubelet[2242]: I0113 22:00:27.597004 2242 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0943fb0bac39f1dbc2c784259dd2055c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"0943fb0bac39f1dbc2c784259dd2055c\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.597860 kubelet[2242]: I0113 22:00:27.597064 2242 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c6d4a0afeabc6766088ed4902ca4fec-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"7c6d4a0afeabc6766088ed4902ca4fec\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.597860 kubelet[2242]: I0113 22:00:27.597125 2242 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0943fb0bac39f1dbc2c784259dd2055c-ca-certs\") pod 
\"kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"0943fb0bac39f1dbc2c784259dd2055c\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.597860 kubelet[2242]: I0113 22:00:27.597171 2242 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c6d4a0afeabc6766088ed4902ca4fec-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"7c6d4a0afeabc6766088ed4902ca4fec\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.598312 kubelet[2242]: I0113 22:00:27.597214 2242 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c6d4a0afeabc6766088ed4902ca4fec-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"7c6d4a0afeabc6766088ed4902ca4fec\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.598312 kubelet[2242]: I0113 22:00:27.597256 2242 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c6d4a0afeabc6766088ed4902ca4fec-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"7c6d4a0afeabc6766088ed4902ca4fec\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.598312 kubelet[2242]: I0113 22:00:27.597304 2242 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c6d4a0afeabc6766088ed4902ca4fec-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"7c6d4a0afeabc6766088ed4902ca4fec\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.598312 kubelet[2242]: I0113 22:00:27.597347 2242 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6510a4351eb65e275d7578985c807352-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"6510a4351eb65e275d7578985c807352\") " pod="kube-system/kube-scheduler-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.699658 kubelet[2242]: I0113 22:00:27.698566 2242 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.699658 kubelet[2242]: E0113 22:00:27.699173 2242 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.53:6443/api/v1/nodes\": dial tcp 172.24.4.53:6443: connect: connection refused" node="ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:27.841287 containerd[1461]: time="2025-01-13T22:00:27.841178489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal,Uid:0943fb0bac39f1dbc2c784259dd2055c,Namespace:kube-system,Attempt:0,}" Jan 13 22:00:27.846549 containerd[1461]: time="2025-01-13T22:00:27.846201541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal,Uid:7c6d4a0afeabc6766088ed4902ca4fec,Namespace:kube-system,Attempt:0,}" Jan 13 22:00:27.851165 containerd[1461]: time="2025-01-13T22:00:27.850761548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-2-f00902ecfa.novalocal,Uid:6510a4351eb65e275d7578985c807352,Namespace:kube-system,Attempt:0,}" Jan 13 22:00:27.996753 kubelet[2242]: E0113 22:00:27.996353 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.24.4.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-f00902ecfa.novalocal?timeout=10s\": dial tcp 172.24.4.53:6443: connect: connection refused" interval="800ms" Jan 13 22:00:28.103125 kubelet[2242]: I0113 22:00:28.102956 2242 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:28.103591 kubelet[2242]: E0113 22:00:28.103499 2242 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.53:6443/api/v1/nodes\": dial tcp 172.24.4.53:6443: connect: connection refused" node="ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:28.181426 kubelet[2242]: W0113 22:00:28.181311 2242 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:28.181426 kubelet[2242]: E0113 22:00:28.181432 2242 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:28.448993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556361398.mount: Deactivated successfully. 
Jan 13 22:00:28.460981 containerd[1461]: time="2025-01-13T22:00:28.460874390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:00:28.466188 containerd[1461]: time="2025-01-13T22:00:28.466102327Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 13 22:00:28.467882 containerd[1461]: time="2025-01-13T22:00:28.467619031Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:00:28.469876 containerd[1461]: time="2025-01-13T22:00:28.469779415Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:00:28.471888 containerd[1461]: time="2025-01-13T22:00:28.471758045Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:00:28.473395 containerd[1461]: time="2025-01-13T22:00:28.473262853Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 22:00:28.474711 containerd[1461]: time="2025-01-13T22:00:28.474477719Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 22:00:28.484774 containerd[1461]: time="2025-01-13T22:00:28.484521114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:00:28.490608 
containerd[1461]: time="2025-01-13T22:00:28.489795793Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 643.461591ms" Jan 13 22:00:28.493471 containerd[1461]: time="2025-01-13T22:00:28.493413853Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 652.090086ms" Jan 13 22:00:28.495174 containerd[1461]: time="2025-01-13T22:00:28.495093592Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 644.208323ms" Jan 13 22:00:28.701567 kubelet[2242]: W0113 22:00:28.701346 2242 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:28.701567 kubelet[2242]: E0113 22:00:28.701475 2242 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:28.723038 containerd[1461]: time="2025-01-13T22:00:28.722862410Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:00:28.723038 containerd[1461]: time="2025-01-13T22:00:28.722917129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:00:28.723038 containerd[1461]: time="2025-01-13T22:00:28.722935650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:00:28.723357 containerd[1461]: time="2025-01-13T22:00:28.723210799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:00:28.724453 containerd[1461]: time="2025-01-13T22:00:28.724356053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:00:28.725863 containerd[1461]: time="2025-01-13T22:00:28.724569438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:00:28.725863 containerd[1461]: time="2025-01-13T22:00:28.725209061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:00:28.725863 containerd[1461]: time="2025-01-13T22:00:28.725298166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:00:28.725863 containerd[1461]: time="2025-01-13T22:00:28.725134128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:00:28.725863 containerd[1461]: time="2025-01-13T22:00:28.725480663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:00:28.725863 containerd[1461]: time="2025-01-13T22:00:28.725636362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:00:28.726790 containerd[1461]: time="2025-01-13T22:00:28.725844264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:00:28.753847 systemd[1]: Started cri-containerd-bb757f1715e501f0ba811e78bddf5a384395e61fbbfb0a5069a794adf031b4a9.scope - libcontainer container bb757f1715e501f0ba811e78bddf5a384395e61fbbfb0a5069a794adf031b4a9. Jan 13 22:00:28.759453 systemd[1]: Started cri-containerd-0b37f5481a29491f0ef3d05eb098f802d7874fbb8e8cd149fffc3c1280592e14.scope - libcontainer container 0b37f5481a29491f0ef3d05eb098f802d7874fbb8e8cd149fffc3c1280592e14. Jan 13 22:00:28.760928 systemd[1]: Started cri-containerd-f8074302b093455bc344900fcfd0d1c6e8b0a618b03d178da1c53ba159de2512.scope - libcontainer container f8074302b093455bc344900fcfd0d1c6e8b0a618b03d178da1c53ba159de2512. 
Jan 13 22:00:28.797566 kubelet[2242]: E0113 22:00:28.797530 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-f00902ecfa.novalocal?timeout=10s\": dial tcp 172.24.4.53:6443: connect: connection refused" interval="1.6s" Jan 13 22:00:28.819587 containerd[1461]: time="2025-01-13T22:00:28.819523415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-2-f00902ecfa.novalocal,Uid:6510a4351eb65e275d7578985c807352,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b37f5481a29491f0ef3d05eb098f802d7874fbb8e8cd149fffc3c1280592e14\"" Jan 13 22:00:28.830048 containerd[1461]: time="2025-01-13T22:00:28.829972755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal,Uid:0943fb0bac39f1dbc2c784259dd2055c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8074302b093455bc344900fcfd0d1c6e8b0a618b03d178da1c53ba159de2512\"" Jan 13 22:00:28.831973 containerd[1461]: time="2025-01-13T22:00:28.831774700Z" level=info msg="CreateContainer within sandbox \"0b37f5481a29491f0ef3d05eb098f802d7874fbb8e8cd149fffc3c1280592e14\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 22:00:28.835325 containerd[1461]: time="2025-01-13T22:00:28.835133345Z" level=info msg="CreateContainer within sandbox \"f8074302b093455bc344900fcfd0d1c6e8b0a618b03d178da1c53ba159de2512\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 22:00:28.838691 containerd[1461]: time="2025-01-13T22:00:28.838644893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal,Uid:7c6d4a0afeabc6766088ed4902ca4fec,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb757f1715e501f0ba811e78bddf5a384395e61fbbfb0a5069a794adf031b4a9\"" Jan 13 22:00:28.841716 containerd[1461]: 
time="2025-01-13T22:00:28.841640247Z" level=info msg="CreateContainer within sandbox \"bb757f1715e501f0ba811e78bddf5a384395e61fbbfb0a5069a794adf031b4a9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 22:00:28.860954 containerd[1461]: time="2025-01-13T22:00:28.860913115Z" level=info msg="CreateContainer within sandbox \"0b37f5481a29491f0ef3d05eb098f802d7874fbb8e8cd149fffc3c1280592e14\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"677b931b94155b3e6549c367e6266bb1edbfb533b14e91d7322fcb4051d04c07\"" Jan 13 22:00:28.861721 containerd[1461]: time="2025-01-13T22:00:28.861465658Z" level=info msg="StartContainer for \"677b931b94155b3e6549c367e6266bb1edbfb533b14e91d7322fcb4051d04c07\"" Jan 13 22:00:28.877377 containerd[1461]: time="2025-01-13T22:00:28.876861753Z" level=info msg="CreateContainer within sandbox \"f8074302b093455bc344900fcfd0d1c6e8b0a618b03d178da1c53ba159de2512\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1c639275502daf31d0b8cd2659f74791c9d8ddf80bdf1792f5350d9889690686\"" Jan 13 22:00:28.877782 containerd[1461]: time="2025-01-13T22:00:28.877751410Z" level=info msg="StartContainer for \"1c639275502daf31d0b8cd2659f74791c9d8ddf80bdf1792f5350d9889690686\"" Jan 13 22:00:28.884089 containerd[1461]: time="2025-01-13T22:00:28.884046321Z" level=info msg="CreateContainer within sandbox \"bb757f1715e501f0ba811e78bddf5a384395e61fbbfb0a5069a794adf031b4a9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61bf4e69e04d28f39115effbd5391d993d2b47f8c26739b1142807e7265d2852\"" Jan 13 22:00:28.884930 containerd[1461]: time="2025-01-13T22:00:28.884901915Z" level=info msg="StartContainer for \"61bf4e69e04d28f39115effbd5391d993d2b47f8c26739b1142807e7265d2852\"" Jan 13 22:00:28.893905 systemd[1]: Started cri-containerd-677b931b94155b3e6549c367e6266bb1edbfb533b14e91d7322fcb4051d04c07.scope - libcontainer container 
677b931b94155b3e6549c367e6266bb1edbfb533b14e91d7322fcb4051d04c07. Jan 13 22:00:28.906780 kubelet[2242]: W0113 22:00:28.906491 2242 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-f00902ecfa.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:28.906780 kubelet[2242]: E0113 22:00:28.906571 2242 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-f00902ecfa.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:28.908102 kubelet[2242]: I0113 22:00:28.907980 2242 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:28.908249 kubelet[2242]: E0113 22:00:28.908200 2242 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.53:6443/api/v1/nodes\": dial tcp 172.24.4.53:6443: connect: connection refused" node="ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:28.925824 systemd[1]: Started cri-containerd-1c639275502daf31d0b8cd2659f74791c9d8ddf80bdf1792f5350d9889690686.scope - libcontainer container 1c639275502daf31d0b8cd2659f74791c9d8ddf80bdf1792f5350d9889690686. Jan 13 22:00:28.930783 systemd[1]: Started cri-containerd-61bf4e69e04d28f39115effbd5391d993d2b47f8c26739b1142807e7265d2852.scope - libcontainer container 61bf4e69e04d28f39115effbd5391d993d2b47f8c26739b1142807e7265d2852. 
Jan 13 22:00:28.962911 containerd[1461]: time="2025-01-13T22:00:28.962815306Z" level=info msg="StartContainer for \"677b931b94155b3e6549c367e6266bb1edbfb533b14e91d7322fcb4051d04c07\" returns successfully" Jan 13 22:00:28.999254 containerd[1461]: time="2025-01-13T22:00:28.999205146Z" level=info msg="StartContainer for \"1c639275502daf31d0b8cd2659f74791c9d8ddf80bdf1792f5350d9889690686\" returns successfully" Jan 13 22:00:29.006047 containerd[1461]: time="2025-01-13T22:00:29.005997299Z" level=info msg="StartContainer for \"61bf4e69e04d28f39115effbd5391d993d2b47f8c26739b1142807e7265d2852\" returns successfully" Jan 13 22:00:29.006626 kubelet[2242]: W0113 22:00:29.006574 2242 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:29.008057 kubelet[2242]: E0113 22:00:29.008001 2242 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.53:6443: connect: connection refused Jan 13 22:00:30.510759 kubelet[2242]: I0113 22:00:30.510342 2242 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:31.205117 kubelet[2242]: E0113 22:00:31.205093 2242 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-2-f00902ecfa.novalocal\" not found" node="ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:31.259704 kubelet[2242]: I0113 22:00:31.259277 2242 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:31.272028 kubelet[2242]: E0113 22:00:31.271985 2242 kubelet_node_status.go:462] "Error getting the current node from 
lister" err="node \"ci-4081-3-0-2-f00902ecfa.novalocal\" not found" Jan 13 22:00:31.372998 kubelet[2242]: E0113 22:00:31.372959 2242 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-2-f00902ecfa.novalocal\" not found" Jan 13 22:00:31.473215 kubelet[2242]: E0113 22:00:31.473102 2242 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-2-f00902ecfa.novalocal\" not found" Jan 13 22:00:31.574310 kubelet[2242]: E0113 22:00:31.574250 2242 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-2-f00902ecfa.novalocal\" not found" Jan 13 22:00:31.675184 kubelet[2242]: E0113 22:00:31.675096 2242 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-2-f00902ecfa.novalocal\" not found" Jan 13 22:00:31.776309 kubelet[2242]: E0113 22:00:31.776247 2242 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-2-f00902ecfa.novalocal\" not found" Jan 13 22:00:31.876608 kubelet[2242]: E0113 22:00:31.876540 2242 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-2-f00902ecfa.novalocal\" not found" Jan 13 22:00:31.977387 kubelet[2242]: E0113 22:00:31.977339 2242 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-2-f00902ecfa.novalocal\" not found" Jan 13 22:00:32.364873 kubelet[2242]: I0113 22:00:32.364795 2242 apiserver.go:52] "Watching apiserver" Jan 13 22:00:32.395802 kubelet[2242]: I0113 22:00:32.395546 2242 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 22:00:33.935545 kubelet[2242]: W0113 22:00:33.935459 2242 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:00:34.427564 systemd[1]: Reloading requested 
from client PID 2511 ('systemctl') (unit session-9.scope)... Jan 13 22:00:34.427604 systemd[1]: Reloading... Jan 13 22:00:34.534705 zram_generator::config[2550]: No configuration found. Jan 13 22:00:34.683465 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 22:00:34.782444 systemd[1]: Reloading finished in 354 ms. Jan 13 22:00:34.821989 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:00:34.839971 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 22:00:34.840193 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:00:34.840250 systemd[1]: kubelet.service: Consumed 1.290s CPU time, 111.8M memory peak, 0B memory swap peak. Jan 13 22:00:34.845253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:00:35.076996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:00:35.077018 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 22:00:35.161856 kubelet[2614]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 22:00:35.161856 kubelet[2614]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 22:00:35.161856 kubelet[2614]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 22:00:35.162205 kubelet[2614]: I0113 22:00:35.161907 2614 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 22:00:35.166128 kubelet[2614]: I0113 22:00:35.166103 2614 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 22:00:35.166712 kubelet[2614]: I0113 22:00:35.166261 2614 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 22:00:35.166712 kubelet[2614]: I0113 22:00:35.166547 2614 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 22:00:35.169438 kubelet[2614]: I0113 22:00:35.169389 2614 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 22:00:35.175444 kubelet[2614]: I0113 22:00:35.175400 2614 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 22:00:35.189124 kubelet[2614]: I0113 22:00:35.189100 2614 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 22:00:35.189493 kubelet[2614]: I0113 22:00:35.189481 2614 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 22:00:35.189878 kubelet[2614]: I0113 22:00:35.189735 2614 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 22:00:35.189878 kubelet[2614]: I0113 22:00:35.189764 2614 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 22:00:35.189878 kubelet[2614]: I0113 22:00:35.189777 2614 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 22:00:35.189878 kubelet[2614]: I0113 
22:00:35.189804 2614 state_mem.go:36] "Initialized new in-memory state store" Jan 13 22:00:35.190542 kubelet[2614]: I0113 22:00:35.190495 2614 kubelet.go:396] "Attempting to sync node with API server" Jan 13 22:00:35.190542 kubelet[2614]: I0113 22:00:35.190520 2614 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 22:00:35.191912 kubelet[2614]: I0113 22:00:35.191734 2614 kubelet.go:312] "Adding apiserver pod source" Jan 13 22:00:35.191912 kubelet[2614]: I0113 22:00:35.191756 2614 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 22:00:35.197137 kubelet[2614]: I0113 22:00:35.197107 2614 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 22:00:35.197747 kubelet[2614]: I0113 22:00:35.197308 2614 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 22:00:35.198761 kubelet[2614]: I0113 22:00:35.198738 2614 server.go:1256] "Started kubelet" Jan 13 22:00:35.220007 kubelet[2614]: I0113 22:00:35.219628 2614 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 22:00:35.221836 kubelet[2614]: I0113 22:00:35.221804 2614 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 22:00:35.226088 kubelet[2614]: I0113 22:00:35.226053 2614 server.go:461] "Adding debug handlers to kubelet server" Jan 13 22:00:35.235230 kubelet[2614]: I0113 22:00:35.235189 2614 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 22:00:35.236558 kubelet[2614]: I0113 22:00:35.236529 2614 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 22:00:35.237703 kubelet[2614]: I0113 22:00:35.236818 2614 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 22:00:35.243998 kubelet[2614]: I0113 22:00:35.243961 2614 desired_state_of_world_populator.go:151] "Desired 
state populator starts to run" Jan 13 22:00:35.244188 kubelet[2614]: I0113 22:00:35.244152 2614 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 22:00:35.248041 kubelet[2614]: I0113 22:00:35.246994 2614 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 22:00:35.251297 kubelet[2614]: I0113 22:00:35.251266 2614 factory.go:221] Registration of the systemd container factory successfully Jan 13 22:00:35.251405 kubelet[2614]: I0113 22:00:35.251371 2614 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 22:00:35.267487 kubelet[2614]: I0113 22:00:35.266340 2614 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 22:00:35.267487 kubelet[2614]: I0113 22:00:35.266378 2614 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 22:00:35.267487 kubelet[2614]: I0113 22:00:35.266397 2614 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 22:00:35.267487 kubelet[2614]: E0113 22:00:35.266467 2614 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 22:00:35.274409 kubelet[2614]: I0113 22:00:35.274378 2614 factory.go:221] Registration of the containerd container factory successfully Jan 13 22:00:35.306947 kubelet[2614]: E0113 22:00:35.306800 2614 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 22:00:35.343126 kubelet[2614]: I0113 22:00:35.341733 2614 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.360354 kubelet[2614]: I0113 22:00:35.360144 2614 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.360354 kubelet[2614]: I0113 22:00:35.360216 2614 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.366702 kubelet[2614]: E0113 22:00:35.366617 2614 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 22:00:35.384516 kubelet[2614]: I0113 22:00:35.384235 2614 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 22:00:35.384516 kubelet[2614]: I0113 22:00:35.384255 2614 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 22:00:35.384516 kubelet[2614]: I0113 22:00:35.384283 2614 state_mem.go:36] "Initialized new in-memory state store" Jan 13 22:00:35.384516 kubelet[2614]: I0113 22:00:35.384432 2614 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 22:00:35.384516 kubelet[2614]: I0113 22:00:35.384453 2614 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 22:00:35.384516 kubelet[2614]: I0113 22:00:35.384460 2614 policy_none.go:49] "None policy: Start" Jan 13 22:00:35.386707 kubelet[2614]: I0113 22:00:35.385984 2614 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 22:00:35.386707 kubelet[2614]: I0113 22:00:35.386009 2614 state_mem.go:35] "Initializing new in-memory state store" Jan 13 22:00:35.386707 kubelet[2614]: I0113 22:00:35.386128 2614 state_mem.go:75] "Updated machine memory state" Jan 13 22:00:35.392381 kubelet[2614]: I0113 22:00:35.392363 2614 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 22:00:35.392904 kubelet[2614]: I0113 22:00:35.392880 2614 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 22:00:35.570892 kubelet[2614]: I0113 22:00:35.570850 2614 topology_manager.go:215] "Topology Admit Handler" podUID="7c6d4a0afeabc6766088ed4902ca4fec" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.571028 kubelet[2614]: I0113 22:00:35.570940 2614 topology_manager.go:215] "Topology Admit Handler" podUID="6510a4351eb65e275d7578985c807352" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.571028 kubelet[2614]: I0113 22:00:35.570979 2614 topology_manager.go:215] "Topology Admit Handler" podUID="0943fb0bac39f1dbc2c784259dd2055c" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.580723 kubelet[2614]: W0113 22:00:35.580523 2614 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:00:35.581889 kubelet[2614]: W0113 22:00:35.581877 2614 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:00:35.582043 kubelet[2614]: E0113 22:00:35.582013 2614 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-0-2-f00902ecfa.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.583541 kubelet[2614]: W0113 22:00:35.583512 2614 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:00:35.647691 kubelet[2614]: I0113 22:00:35.647275 2614 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c6d4a0afeabc6766088ed4902ca4fec-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"7c6d4a0afeabc6766088ed4902ca4fec\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.647691 kubelet[2614]: I0113 22:00:35.647371 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c6d4a0afeabc6766088ed4902ca4fec-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"7c6d4a0afeabc6766088ed4902ca4fec\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.647691 kubelet[2614]: I0113 22:00:35.647417 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0943fb0bac39f1dbc2c784259dd2055c-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"0943fb0bac39f1dbc2c784259dd2055c\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.647691 kubelet[2614]: I0113 22:00:35.647445 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0943fb0bac39f1dbc2c784259dd2055c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"0943fb0bac39f1dbc2c784259dd2055c\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.647927 kubelet[2614]: I0113 22:00:35.647470 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c6d4a0afeabc6766088ed4902ca4fec-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"7c6d4a0afeabc6766088ed4902ca4fec\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.647927 kubelet[2614]: I0113 22:00:35.647495 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c6d4a0afeabc6766088ed4902ca4fec-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"7c6d4a0afeabc6766088ed4902ca4fec\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.647927 kubelet[2614]: I0113 22:00:35.647521 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c6d4a0afeabc6766088ed4902ca4fec-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"7c6d4a0afeabc6766088ed4902ca4fec\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.647927 kubelet[2614]: I0113 22:00:35.647558 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6510a4351eb65e275d7578985c807352-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"6510a4351eb65e275d7578985c807352\") " pod="kube-system/kube-scheduler-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:35.648028 kubelet[2614]: I0113 22:00:35.647580 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0943fb0bac39f1dbc2c784259dd2055c-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal\" (UID: \"0943fb0bac39f1dbc2c784259dd2055c\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 
22:00:36.197928 kubelet[2614]: I0113 22:00:36.197732 2614 apiserver.go:52] "Watching apiserver" Jan 13 22:00:36.245087 kubelet[2614]: I0113 22:00:36.245040 2614 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 22:00:36.331308 kubelet[2614]: W0113 22:00:36.331257 2614 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:00:36.331501 kubelet[2614]: E0113 22:00:36.331350 2614 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-0-2-f00902ecfa.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-2-f00902ecfa.novalocal" Jan 13 22:00:36.375248 kubelet[2614]: I0113 22:00:36.375194 2614 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-2-f00902ecfa.novalocal" podStartSLOduration=3.375123862 podStartE2EDuration="3.375123862s" podCreationTimestamp="2025-01-13 22:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:00:36.374645892 +0000 UTC m=+1.289032961" watchObservedRunningTime="2025-01-13 22:00:36.375123862 +0000 UTC m=+1.289510921" Jan 13 22:00:36.401001 kubelet[2614]: I0113 22:00:36.400361 2614 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-2-f00902ecfa.novalocal" podStartSLOduration=1.400316629 podStartE2EDuration="1.400316629s" podCreationTimestamp="2025-01-13 22:00:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:00:36.38872843 +0000 UTC m=+1.303115499" watchObservedRunningTime="2025-01-13 22:00:36.400316629 +0000 UTC m=+1.314703698" Jan 13 22:00:36.574289 sudo[1660]: pam_unix(sudo:session): session closed for user root 
Jan 13 22:00:36.808992 sshd[1657]: pam_unix(sshd:session): session closed for user core Jan 13 22:00:36.817392 systemd[1]: sshd@6-172.24.4.53:22-172.24.4.1:44274.service: Deactivated successfully. Jan 13 22:00:36.821527 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 22:00:36.822083 systemd[1]: session-9.scope: Consumed 6.350s CPU time, 192.4M memory peak, 0B memory swap peak. Jan 13 22:00:36.823603 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit. Jan 13 22:00:36.826412 systemd-logind[1439]: Removed session 9. Jan 13 22:00:40.781650 kubelet[2614]: I0113 22:00:40.781224 2614 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f00902ecfa.novalocal" podStartSLOduration=5.780797931 podStartE2EDuration="5.780797931s" podCreationTimestamp="2025-01-13 22:00:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:00:36.400647722 +0000 UTC m=+1.315034791" watchObservedRunningTime="2025-01-13 22:00:40.780797931 +0000 UTC m=+5.695185060" Jan 13 22:00:48.324392 kubelet[2614]: I0113 22:00:48.324368 2614 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 22:00:48.325289 containerd[1461]: time="2025-01-13T22:00:48.324981738Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 22:00:48.325641 kubelet[2614]: I0113 22:00:48.325296 2614 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 22:00:49.185161 kubelet[2614]: I0113 22:00:49.185067 2614 topology_manager.go:215] "Topology Admit Handler" podUID="6301b9ad-4530-4051-b3da-757e7a7684b1" podNamespace="kube-system" podName="kube-proxy-vxnzd" Jan 13 22:00:49.193465 kubelet[2614]: I0113 22:00:49.193369 2614 topology_manager.go:215] "Topology Admit Handler" podUID="bbe08794-6a36-4321-8f34-294848c2c3b5" podNamespace="kube-flannel" podName="kube-flannel-ds-jc6jp" Jan 13 22:00:49.210886 systemd[1]: Created slice kubepods-besteffort-pod6301b9ad_4530_4051_b3da_757e7a7684b1.slice - libcontainer container kubepods-besteffort-pod6301b9ad_4530_4051_b3da_757e7a7684b1.slice. Jan 13 22:00:49.225343 systemd[1]: Created slice kubepods-burstable-podbbe08794_6a36_4321_8f34_294848c2c3b5.slice - libcontainer container kubepods-burstable-podbbe08794_6a36_4321_8f34_294848c2c3b5.slice. 
Jan 13 22:00:49.236916 kubelet[2614]: I0113 22:00:49.236874 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbe08794-6a36-4321-8f34-294848c2c3b5-xtables-lock\") pod \"kube-flannel-ds-jc6jp\" (UID: \"bbe08794-6a36-4321-8f34-294848c2c3b5\") " pod="kube-flannel/kube-flannel-ds-jc6jp" Jan 13 22:00:49.237417 kubelet[2614]: I0113 22:00:49.236933 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f49b\" (UniqueName: \"kubernetes.io/projected/bbe08794-6a36-4321-8f34-294848c2c3b5-kube-api-access-4f49b\") pod \"kube-flannel-ds-jc6jp\" (UID: \"bbe08794-6a36-4321-8f34-294848c2c3b5\") " pod="kube-flannel/kube-flannel-ds-jc6jp" Jan 13 22:00:49.237417 kubelet[2614]: I0113 22:00:49.236961 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6301b9ad-4530-4051-b3da-757e7a7684b1-xtables-lock\") pod \"kube-proxy-vxnzd\" (UID: \"6301b9ad-4530-4051-b3da-757e7a7684b1\") " pod="kube-system/kube-proxy-vxnzd" Jan 13 22:00:49.237417 kubelet[2614]: I0113 22:00:49.237020 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88q22\" (UniqueName: \"kubernetes.io/projected/6301b9ad-4530-4051-b3da-757e7a7684b1-kube-api-access-88q22\") pod \"kube-proxy-vxnzd\" (UID: \"6301b9ad-4530-4051-b3da-757e7a7684b1\") " pod="kube-system/kube-proxy-vxnzd" Jan 13 22:00:49.237417 kubelet[2614]: I0113 22:00:49.237088 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/bbe08794-6a36-4321-8f34-294848c2c3b5-cni-plugin\") pod \"kube-flannel-ds-jc6jp\" (UID: \"bbe08794-6a36-4321-8f34-294848c2c3b5\") " pod="kube-flannel/kube-flannel-ds-jc6jp" Jan 13 22:00:49.237417 kubelet[2614]: 
I0113 22:00:49.237134 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/bbe08794-6a36-4321-8f34-294848c2c3b5-cni\") pod \"kube-flannel-ds-jc6jp\" (UID: \"bbe08794-6a36-4321-8f34-294848c2c3b5\") " pod="kube-flannel/kube-flannel-ds-jc6jp" Jan 13 22:00:49.237555 kubelet[2614]: I0113 22:00:49.237213 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/bbe08794-6a36-4321-8f34-294848c2c3b5-flannel-cfg\") pod \"kube-flannel-ds-jc6jp\" (UID: \"bbe08794-6a36-4321-8f34-294848c2c3b5\") " pod="kube-flannel/kube-flannel-ds-jc6jp" Jan 13 22:00:49.237555 kubelet[2614]: I0113 22:00:49.237238 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6301b9ad-4530-4051-b3da-757e7a7684b1-kube-proxy\") pod \"kube-proxy-vxnzd\" (UID: \"6301b9ad-4530-4051-b3da-757e7a7684b1\") " pod="kube-system/kube-proxy-vxnzd" Jan 13 22:00:49.237555 kubelet[2614]: I0113 22:00:49.237260 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6301b9ad-4530-4051-b3da-757e7a7684b1-lib-modules\") pod \"kube-proxy-vxnzd\" (UID: \"6301b9ad-4530-4051-b3da-757e7a7684b1\") " pod="kube-system/kube-proxy-vxnzd" Jan 13 22:00:49.237555 kubelet[2614]: I0113 22:00:49.237311 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bbe08794-6a36-4321-8f34-294848c2c3b5-run\") pod \"kube-flannel-ds-jc6jp\" (UID: \"bbe08794-6a36-4321-8f34-294848c2c3b5\") " pod="kube-flannel/kube-flannel-ds-jc6jp" Jan 13 22:00:49.521435 containerd[1461]: time="2025-01-13T22:00:49.521365157Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-vxnzd,Uid:6301b9ad-4530-4051-b3da-757e7a7684b1,Namespace:kube-system,Attempt:0,}" Jan 13 22:00:49.532566 containerd[1461]: time="2025-01-13T22:00:49.531929463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-jc6jp,Uid:bbe08794-6a36-4321-8f34-294848c2c3b5,Namespace:kube-flannel,Attempt:0,}" Jan 13 22:00:49.626829 containerd[1461]: time="2025-01-13T22:00:49.626306274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:00:49.626829 containerd[1461]: time="2025-01-13T22:00:49.626380163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:00:49.626829 containerd[1461]: time="2025-01-13T22:00:49.626398059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:00:49.626829 containerd[1461]: time="2025-01-13T22:00:49.626479845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:00:49.626829 containerd[1461]: time="2025-01-13T22:00:49.625661872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:00:49.626829 containerd[1461]: time="2025-01-13T22:00:49.626776093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:00:49.626829 containerd[1461]: time="2025-01-13T22:00:49.626802617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:00:49.627621 containerd[1461]: time="2025-01-13T22:00:49.627384563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:00:49.653109 systemd[1]: Started cri-containerd-c3b045962431c58265342c90e6f25be7ce49bafaa6108dc1c0b77acfea13c1e3.scope - libcontainer container c3b045962431c58265342c90e6f25be7ce49bafaa6108dc1c0b77acfea13c1e3. Jan 13 22:00:49.659320 systemd[1]: Started cri-containerd-cf6d9019012fb68d427cf80fa8705f629d5c5548c8867c598f2fa6d8f3a7d84a.scope - libcontainer container cf6d9019012fb68d427cf80fa8705f629d5c5548c8867c598f2fa6d8f3a7d84a. Jan 13 22:00:49.686023 containerd[1461]: time="2025-01-13T22:00:49.685981015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vxnzd,Uid:6301b9ad-4530-4051-b3da-757e7a7684b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3b045962431c58265342c90e6f25be7ce49bafaa6108dc1c0b77acfea13c1e3\"" Jan 13 22:00:49.689767 containerd[1461]: time="2025-01-13T22:00:49.689273047Z" level=info msg="CreateContainer within sandbox \"c3b045962431c58265342c90e6f25be7ce49bafaa6108dc1c0b77acfea13c1e3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 22:00:49.712777 containerd[1461]: time="2025-01-13T22:00:49.712433596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-jc6jp,Uid:bbe08794-6a36-4321-8f34-294848c2c3b5,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"cf6d9019012fb68d427cf80fa8705f629d5c5548c8867c598f2fa6d8f3a7d84a\"" Jan 13 22:00:49.717547 containerd[1461]: time="2025-01-13T22:00:49.717067129Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 13 22:00:49.723422 containerd[1461]: time="2025-01-13T22:00:49.723391348Z" level=info msg="CreateContainer within sandbox \"c3b045962431c58265342c90e6f25be7ce49bafaa6108dc1c0b77acfea13c1e3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"825fe4fae4ff827d54d9ff870de4c49ec581a1c86505870bd22b097ac4b91da6\"" Jan 13 22:00:49.724342 containerd[1461]: time="2025-01-13T22:00:49.724315896Z" level=info msg="StartContainer 
for \"825fe4fae4ff827d54d9ff870de4c49ec581a1c86505870bd22b097ac4b91da6\"" Jan 13 22:00:49.750832 systemd[1]: Started cri-containerd-825fe4fae4ff827d54d9ff870de4c49ec581a1c86505870bd22b097ac4b91da6.scope - libcontainer container 825fe4fae4ff827d54d9ff870de4c49ec581a1c86505870bd22b097ac4b91da6. Jan 13 22:00:49.786756 containerd[1461]: time="2025-01-13T22:00:49.785694479Z" level=info msg="StartContainer for \"825fe4fae4ff827d54d9ff870de4c49ec581a1c86505870bd22b097ac4b91da6\" returns successfully" Jan 13 22:00:50.387928 kubelet[2614]: I0113 22:00:50.387821 2614 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vxnzd" podStartSLOduration=1.387742696 podStartE2EDuration="1.387742696s" podCreationTimestamp="2025-01-13 22:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:00:50.386194803 +0000 UTC m=+15.300581922" watchObservedRunningTime="2025-01-13 22:00:50.387742696 +0000 UTC m=+15.302129815" Jan 13 22:00:51.998232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2816607568.mount: Deactivated successfully. 
Jan 13 22:00:52.074109 containerd[1461]: time="2025-01-13T22:00:52.074049379Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:00:52.076265 containerd[1461]: time="2025-01-13T22:00:52.075850812Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 13 22:00:52.078007 containerd[1461]: time="2025-01-13T22:00:52.077944582Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:00:52.082264 containerd[1461]: time="2025-01-13T22:00:52.082152514Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:00:52.084230 containerd[1461]: time="2025-01-13T22:00:52.082921450Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.365818319s" Jan 13 22:00:52.084230 containerd[1461]: time="2025-01-13T22:00:52.082957543Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 13 22:00:52.085438 containerd[1461]: time="2025-01-13T22:00:52.085201646Z" level=info msg="CreateContainer within sandbox \"cf6d9019012fb68d427cf80fa8705f629d5c5548c8867c598f2fa6d8f3a7d84a\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 13 22:00:52.121522 containerd[1461]: 
time="2025-01-13T22:00:52.121444590Z" level=info msg="CreateContainer within sandbox \"cf6d9019012fb68d427cf80fa8705f629d5c5548c8867c598f2fa6d8f3a7d84a\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"936a949bc6aed4e3c2c9af680e108c62226f8304e2bf1a398d83be05fb3e87fe\"" Jan 13 22:00:52.122872 containerd[1461]: time="2025-01-13T22:00:52.122805868Z" level=info msg="StartContainer for \"936a949bc6aed4e3c2c9af680e108c62226f8304e2bf1a398d83be05fb3e87fe\"" Jan 13 22:00:52.158970 systemd[1]: Started cri-containerd-936a949bc6aed4e3c2c9af680e108c62226f8304e2bf1a398d83be05fb3e87fe.scope - libcontainer container 936a949bc6aed4e3c2c9af680e108c62226f8304e2bf1a398d83be05fb3e87fe. Jan 13 22:00:52.191959 systemd[1]: cri-containerd-936a949bc6aed4e3c2c9af680e108c62226f8304e2bf1a398d83be05fb3e87fe.scope: Deactivated successfully. Jan 13 22:00:52.203662 containerd[1461]: time="2025-01-13T22:00:52.203560085Z" level=info msg="StartContainer for \"936a949bc6aed4e3c2c9af680e108c62226f8304e2bf1a398d83be05fb3e87fe\" returns successfully" Jan 13 22:00:52.330972 containerd[1461]: time="2025-01-13T22:00:52.330805156Z" level=info msg="shim disconnected" id=936a949bc6aed4e3c2c9af680e108c62226f8304e2bf1a398d83be05fb3e87fe namespace=k8s.io Jan 13 22:00:52.330972 containerd[1461]: time="2025-01-13T22:00:52.330902231Z" level=warning msg="cleaning up after shim disconnected" id=936a949bc6aed4e3c2c9af680e108c62226f8304e2bf1a398d83be05fb3e87fe namespace=k8s.io Jan 13 22:00:52.330972 containerd[1461]: time="2025-01-13T22:00:52.330925758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 22:00:52.373380 containerd[1461]: time="2025-01-13T22:00:52.373145252Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 13 22:00:52.864898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-936a949bc6aed4e3c2c9af680e108c62226f8304e2bf1a398d83be05fb3e87fe-rootfs.mount: Deactivated successfully. 
Jan 13 22:00:54.596786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount770547049.mount: Deactivated successfully. Jan 13 22:00:55.535653 containerd[1461]: time="2025-01-13T22:00:55.535600856Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:00:55.536746 containerd[1461]: time="2025-01-13T22:00:55.536709916Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jan 13 22:00:55.537828 containerd[1461]: time="2025-01-13T22:00:55.537796062Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:00:55.541358 containerd[1461]: time="2025-01-13T22:00:55.541308598Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:00:55.543234 containerd[1461]: time="2025-01-13T22:00:55.542543652Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.169361668s" Jan 13 22:00:55.543234 containerd[1461]: time="2025-01-13T22:00:55.542577409Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 13 22:00:55.545607 containerd[1461]: time="2025-01-13T22:00:55.545574062Z" level=info msg="CreateContainer within sandbox \"cf6d9019012fb68d427cf80fa8705f629d5c5548c8867c598f2fa6d8f3a7d84a\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 22:00:55.568439 containerd[1461]: time="2025-01-13T22:00:55.568390214Z" level=info msg="CreateContainer within sandbox \"cf6d9019012fb68d427cf80fa8705f629d5c5548c8867c598f2fa6d8f3a7d84a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d8ecb043b72221caefc6ed318476959976f523c402ffdd0454f2ee2b0523b18e\"" Jan 13 22:00:55.569964 containerd[1461]: time="2025-01-13T22:00:55.569160488Z" level=info msg="StartContainer for \"d8ecb043b72221caefc6ed318476959976f523c402ffdd0454f2ee2b0523b18e\"" Jan 13 22:00:55.601787 systemd[1]: run-containerd-runc-k8s.io-d8ecb043b72221caefc6ed318476959976f523c402ffdd0454f2ee2b0523b18e-runc.0Z5NAj.mount: Deactivated successfully. Jan 13 22:00:55.610829 systemd[1]: Started cri-containerd-d8ecb043b72221caefc6ed318476959976f523c402ffdd0454f2ee2b0523b18e.scope - libcontainer container d8ecb043b72221caefc6ed318476959976f523c402ffdd0454f2ee2b0523b18e. Jan 13 22:00:55.633578 systemd[1]: cri-containerd-d8ecb043b72221caefc6ed318476959976f523c402ffdd0454f2ee2b0523b18e.scope: Deactivated successfully. 
Jan 13 22:00:55.641597 containerd[1461]: time="2025-01-13T22:00:55.641517925Z" level=info msg="StartContainer for \"d8ecb043b72221caefc6ed318476959976f523c402ffdd0454f2ee2b0523b18e\" returns successfully" Jan 13 22:00:55.675384 kubelet[2614]: I0113 22:00:55.675347 2614 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 22:00:55.769905 kubelet[2614]: I0113 22:00:55.769475 2614 topology_manager.go:215] "Topology Admit Handler" podUID="b40e5874-5773-4bf7-843e-ddf6caa00b08" podNamespace="kube-system" podName="coredns-76f75df574-zjrjb" Jan 13 22:00:55.772817 kubelet[2614]: I0113 22:00:55.772386 2614 topology_manager.go:215] "Topology Admit Handler" podUID="cae0f72e-ee32-4614-be10-c8fd88414007" podNamespace="kube-system" podName="coredns-76f75df574-n59dq" Jan 13 22:00:55.779953 kubelet[2614]: I0113 22:00:55.779808 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4ztn\" (UniqueName: \"kubernetes.io/projected/b40e5874-5773-4bf7-843e-ddf6caa00b08-kube-api-access-q4ztn\") pod \"coredns-76f75df574-zjrjb\" (UID: \"b40e5874-5773-4bf7-843e-ddf6caa00b08\") " pod="kube-system/coredns-76f75df574-zjrjb" Jan 13 22:00:55.780477 kubelet[2614]: I0113 22:00:55.780415 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cae0f72e-ee32-4614-be10-c8fd88414007-config-volume\") pod \"coredns-76f75df574-n59dq\" (UID: \"cae0f72e-ee32-4614-be10-c8fd88414007\") " pod="kube-system/coredns-76f75df574-n59dq" Jan 13 22:00:55.782153 kubelet[2614]: I0113 22:00:55.780783 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcg9g\" (UniqueName: \"kubernetes.io/projected/cae0f72e-ee32-4614-be10-c8fd88414007-kube-api-access-jcg9g\") pod \"coredns-76f75df574-n59dq\" (UID: \"cae0f72e-ee32-4614-be10-c8fd88414007\") " 
pod="kube-system/coredns-76f75df574-n59dq" Jan 13 22:00:55.782153 kubelet[2614]: I0113 22:00:55.780841 2614 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b40e5874-5773-4bf7-843e-ddf6caa00b08-config-volume\") pod \"coredns-76f75df574-zjrjb\" (UID: \"b40e5874-5773-4bf7-843e-ddf6caa00b08\") " pod="kube-system/coredns-76f75df574-zjrjb" Jan 13 22:00:55.795445 systemd[1]: Created slice kubepods-burstable-podb40e5874_5773_4bf7_843e_ddf6caa00b08.slice - libcontainer container kubepods-burstable-podb40e5874_5773_4bf7_843e_ddf6caa00b08.slice. Jan 13 22:00:55.808513 systemd[1]: Created slice kubepods-burstable-podcae0f72e_ee32_4614_be10_c8fd88414007.slice - libcontainer container kubepods-burstable-podcae0f72e_ee32_4614_be10_c8fd88414007.slice. Jan 13 22:00:55.985177 containerd[1461]: time="2025-01-13T22:00:55.985007297Z" level=info msg="shim disconnected" id=d8ecb043b72221caefc6ed318476959976f523c402ffdd0454f2ee2b0523b18e namespace=k8s.io Jan 13 22:00:55.985177 containerd[1461]: time="2025-01-13T22:00:55.985150683Z" level=warning msg="cleaning up after shim disconnected" id=d8ecb043b72221caefc6ed318476959976f523c402ffdd0454f2ee2b0523b18e namespace=k8s.io Jan 13 22:00:55.985177 containerd[1461]: time="2025-01-13T22:00:55.985178319Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 22:00:56.107495 containerd[1461]: time="2025-01-13T22:00:56.107360200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjrjb,Uid:b40e5874-5773-4bf7-843e-ddf6caa00b08,Namespace:kube-system,Attempt:0,}" Jan 13 22:00:56.112568 containerd[1461]: time="2025-01-13T22:00:56.112506286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n59dq,Uid:cae0f72e-ee32-4614-be10-c8fd88414007,Namespace:kube-system,Attempt:0,}" Jan 13 22:00:56.161967 containerd[1461]: time="2025-01-13T22:00:56.161893627Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-zjrjb,Uid:b40e5874-5773-4bf7-843e-ddf6caa00b08,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e6f0526f0fdebc73eb93ccdc63bfc7f4ca85da1d6e79a1bd68b51b1a41b84b4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 22:00:56.162940 kubelet[2614]: E0113 22:00:56.162232 2614 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e6f0526f0fdebc73eb93ccdc63bfc7f4ca85da1d6e79a1bd68b51b1a41b84b4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 22:00:56.162940 kubelet[2614]: E0113 22:00:56.162296 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e6f0526f0fdebc73eb93ccdc63bfc7f4ca85da1d6e79a1bd68b51b1a41b84b4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-zjrjb" Jan 13 22:00:56.162940 kubelet[2614]: E0113 22:00:56.162328 2614 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e6f0526f0fdebc73eb93ccdc63bfc7f4ca85da1d6e79a1bd68b51b1a41b84b4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-zjrjb" Jan 13 22:00:56.162940 kubelet[2614]: E0113 22:00:56.162393 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zjrjb_kube-system(b40e5874-5773-4bf7-843e-ddf6caa00b08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-zjrjb_kube-system(b40e5874-5773-4bf7-843e-ddf6caa00b08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e6f0526f0fdebc73eb93ccdc63bfc7f4ca85da1d6e79a1bd68b51b1a41b84b4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-zjrjb" podUID="b40e5874-5773-4bf7-843e-ddf6caa00b08" Jan 13 22:00:56.174467 containerd[1461]: time="2025-01-13T22:00:56.174391353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n59dq,Uid:cae0f72e-ee32-4614-be10-c8fd88414007,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8cf5f3976818356a22e92a3812fbf7529def55c1f5082fd4d8742f7becf3e5ff\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 22:00:56.175023 kubelet[2614]: E0113 22:00:56.174994 2614 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf5f3976818356a22e92a3812fbf7529def55c1f5082fd4d8742f7becf3e5ff\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 22:00:56.175102 kubelet[2614]: E0113 22:00:56.175074 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf5f3976818356a22e92a3812fbf7529def55c1f5082fd4d8742f7becf3e5ff\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-n59dq" Jan 13 22:00:56.175156 kubelet[2614]: E0113 22:00:56.175135 2614 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8cf5f3976818356a22e92a3812fbf7529def55c1f5082fd4d8742f7becf3e5ff\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-n59dq" Jan 13 22:00:56.176528 kubelet[2614]: E0113 22:00:56.175236 2614 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-n59dq_kube-system(cae0f72e-ee32-4614-be10-c8fd88414007)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-n59dq_kube-system(cae0f72e-ee32-4614-be10-c8fd88414007)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8cf5f3976818356a22e92a3812fbf7529def55c1f5082fd4d8742f7becf3e5ff\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-n59dq" podUID="cae0f72e-ee32-4614-be10-c8fd88414007" Jan 13 22:00:56.390068 containerd[1461]: time="2025-01-13T22:00:56.389824789Z" level=info msg="CreateContainer within sandbox \"cf6d9019012fb68d427cf80fa8705f629d5c5548c8867c598f2fa6d8f3a7d84a\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 13 22:00:56.420388 containerd[1461]: time="2025-01-13T22:00:56.420283479Z" level=info msg="CreateContainer within sandbox \"cf6d9019012fb68d427cf80fa8705f629d5c5548c8867c598f2fa6d8f3a7d84a\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"c25ca00632dc752dc2c10c7ae1197b69850a9fcae392c4089e5ebf192d0eebfc\"" Jan 13 22:00:56.424741 containerd[1461]: time="2025-01-13T22:00:56.422666277Z" level=info msg="StartContainer for \"c25ca00632dc752dc2c10c7ae1197b69850a9fcae392c4089e5ebf192d0eebfc\"" Jan 13 22:00:56.477023 systemd[1]: Started cri-containerd-c25ca00632dc752dc2c10c7ae1197b69850a9fcae392c4089e5ebf192d0eebfc.scope - libcontainer container c25ca00632dc752dc2c10c7ae1197b69850a9fcae392c4089e5ebf192d0eebfc. 
Jan 13 22:00:56.524029 containerd[1461]: time="2025-01-13T22:00:56.523977305Z" level=info msg="StartContainer for \"c25ca00632dc752dc2c10c7ae1197b69850a9fcae392c4089e5ebf192d0eebfc\" returns successfully" Jan 13 22:00:56.565334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8ecb043b72221caefc6ed318476959976f523c402ffdd0454f2ee2b0523b18e-rootfs.mount: Deactivated successfully. Jan 13 22:00:57.412861 kubelet[2614]: I0113 22:00:57.412754 2614 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-jc6jp" podStartSLOduration=2.585441 podStartE2EDuration="8.41262698s" podCreationTimestamp="2025-01-13 22:00:49 +0000 UTC" firstStartedPulling="2025-01-13 22:00:49.715772312 +0000 UTC m=+14.630159372" lastFinishedPulling="2025-01-13 22:00:55.542958293 +0000 UTC m=+20.457345352" observedRunningTime="2025-01-13 22:00:57.412437772 +0000 UTC m=+22.326824891" watchObservedRunningTime="2025-01-13 22:00:57.41262698 +0000 UTC m=+22.327014089" Jan 13 22:00:57.624515 systemd-networkd[1375]: flannel.1: Link UP Jan 13 22:00:57.624531 systemd-networkd[1375]: flannel.1: Gained carrier Jan 13 22:00:59.543945 systemd-networkd[1375]: flannel.1: Gained IPv6LL Jan 13 22:01:07.269646 containerd[1461]: time="2025-01-13T22:01:07.268846645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjrjb,Uid:b40e5874-5773-4bf7-843e-ddf6caa00b08,Namespace:kube-system,Attempt:0,}" Jan 13 22:01:07.321230 systemd-networkd[1375]: cni0: Link UP Jan 13 22:01:07.321255 systemd-networkd[1375]: cni0: Gained carrier Jan 13 22:01:07.329265 systemd-networkd[1375]: cni0: Lost carrier Jan 13 22:01:07.344563 systemd-networkd[1375]: veth30060a8c: Link UP Jan 13 22:01:07.349281 kernel: cni0: port 1(veth30060a8c) entered blocking state Jan 13 22:01:07.349517 kernel: cni0: port 1(veth30060a8c) entered disabled state Jan 13 22:01:07.350395 kernel: veth30060a8c: entered allmulticast mode Jan 13 22:01:07.354845 kernel: veth30060a8c: 
entered promiscuous mode Jan 13 22:01:07.362794 kernel: cni0: port 1(veth30060a8c) entered blocking state Jan 13 22:01:07.362861 kernel: cni0: port 1(veth30060a8c) entered forwarding state Jan 13 22:01:07.362881 kernel: cni0: port 1(veth30060a8c) entered disabled state Jan 13 22:01:07.378893 kernel: cni0: port 1(veth30060a8c) entered blocking state Jan 13 22:01:07.378982 kernel: cni0: port 1(veth30060a8c) entered forwarding state Jan 13 22:01:07.379167 systemd-networkd[1375]: veth30060a8c: Gained carrier Jan 13 22:01:07.380052 systemd-networkd[1375]: cni0: Gained carrier Jan 13 22:01:07.381395 containerd[1461]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Jan 13 22:01:07.381395 containerd[1461]: delegateAdd: netconf sent to delegate plugin: Jan 13 22:01:07.400236 containerd[1461]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T22:01:07.399795854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:01:07.400879 containerd[1461]: time="2025-01-13T22:01:07.400658204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:01:07.400879 containerd[1461]: time="2025-01-13T22:01:07.400722191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:01:07.400879 containerd[1461]: time="2025-01-13T22:01:07.400820245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:01:07.429824 systemd[1]: Started cri-containerd-39959583dc86368e06b3fb6571d7d07c45b3872bb50f19b2ff973610c4db241e.scope - libcontainer container 39959583dc86368e06b3fb6571d7d07c45b3872bb50f19b2ff973610c4db241e. Jan 13 22:01:07.465496 containerd[1461]: time="2025-01-13T22:01:07.465432494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjrjb,Uid:b40e5874-5773-4bf7-843e-ddf6caa00b08,Namespace:kube-system,Attempt:0,} returns sandbox id \"39959583dc86368e06b3fb6571d7d07c45b3872bb50f19b2ff973610c4db241e\"" Jan 13 22:01:07.469510 containerd[1461]: time="2025-01-13T22:01:07.469372036Z" level=info msg="CreateContainer within sandbox \"39959583dc86368e06b3fb6571d7d07c45b3872bb50f19b2ff973610c4db241e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 22:01:07.505322 containerd[1461]: time="2025-01-13T22:01:07.505260615Z" level=info msg="CreateContainer within sandbox \"39959583dc86368e06b3fb6571d7d07c45b3872bb50f19b2ff973610c4db241e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79434d486f9529157e5edbdd9316d8a92506d72b46380c8241300d1664e5a15f\"" Jan 13 22:01:07.505974 containerd[1461]: time="2025-01-13T22:01:07.505901616Z" level=info msg="StartContainer for \"79434d486f9529157e5edbdd9316d8a92506d72b46380c8241300d1664e5a15f\"" Jan 13 22:01:07.534815 systemd[1]: Started cri-containerd-79434d486f9529157e5edbdd9316d8a92506d72b46380c8241300d1664e5a15f.scope - libcontainer container 
79434d486f9529157e5edbdd9316d8a92506d72b46380c8241300d1664e5a15f. Jan 13 22:01:07.564469 containerd[1461]: time="2025-01-13T22:01:07.564203742Z" level=info msg="StartContainer for \"79434d486f9529157e5edbdd9316d8a92506d72b46380c8241300d1664e5a15f\" returns successfully" Jan 13 22:01:08.269245 containerd[1461]: time="2025-01-13T22:01:08.269075565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n59dq,Uid:cae0f72e-ee32-4614-be10-c8fd88414007,Namespace:kube-system,Attempt:0,}" Jan 13 22:01:08.297871 systemd[1]: run-containerd-runc-k8s.io-39959583dc86368e06b3fb6571d7d07c45b3872bb50f19b2ff973610c4db241e-runc.HXSYdP.mount: Deactivated successfully. Jan 13 22:01:08.322727 systemd-networkd[1375]: veth7cd4977f: Link UP Jan 13 22:01:08.330531 kernel: cni0: port 2(veth7cd4977f) entered blocking state Jan 13 22:01:08.330633 kernel: cni0: port 2(veth7cd4977f) entered disabled state Jan 13 22:01:08.330742 kernel: veth7cd4977f: entered allmulticast mode Jan 13 22:01:08.343100 kernel: veth7cd4977f: entered promiscuous mode Jan 13 22:01:08.343226 kernel: cni0: port 2(veth7cd4977f) entered blocking state Jan 13 22:01:08.343263 kernel: cni0: port 2(veth7cd4977f) entered forwarding state Jan 13 22:01:08.343300 kernel: cni0: port 2(veth7cd4977f) entered disabled state Jan 13 22:01:08.356391 kernel: cni0: port 2(veth7cd4977f) entered blocking state Jan 13 22:01:08.356546 kernel: cni0: port 2(veth7cd4977f) entered forwarding state Jan 13 22:01:08.356503 systemd-networkd[1375]: veth7cd4977f: Gained carrier Jan 13 22:01:08.360319 containerd[1461]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, 
"isGateway":true, "mtu":(*uint)(0xc00001a938), "name":"cbr0", "type":"bridge"} Jan 13 22:01:08.360319 containerd[1461]: delegateAdd: netconf sent to delegate plugin: Jan 13 22:01:08.385305 containerd[1461]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T22:01:08.385088589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:01:08.385305 containerd[1461]: time="2025-01-13T22:01:08.385142997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:01:08.385305 containerd[1461]: time="2025-01-13T22:01:08.385175181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:01:08.385305 containerd[1461]: time="2025-01-13T22:01:08.385257844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:01:08.404914 systemd[1]: run-containerd-runc-k8s.io-960181a9b6e7a35222a0dfa93e5142e63ac72db49991a0c73e2c2f8f2c5e10c0-runc.kswpdO.mount: Deactivated successfully. Jan 13 22:01:08.411822 systemd[1]: Started cri-containerd-960181a9b6e7a35222a0dfa93e5142e63ac72db49991a0c73e2c2f8f2c5e10c0.scope - libcontainer container 960181a9b6e7a35222a0dfa93e5142e63ac72db49991a0c73e2c2f8f2c5e10c0. 
Jan 13 22:01:08.472408 kubelet[2614]: I0113 22:01:08.470644 2614 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zjrjb" podStartSLOduration=19.470597806 podStartE2EDuration="19.470597806s" podCreationTimestamp="2025-01-13 22:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:01:08.441425117 +0000 UTC m=+33.355812186" watchObservedRunningTime="2025-01-13 22:01:08.470597806 +0000 UTC m=+33.384984865" Jan 13 22:01:08.476508 containerd[1461]: time="2025-01-13T22:01:08.476445286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n59dq,Uid:cae0f72e-ee32-4614-be10-c8fd88414007,Namespace:kube-system,Attempt:0,} returns sandbox id \"960181a9b6e7a35222a0dfa93e5142e63ac72db49991a0c73e2c2f8f2c5e10c0\"" Jan 13 22:01:08.481317 containerd[1461]: time="2025-01-13T22:01:08.481281210Z" level=info msg="CreateContainer within sandbox \"960181a9b6e7a35222a0dfa93e5142e63ac72db49991a0c73e2c2f8f2c5e10c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 22:01:08.502776 containerd[1461]: time="2025-01-13T22:01:08.502730342Z" level=info msg="CreateContainer within sandbox \"960181a9b6e7a35222a0dfa93e5142e63ac72db49991a0c73e2c2f8f2c5e10c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"761856da432670a81923e5e7e326ac5b715dd73b7b2c597159d3e853e28ff7d7\"" Jan 13 22:01:08.504172 containerd[1461]: time="2025-01-13T22:01:08.503299650Z" level=info msg="StartContainer for \"761856da432670a81923e5e7e326ac5b715dd73b7b2c597159d3e853e28ff7d7\"" Jan 13 22:01:08.528837 systemd[1]: Started cri-containerd-761856da432670a81923e5e7e326ac5b715dd73b7b2c597159d3e853e28ff7d7.scope - libcontainer container 761856da432670a81923e5e7e326ac5b715dd73b7b2c597159d3e853e28ff7d7. 
Jan 13 22:01:08.555395 containerd[1461]: time="2025-01-13T22:01:08.555344051Z" level=info msg="StartContainer for \"761856da432670a81923e5e7e326ac5b715dd73b7b2c597159d3e853e28ff7d7\" returns successfully"
Jan 13 22:01:08.695868 systemd-networkd[1375]: veth30060a8c: Gained IPv6LL
Jan 13 22:01:09.143930 systemd-networkd[1375]: cni0: Gained IPv6LL
Jan 13 22:01:09.456770 kubelet[2614]: I0113 22:01:09.456548 2614 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-n59dq" podStartSLOduration=20.456460511 podStartE2EDuration="20.456460511s" podCreationTimestamp="2025-01-13 22:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:01:09.453666487 +0000 UTC m=+34.368053606" watchObservedRunningTime="2025-01-13 22:01:09.456460511 +0000 UTC m=+34.370847620"
Jan 13 22:01:09.528033 systemd-networkd[1375]: veth7cd4977f: Gained IPv6LL
Jan 13 22:01:58.720260 systemd[1]: Started sshd@7-172.24.4.53:22-172.24.4.1:58348.service - OpenSSH per-connection server daemon (172.24.4.1:58348).
Jan 13 22:01:59.970924 sshd[3715]: Accepted publickey for core from 172.24.4.1 port 58348 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:01:59.973761 sshd[3715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:01:59.984143 systemd-logind[1439]: New session 10 of user core.
Jan 13 22:01:59.995992 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 22:02:00.749092 sshd[3715]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:00.756921 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit.
Jan 13 22:02:00.758749 systemd[1]: sshd@7-172.24.4.53:22-172.24.4.1:58348.service: Deactivated successfully.
Jan 13 22:02:00.762063 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 22:02:00.766143 systemd-logind[1439]: Removed session 10.
Jan 13 22:02:05.771248 systemd[1]: Started sshd@8-172.24.4.53:22-172.24.4.1:41382.service - OpenSSH per-connection server daemon (172.24.4.1:41382).
Jan 13 22:02:06.930078 sshd[3750]: Accepted publickey for core from 172.24.4.1 port 41382 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:06.933285 sshd[3750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:06.943458 systemd-logind[1439]: New session 11 of user core.
Jan 13 22:02:06.949439 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 22:02:07.628418 sshd[3750]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:07.634227 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit.
Jan 13 22:02:07.634603 systemd[1]: sshd@8-172.24.4.53:22-172.24.4.1:41382.service: Deactivated successfully.
Jan 13 22:02:07.638429 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 22:02:07.642461 systemd-logind[1439]: Removed session 11.
Jan 13 22:02:12.652234 systemd[1]: Started sshd@9-172.24.4.53:22-172.24.4.1:41396.service - OpenSSH per-connection server daemon (172.24.4.1:41396).
Jan 13 22:02:13.772960 sshd[3784]: Accepted publickey for core from 172.24.4.1 port 41396 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:13.776302 sshd[3784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:13.788192 systemd-logind[1439]: New session 12 of user core.
Jan 13 22:02:13.798137 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 22:02:14.361773 sshd[3784]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:14.367611 systemd[1]: sshd@9-172.24.4.53:22-172.24.4.1:41396.service: Deactivated successfully.
Jan 13 22:02:14.369253 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 22:02:14.371590 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit.
Jan 13 22:02:14.378029 systemd[1]: Started sshd@10-172.24.4.53:22-172.24.4.1:60320.service - OpenSSH per-connection server daemon (172.24.4.1:60320).
Jan 13 22:02:14.380973 systemd-logind[1439]: Removed session 12.
Jan 13 22:02:15.591461 sshd[3819]: Accepted publickey for core from 172.24.4.1 port 60320 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:15.594514 sshd[3819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:15.604653 systemd-logind[1439]: New session 13 of user core.
Jan 13 22:02:15.616973 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 22:02:16.478064 sshd[3819]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:16.492039 systemd[1]: sshd@10-172.24.4.53:22-172.24.4.1:60320.service: Deactivated successfully.
Jan 13 22:02:16.496472 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 22:02:16.501190 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit.
Jan 13 22:02:16.511048 systemd[1]: Started sshd@11-172.24.4.53:22-172.24.4.1:60322.service - OpenSSH per-connection server daemon (172.24.4.1:60322).
Jan 13 22:02:16.518553 systemd-logind[1439]: Removed session 13.
Jan 13 22:02:17.764081 sshd[3830]: Accepted publickey for core from 172.24.4.1 port 60322 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:17.767878 sshd[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:17.779447 systemd-logind[1439]: New session 14 of user core.
Jan 13 22:02:17.784064 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 22:02:18.383035 sshd[3830]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:18.389846 systemd[1]: sshd@11-172.24.4.53:22-172.24.4.1:60322.service: Deactivated successfully.
Jan 13 22:02:18.397022 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 22:02:18.404743 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit.
Jan 13 22:02:18.410074 systemd-logind[1439]: Removed session 14.
Jan 13 22:02:23.403324 systemd[1]: Started sshd@12-172.24.4.53:22-172.24.4.1:60324.service - OpenSSH per-connection server daemon (172.24.4.1:60324).
Jan 13 22:02:24.792056 sshd[3872]: Accepted publickey for core from 172.24.4.1 port 60324 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:24.794771 sshd[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:24.805808 systemd-logind[1439]: New session 15 of user core.
Jan 13 22:02:24.815987 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 22:02:25.384136 sshd[3872]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:25.390992 systemd[1]: sshd@12-172.24.4.53:22-172.24.4.1:60324.service: Deactivated successfully.
Jan 13 22:02:25.392526 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 22:02:25.394110 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit.
Jan 13 22:02:25.402337 systemd[1]: Started sshd@13-172.24.4.53:22-172.24.4.1:52752.service - OpenSSH per-connection server daemon (172.24.4.1:52752).
Jan 13 22:02:25.406814 systemd-logind[1439]: Removed session 15.
Jan 13 22:02:26.649725 sshd[3900]: Accepted publickey for core from 172.24.4.1 port 52752 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:26.652601 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:26.661736 systemd-logind[1439]: New session 16 of user core.
Jan 13 22:02:26.676037 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 22:02:27.416356 sshd[3900]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:27.428215 systemd[1]: sshd@13-172.24.4.53:22-172.24.4.1:52752.service: Deactivated successfully.
Jan 13 22:02:27.432276 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 22:02:27.437437 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit.
Jan 13 22:02:27.443339 systemd[1]: Started sshd@14-172.24.4.53:22-172.24.4.1:52760.service - OpenSSH per-connection server daemon (172.24.4.1:52760).
Jan 13 22:02:27.448803 systemd-logind[1439]: Removed session 16.
Jan 13 22:02:28.680258 sshd[3911]: Accepted publickey for core from 172.24.4.1 port 52760 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:28.682530 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:28.690300 systemd-logind[1439]: New session 17 of user core.
Jan 13 22:02:28.697964 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 22:02:31.025912 sshd[3911]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:31.039576 systemd[1]: sshd@14-172.24.4.53:22-172.24.4.1:52760.service: Deactivated successfully.
Jan 13 22:02:31.043544 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 22:02:31.047215 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit.
Jan 13 22:02:31.055311 systemd[1]: Started sshd@15-172.24.4.53:22-172.24.4.1:52770.service - OpenSSH per-connection server daemon (172.24.4.1:52770).
Jan 13 22:02:31.059646 systemd-logind[1439]: Removed session 17.
Jan 13 22:02:32.545453 sshd[3950]: Accepted publickey for core from 172.24.4.1 port 52770 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:32.548197 sshd[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:32.558877 systemd-logind[1439]: New session 18 of user core.
Jan 13 22:02:32.563050 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 22:02:33.662031 sshd[3950]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:33.674345 systemd[1]: sshd@15-172.24.4.53:22-172.24.4.1:52770.service: Deactivated successfully.
Jan 13 22:02:33.679387 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 22:02:33.685325 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit.
Jan 13 22:02:33.695350 systemd[1]: Started sshd@16-172.24.4.53:22-172.24.4.1:41404.service - OpenSSH per-connection server daemon (172.24.4.1:41404).
Jan 13 22:02:33.700012 systemd-logind[1439]: Removed session 18.
Jan 13 22:02:35.103320 sshd[3982]: Accepted publickey for core from 172.24.4.1 port 41404 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:35.105893 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:35.117608 systemd-logind[1439]: New session 19 of user core.
Jan 13 22:02:35.122978 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 22:02:35.763070 sshd[3982]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:35.770064 systemd[1]: sshd@16-172.24.4.53:22-172.24.4.1:41404.service: Deactivated successfully.
Jan 13 22:02:35.776399 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 22:02:35.783522 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit.
Jan 13 22:02:35.788732 systemd-logind[1439]: Removed session 19.
Jan 13 22:02:40.786437 systemd[1]: Started sshd@17-172.24.4.53:22-172.24.4.1:41416.service - OpenSSH per-connection server daemon (172.24.4.1:41416).
Jan 13 22:02:42.354999 sshd[4020]: Accepted publickey for core from 172.24.4.1 port 41416 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:42.357891 sshd[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:42.367343 systemd-logind[1439]: New session 20 of user core.
Jan 13 22:02:42.375089 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 22:02:43.135178 sshd[4020]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:43.141139 systemd[1]: sshd@17-172.24.4.53:22-172.24.4.1:41416.service: Deactivated successfully.
Jan 13 22:02:43.144804 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 22:02:43.149457 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit.
Jan 13 22:02:43.151798 systemd-logind[1439]: Removed session 20.
Jan 13 22:02:48.157339 systemd[1]: Started sshd@18-172.24.4.53:22-172.24.4.1:35812.service - OpenSSH per-connection server daemon (172.24.4.1:35812).
Jan 13 22:02:49.223750 sshd[4060]: Accepted publickey for core from 172.24.4.1 port 35812 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:49.225059 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:49.234525 systemd-logind[1439]: New session 21 of user core.
Jan 13 22:02:49.246067 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 22:02:50.106098 sshd[4060]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:50.114567 systemd[1]: sshd@18-172.24.4.53:22-172.24.4.1:35812.service: Deactivated successfully.
Jan 13 22:02:50.121480 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 22:02:50.125000 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit.
Jan 13 22:02:50.127221 systemd-logind[1439]: Removed session 21.
Jan 13 22:02:55.127227 systemd[1]: Started sshd@19-172.24.4.53:22-172.24.4.1:45822.service - OpenSSH per-connection server daemon (172.24.4.1:45822).
Jan 13 22:02:56.702942 sshd[4110]: Accepted publickey for core from 172.24.4.1 port 45822 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 22:02:56.705412 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:02:56.715540 systemd-logind[1439]: New session 22 of user core.
Jan 13 22:02:56.720972 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 22:02:57.496158 sshd[4110]: pam_unix(sshd:session): session closed for user core
Jan 13 22:02:57.503228 systemd[1]: sshd@19-172.24.4.53:22-172.24.4.1:45822.service: Deactivated successfully.
Jan 13 22:02:57.506486 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 22:02:57.509259 systemd-logind[1439]: Session 22 logged out. Waiting for processes to exit.
Jan 13 22:02:57.512105 systemd-logind[1439]: Removed session 22.