Jan 30 13:44:20.864416 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:44:20.864435 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:44:20.864446 kernel: BIOS-provided physical RAM map:
Jan 30 13:44:20.864452 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 13:44:20.864458 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 30 13:44:20.864464 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 30 13:44:20.864471 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 30 13:44:20.864477 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 30 13:44:20.864483 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 30 13:44:20.864490 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 30 13:44:20.864498 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 30 13:44:20.864510 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 30 13:44:20.864516 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 30 13:44:20.864523 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 30 13:44:20.864530 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 30 13:44:20.864537 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 30 13:44:20.864546 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 30 13:44:20.864553 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 30 13:44:20.864559 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 30 13:44:20.864565 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 13:44:20.864572 kernel: NX (Execute Disable) protection: active
Jan 30 13:44:20.864578 kernel: APIC: Static calls initialized
Jan 30 13:44:20.864585 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:44:20.864592 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jan 30 13:44:20.864598 kernel: SMBIOS 2.8 present.
Jan 30 13:44:20.864605 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 30 13:44:20.864611 kernel: Hypervisor detected: KVM
Jan 30 13:44:20.864620 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:44:20.864626 kernel: kvm-clock: using sched offset of 3868073112 cycles
Jan 30 13:44:20.864633 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:44:20.864640 kernel: tsc: Detected 2794.750 MHz processor
Jan 30 13:44:20.864647 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:44:20.864654 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:44:20.864661 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 30 13:44:20.864685 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 30 13:44:20.864692 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:44:20.864701 kernel: Using GB pages for direct mapping
Jan 30 13:44:20.864708 kernel: Secure boot disabled
Jan 30 13:44:20.864714 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:44:20.864721 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 30 13:44:20.864732 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:44:20.864739 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:44:20.864746 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:44:20.864756 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 30 13:44:20.864763 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:44:20.864770 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:44:20.864777 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:44:20.864784 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:44:20.864792 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 13:44:20.864799 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 30 13:44:20.864808 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 30 13:44:20.864815 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 30 13:44:20.864822 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 30 13:44:20.864829 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 30 13:44:20.864836 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 30 13:44:20.864843 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 30 13:44:20.864850 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 30 13:44:20.864856 kernel: No NUMA configuration found
Jan 30 13:44:20.864863 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 30 13:44:20.864872 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 30 13:44:20.864880 kernel: Zone ranges:
Jan 30 13:44:20.864887 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:44:20.864894 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 30 13:44:20.864900 kernel: Normal empty
Jan 30 13:44:20.864907 kernel: Movable zone start for each node
Jan 30 13:44:20.864914 kernel: Early memory node ranges
Jan 30 13:44:20.864921 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 30 13:44:20.864928 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 30 13:44:20.864935 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 30 13:44:20.864944 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 30 13:44:20.864951 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 30 13:44:20.864958 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 30 13:44:20.864965 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 30 13:44:20.864972 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:44:20.864979 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 30 13:44:20.864986 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 30 13:44:20.864993 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:44:20.865000 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 30 13:44:20.865009 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 30 13:44:20.865016 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 30 13:44:20.865023 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:44:20.865031 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:44:20.865038 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:44:20.865045 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:44:20.865052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:44:20.865059 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:44:20.865066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:44:20.865075 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:44:20.865082 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:44:20.865089 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:44:20.865096 kernel: TSC deadline timer available
Jan 30 13:44:20.865103 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 30 13:44:20.865110 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:44:20.865117 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 30 13:44:20.865124 kernel: kvm-guest: setup PV sched yield
Jan 30 13:44:20.865131 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 30 13:44:20.865138 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:44:20.865147 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:44:20.865154 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 30 13:44:20.865162 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 30 13:44:20.865169 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 30 13:44:20.865176 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 30 13:44:20.865182 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:44:20.865190 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:44:20.865198 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:44:20.865208 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:44:20.865215 kernel: random: crng init done
Jan 30 13:44:20.865222 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:44:20.865229 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:44:20.865236 kernel: Fallback order for Node 0: 0
Jan 30 13:44:20.865243 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 30 13:44:20.865250 kernel: Policy zone: DMA32
Jan 30 13:44:20.865257 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:44:20.865264 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Jan 30 13:44:20.865274 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:44:20.865281 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:44:20.865288 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:44:20.865295 kernel: Dynamic Preempt: voluntary
Jan 30 13:44:20.865309 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:44:20.865319 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:44:20.865326 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:44:20.865334 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:44:20.865341 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:44:20.865349 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:44:20.865356 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:44:20.865363 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:44:20.865373 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 30 13:44:20.865380 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:44:20.865388 kernel: Console: colour dummy device 80x25
Jan 30 13:44:20.865395 kernel: printk: console [ttyS0] enabled
Jan 30 13:44:20.865402 kernel: ACPI: Core revision 20230628
Jan 30 13:44:20.865412 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:44:20.865419 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:44:20.865426 kernel: x2apic enabled
Jan 30 13:44:20.865434 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:44:20.865441 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 30 13:44:20.865449 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 30 13:44:20.865456 kernel: kvm-guest: setup PV IPIs
Jan 30 13:44:20.865463 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:44:20.865471 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 13:44:20.865480 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 30 13:44:20.865488 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 13:44:20.865495 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 13:44:20.865508 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 13:44:20.865516 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:44:20.865523 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:44:20.865530 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:44:20.865538 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:44:20.865545 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 13:44:20.865555 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 13:44:20.865562 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:44:20.865570 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:44:20.865577 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 13:44:20.865585 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 13:44:20.865593 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 13:44:20.865600 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:44:20.865607 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:44:20.865617 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:44:20.865624 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:44:20.865632 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 13:44:20.865640 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:44:20.865647 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:44:20.865654 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:44:20.865662 kernel: landlock: Up and running.
Jan 30 13:44:20.865679 kernel: SELinux: Initializing.
Jan 30 13:44:20.865687 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:44:20.865697 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:44:20.865705 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 13:44:20.865712 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:44:20.865720 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:44:20.865727 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:44:20.865735 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 13:44:20.865742 kernel: ... version: 0
Jan 30 13:44:20.865750 kernel: ... bit width: 48
Jan 30 13:44:20.865757 kernel: ... generic registers: 6
Jan 30 13:44:20.865767 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:44:20.865774 kernel: ... max period: 00007fffffffffff
Jan 30 13:44:20.865781 kernel: ... fixed-purpose events: 0
Jan 30 13:44:20.865789 kernel: ... event mask: 000000000000003f
Jan 30 13:44:20.865796 kernel: signal: max sigframe size: 1776
Jan 30 13:44:20.865803 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:44:20.865811 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:44:20.865818 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:44:20.865826 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:44:20.865835 kernel: .... node #0, CPUs: #1 #2 #3
Jan 30 13:44:20.865842 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:44:20.865850 kernel: smpboot: Max logical packages: 1
Jan 30 13:44:20.865857 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 30 13:44:20.865864 kernel: devtmpfs: initialized
Jan 30 13:44:20.865872 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:44:20.865879 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 30 13:44:20.865887 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 30 13:44:20.865894 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 30 13:44:20.865904 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 30 13:44:20.865911 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 30 13:44:20.865919 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:44:20.865926 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:44:20.865934 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:44:20.865941 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:44:20.865949 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:44:20.865964 kernel: audit: type=2000 audit(1738244660.140:1): state=initialized audit_enabled=0 res=1
Jan 30 13:44:20.865972 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:44:20.865982 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:44:20.865989 kernel: cpuidle: using governor menu
Jan 30 13:44:20.866003 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:44:20.866017 kernel: dca service started, version 1.12.1
Jan 30 13:44:20.866037 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 13:44:20.866046 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 13:44:20.866059 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:44:20.866074 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:44:20.866081 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:44:20.866091 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:44:20.866098 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:44:20.866106 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:44:20.866113 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:44:20.866120 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:44:20.866128 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:44:20.866135 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:44:20.866143 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:44:20.866150 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:44:20.866160 kernel: ACPI: Interpreter enabled
Jan 30 13:44:20.866167 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 13:44:20.866174 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:44:20.866182 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:44:20.866189 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:44:20.866197 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 13:44:20.866204 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:44:20.866383 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:44:20.866523 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 13:44:20.866645 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 13:44:20.866655 kernel: PCI host bridge to bus 0000:00
Jan 30 13:44:20.866792 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:44:20.866904 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:44:20.867014 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:44:20.867122 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 30 13:44:20.867309 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 13:44:20.867441 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 30 13:44:20.867594 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:44:20.867753 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 13:44:20.867891 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 30 13:44:20.868015 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 30 13:44:20.868139 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 30 13:44:20.868258 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 30 13:44:20.868376 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 30 13:44:20.868496 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:44:20.868635 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:44:20.868828 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 30 13:44:20.868957 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 30 13:44:20.869082 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 30 13:44:20.869210 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:44:20.869330 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 30 13:44:20.869450 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 30 13:44:20.869809 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 30 13:44:20.869988 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:44:20.870114 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 30 13:44:20.870253 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 30 13:44:20.870378 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 30 13:44:20.870529 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 30 13:44:20.870684 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 13:44:20.870811 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 13:44:20.870940 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 13:44:20.871061 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 30 13:44:20.871188 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 30 13:44:20.871318 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 13:44:20.871439 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 30 13:44:20.871449 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:44:20.871457 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:44:20.871465 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:44:20.871473 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:44:20.871485 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 13:44:20.871493 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 13:44:20.871500 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 13:44:20.871515 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 13:44:20.871523 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 13:44:20.871531 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 13:44:20.871539 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 13:44:20.871547 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 13:44:20.871555 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 13:44:20.871565 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 13:44:20.871573 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 13:44:20.871580 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 13:44:20.871588 kernel: iommu: Default domain type: Translated
Jan 30 13:44:20.871596 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:44:20.871603 kernel: efivars: Registered efivars operations
Jan 30 13:44:20.871611 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:44:20.871619 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:44:20.871627 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 30 13:44:20.871638 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 30 13:44:20.871645 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 30 13:44:20.871653 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 30 13:44:20.871791 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 13:44:20.871914 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 13:44:20.872034 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:44:20.872045 kernel: vgaarb: loaded
Jan 30 13:44:20.872052 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:44:20.872060 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:44:20.872072 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:44:20.872080 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:44:20.872088 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:44:20.872096 kernel: pnp: PnP ACPI init
Jan 30 13:44:20.872234 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 13:44:20.872246 kernel: pnp: PnP ACPI: found 6 devices
Jan 30 13:44:20.872254 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:44:20.872262 kernel: NET: Registered PF_INET protocol family
Jan 30 13:44:20.872273 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:44:20.872281 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:44:20.872288 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:44:20.872296 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:44:20.872304 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:44:20.872312 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:44:20.872319 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:44:20.872327 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:44:20.872335 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:44:20.872345 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:44:20.872468 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 30 13:44:20.872600 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 30 13:44:20.872809 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:44:20.872925 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:44:20.873036 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:44:20.873147 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 30 13:44:20.873257 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 13:44:20.873372 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 30 13:44:20.873382 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:44:20.873390 kernel: Initialise system trusted keyrings
Jan 30 13:44:20.873398 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:44:20.873406 kernel: Key type asymmetric registered
Jan 30 13:44:20.873414 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:44:20.873421 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:44:20.873429 kernel: io scheduler mq-deadline registered
Jan 30 13:44:20.873441 kernel: io scheduler kyber registered
Jan 30 13:44:20.873451 kernel: io scheduler bfq registered
Jan 30 13:44:20.873461 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:44:20.873473 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 13:44:20.873484 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 13:44:20.873495 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 30 13:44:20.873518 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:44:20.873529 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:44:20.873539 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:44:20.873550 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:44:20.873565 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:44:20.873729 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 13:44:20.873860 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 13:44:20.873872 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:44:20.873984 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:44:20 UTC (1738244660)
Jan 30 13:44:20.874098 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 30 13:44:20.874108 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 13:44:20.874121 kernel: efifb: probing for efifb
Jan 30 13:44:20.874128 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 30 13:44:20.874136 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 30 13:44:20.874144 kernel: efifb: scrolling: redraw
Jan 30 13:44:20.874152 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 30 13:44:20.874160 kernel: Console: switching to colour frame buffer device 100x37
Jan 30 13:44:20.874188 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:44:20.874199 kernel: pstore: Using crash dump compression: deflate
Jan 30 13:44:20.874207 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 13:44:20.874218 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:44:20.874226 kernel: Segment Routing with IPv6
Jan 30 13:44:20.874234 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:44:20.874242 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:44:20.874250 kernel: Key type dns_resolver registered
Jan 30 13:44:20.874258 kernel: IPI shorthand broadcast: enabled
Jan 30 13:44:20.874266 kernel: sched_clock: Marking stable (560002364, 114099276)->(716938259, -42836619)
Jan 30 13:44:20.874274 kernel: registered taskstats version 1
Jan 30 13:44:20.874282 kernel: Loading compiled-in X.509 certificates
Jan 30 13:44:20.874291 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:44:20.874301 kernel: Key type .fscrypt registered
Jan 30 13:44:20.874309 kernel: Key type fscrypt-provisioning registered
Jan 30 13:44:20.874317 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:44:20.874325 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:44:20.874333 kernel: ima: No architecture policies found Jan 30 13:44:20.874340 kernel: clk: Disabling unused clocks Jan 30 13:44:20.874348 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:44:20.874356 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:44:20.874367 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:44:20.874375 kernel: Run /init as init process Jan 30 13:44:20.874382 kernel: with arguments: Jan 30 13:44:20.874390 kernel: /init Jan 30 13:44:20.874398 kernel: with environment: Jan 30 13:44:20.874406 kernel: HOME=/ Jan 30 13:44:20.874414 kernel: TERM=linux Jan 30 13:44:20.874424 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:44:20.874436 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:44:20.874449 systemd[1]: Detected virtualization kvm. Jan 30 13:44:20.874458 systemd[1]: Detected architecture x86-64. Jan 30 13:44:20.874466 systemd[1]: Running in initrd. Jan 30 13:44:20.874477 systemd[1]: No hostname configured, using default hostname. Jan 30 13:44:20.874488 systemd[1]: Hostname set to . Jan 30 13:44:20.874497 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:44:20.874514 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:44:20.874523 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:44:20.874532 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 30 13:44:20.874541 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:44:20.874550 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:44:20.874559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:44:20.874570 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:44:20.874581 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:44:20.874589 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:44:20.874598 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:44:20.874606 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:44:20.874615 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:44:20.874623 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:44:20.874634 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:44:20.874642 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:44:20.874651 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:44:20.874659 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:44:20.874743 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:44:20.874751 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:44:20.874760 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:44:20.874768 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:44:20.874780 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 30 13:44:20.874788 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:44:20.874797 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:44:20.874805 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:44:20.874813 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:44:20.874822 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:44:20.874830 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:44:20.874839 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:44:20.874847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:44:20.874858 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:44:20.874866 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:44:20.874875 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:44:20.874908 systemd-journald[193]: Collecting audit messages is disabled. Jan 30 13:44:20.874932 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:44:20.874941 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:44:20.874950 systemd-journald[193]: Journal started Jan 30 13:44:20.874972 systemd-journald[193]: Runtime Journal (/run/log/journal/d38dac673f2c4b429b2a8d82bc001334) is 6.0M, max 48.3M, 42.2M free. Jan 30 13:44:20.867625 systemd-modules-load[194]: Inserted module 'overlay' Jan 30 13:44:20.876711 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:44:20.880579 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:44:20.884822 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 30 13:44:20.886233 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:44:20.890168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:44:20.898734 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:44:20.899328 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 30 13:44:20.900820 kernel: Bridge firewalling registered Jan 30 13:44:20.900463 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:44:20.902790 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:44:20.903420 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:44:20.904265 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:44:20.907435 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:44:20.911728 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:44:20.918810 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:44:20.927794 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:44:20.930021 dracut-cmdline[226]: dracut-dracut-053 Jan 30 13:44:20.931124 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:44:20.961160 systemd-resolved[231]: Positive Trust Anchors: Jan 30 13:44:20.961175 systemd-resolved[231]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:44:20.961207 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:44:20.963698 systemd-resolved[231]: Defaulting to hostname 'linux'. Jan 30 13:44:20.964716 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:44:20.970913 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:44:21.013700 kernel: SCSI subsystem initialized Jan 30 13:44:21.022692 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:44:21.033701 kernel: iscsi: registered transport (tcp) Jan 30 13:44:21.053804 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:44:21.053836 kernel: QLogic iSCSI HBA Driver Jan 30 13:44:21.105367 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:44:21.114784 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:44:21.139767 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 30 13:44:21.139826 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:44:21.140795 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:44:21.181716 kernel: raid6: avx2x4 gen() 30500 MB/s Jan 30 13:44:21.198695 kernel: raid6: avx2x2 gen() 31059 MB/s Jan 30 13:44:21.215760 kernel: raid6: avx2x1 gen() 26138 MB/s Jan 30 13:44:21.215780 kernel: raid6: using algorithm avx2x2 gen() 31059 MB/s Jan 30 13:44:21.233776 kernel: raid6: .... xor() 19998 MB/s, rmw enabled Jan 30 13:44:21.233807 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:44:21.253693 kernel: xor: automatically using best checksumming function avx Jan 30 13:44:21.403709 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:44:21.417852 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:44:21.430818 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:44:21.443804 systemd-udevd[412]: Using default interface naming scheme 'v255'. Jan 30 13:44:21.448388 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:44:21.451435 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:44:21.469946 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Jan 30 13:44:21.502953 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:44:21.520855 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:44:21.581095 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:44:21.591861 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:44:21.604781 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:44:21.607576 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 30 13:44:21.610328 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:44:21.612769 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:44:21.617822 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 30 13:44:21.651096 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:44:21.651114 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:44:21.651124 kernel: AES CTR mode by8 optimization enabled Jan 30 13:44:21.651135 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 30 13:44:21.651287 kernel: libata version 3.00 loaded. Jan 30 13:44:21.651299 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:44:21.651309 kernel: GPT:9289727 != 19775487 Jan 30 13:44:21.651319 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:44:21.651329 kernel: GPT:9289727 != 19775487 Jan 30 13:44:21.651338 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:44:21.651348 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:44:21.623970 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:44:21.639702 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:44:21.651414 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:44:21.651548 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:44:21.653223 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:44:21.659582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:44:21.659763 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:44:21.660236 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 13:44:21.668001 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 13:44:21.701430 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 13:44:21.701446 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 13:44:21.701612 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 13:44:21.701788 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460) Jan 30 13:44:21.701799 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (458) Jan 30 13:44:21.701810 kernel: scsi host0: ahci Jan 30 13:44:21.701964 kernel: scsi host1: ahci Jan 30 13:44:21.702113 kernel: scsi host2: ahci Jan 30 13:44:21.702271 kernel: scsi host3: ahci Jan 30 13:44:21.702451 kernel: scsi host4: ahci Jan 30 13:44:21.702608 kernel: scsi host5: ahci Jan 30 13:44:21.702818 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 30 13:44:21.702830 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 30 13:44:21.702840 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 30 13:44:21.702851 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 30 13:44:21.702861 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 30 13:44:21.702875 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 30 13:44:21.671924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:44:21.693859 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:44:21.711875 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 13:44:21.717220 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 30 13:44:21.723157 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 13:44:21.724792 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:44:21.737790 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:44:21.739186 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:44:21.739237 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:44:21.748544 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:44:21.742243 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:44:21.750328 disk-uuid[567]: Primary Header is updated. Jan 30 13:44:21.750328 disk-uuid[567]: Secondary Entries is updated. Jan 30 13:44:21.750328 disk-uuid[567]: Secondary Header is updated. Jan 30 13:44:21.754724 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:44:21.743116 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:44:21.761703 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:44:21.776806 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:44:21.795619 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 13:44:22.011487 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 13:44:22.011537 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 13:44:22.011548 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 13:44:22.011565 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 30 13:44:22.012690 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 13:44:22.013698 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 30 13:44:22.014696 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 30 13:44:22.014711 kernel: ata3.00: applying bridge limits Jan 30 13:44:22.015690 kernel: ata3.00: configured for UDMA/100 Jan 30 13:44:22.017697 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 13:44:22.057229 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 30 13:44:22.072325 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:44:22.072341 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 30 13:44:22.753696 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:44:22.753832 disk-uuid[569]: The operation has completed successfully. Jan 30 13:44:22.777849 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:44:22.777973 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:44:22.806821 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:44:22.810248 sh[598]: Success Jan 30 13:44:22.822688 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 30 13:44:22.855253 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:44:22.868211 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:44:22.873081 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 13:44:22.882255 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:44:22.882302 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:44:22.882315 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:44:22.883415 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:44:22.884840 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:44:22.889165 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:44:22.890082 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:44:22.899849 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:44:22.901994 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:44:22.911234 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:44:22.911271 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:44:22.911286 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:44:22.914709 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:44:22.924018 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:44:22.925688 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:44:22.936661 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:44:22.941861 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 30 13:44:22.996614 ignition[684]: Ignition 2.19.0 Jan 30 13:44:22.996626 ignition[684]: Stage: fetch-offline Jan 30 13:44:22.996689 ignition[684]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:44:22.996701 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:44:22.996791 ignition[684]: parsed url from cmdline: "" Jan 30 13:44:22.996795 ignition[684]: no config URL provided Jan 30 13:44:22.996801 ignition[684]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:44:22.996810 ignition[684]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:44:22.996838 ignition[684]: op(1): [started] loading QEMU firmware config module Jan 30 13:44:22.996844 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 13:44:23.003060 ignition[684]: op(1): [finished] loading QEMU firmware config module Jan 30 13:44:23.018302 ignition[684]: parsing config with SHA512: 75b329a6b866066c82947b8476a9bf5ce12639e699bea1e7091d39550cf3556bc27a512f71ce2803ebdaefabc620f00961aeff36bf6f22f4446bf6f3c8a3dc7c Jan 30 13:44:23.021821 unknown[684]: fetched base config from "system" Jan 30 13:44:23.022120 unknown[684]: fetched user config from "qemu" Jan 30 13:44:23.022465 ignition[684]: fetch-offline: fetch-offline passed Jan 30 13:44:23.022525 ignition[684]: Ignition finished successfully Jan 30 13:44:23.024811 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:44:23.034871 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:44:23.046844 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:44:23.067561 systemd-networkd[787]: lo: Link UP Jan 30 13:44:23.067570 systemd-networkd[787]: lo: Gained carrier Jan 30 13:44:23.069146 systemd-networkd[787]: Enumeration completed Jan 30 13:44:23.069241 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 30 13:44:23.069530 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:44:23.069534 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:44:23.070423 systemd-networkd[787]: eth0: Link UP Jan 30 13:44:23.070427 systemd-networkd[787]: eth0: Gained carrier Jan 30 13:44:23.070433 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:44:23.072533 systemd[1]: Reached target network.target - Network. Jan 30 13:44:23.075663 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:44:23.081804 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:44:23.084726 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:44:23.093909 ignition[790]: Ignition 2.19.0 Jan 30 13:44:23.093923 ignition[790]: Stage: kargs Jan 30 13:44:23.094085 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:44:23.094097 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:44:23.094901 ignition[790]: kargs: kargs passed Jan 30 13:44:23.094940 ignition[790]: Ignition finished successfully Jan 30 13:44:23.101597 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:44:23.109879 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 30 13:44:23.122596 ignition[799]: Ignition 2.19.0 Jan 30 13:44:23.122606 ignition[799]: Stage: disks Jan 30 13:44:23.122788 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:44:23.122799 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:44:23.123539 ignition[799]: disks: disks passed Jan 30 13:44:23.123583 ignition[799]: Ignition finished successfully Jan 30 13:44:23.128980 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:44:23.131057 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:44:23.131314 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:44:23.133406 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:44:23.133916 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:44:23.134246 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:44:23.150826 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:44:23.165364 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:44:23.171728 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:44:23.184771 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:44:23.268702 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:44:23.269089 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:44:23.270149 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:44:23.282785 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:44:23.284161 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:44:23.285138 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 30 13:44:23.285174 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:44:23.285194 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:44:23.292513 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:44:23.295555 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:44:23.303508 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (819) Jan 30 13:44:23.303541 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:44:23.303552 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:44:23.305197 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:44:23.308685 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:44:23.310350 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:44:23.335919 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:44:23.340040 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:44:23.345166 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:44:23.350062 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:44:23.433661 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:44:23.440769 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:44:23.442880 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:44:23.449692 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:44:23.466604 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 30 13:44:23.469559 ignition[932]: INFO : Ignition 2.19.0 Jan 30 13:44:23.469559 ignition[932]: INFO : Stage: mount Jan 30 13:44:23.471237 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:44:23.471237 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:44:23.474048 ignition[932]: INFO : mount: mount passed Jan 30 13:44:23.474817 ignition[932]: INFO : Ignition finished successfully Jan 30 13:44:23.477640 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:44:23.489744 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:44:23.881359 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:44:23.893794 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:44:23.899692 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (946) Jan 30 13:44:23.899720 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:44:23.902194 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:44:23.902212 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:44:23.904688 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:44:23.905835 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:44:23.934866 ignition[963]: INFO : Ignition 2.19.0
Jan 30 13:44:23.934866 ignition[963]: INFO : Stage: files
Jan 30 13:44:23.936828 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:44:23.936828 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:44:23.936828 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:44:23.936828 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:44:23.936828 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:44:23.943960 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:44:23.943960 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:44:23.943960 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:44:23.943960 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:44:23.943960 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 13:44:23.939712 unknown[963]: wrote ssh authorized keys file for user: core
Jan 30 13:44:23.982843 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:44:24.123820 systemd-networkd[787]: eth0: Gained IPv6LL
Jan 30 13:44:24.138289 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:44:24.140341 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:44:24.140341 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:44:24.140341 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:44:24.140341 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:44:24.140341 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:44:24.140341 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:44:24.140341 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:44:24.140341 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:44:24.140341 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:44:24.140341 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:44:24.140341 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 13:44:24.140341 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 13:44:24.140341 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 13:44:24.140341 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 30 13:44:24.657479 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 13:44:25.006807 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 13:44:25.006807 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 13:44:25.011149 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:44:25.011149 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:44:25.011149 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 13:44:25.011149 ignition[963]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 30 13:44:25.011149 ignition[963]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:44:25.011149 ignition[963]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:44:25.011149 ignition[963]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 30 13:44:25.011149 ignition[963]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:44:25.034390 ignition[963]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:44:25.039684 ignition[963]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:44:25.041237 ignition[963]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:44:25.041237 ignition[963]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:44:25.041237 ignition[963]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:44:25.041237 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:44:25.041237 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:44:25.041237 ignition[963]: INFO : files: files passed
Jan 30 13:44:25.041237 ignition[963]: INFO : Ignition finished successfully
Jan 30 13:44:25.042902 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:44:25.050869 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:44:25.053575 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:44:25.055465 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:44:25.055600 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:44:25.063445 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 13:44:25.066294 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:44:25.067952 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:44:25.069511 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:44:25.069174 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
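In the Ignition entries above, op(f) sets the preset for "coreos-metadata.service" to disabled (removing its enablement symlinks) while op(11) presets "prepare-helm.service" to enabled. The sketch below is not Ignition's or systemd's actual code; it only illustrates, under my reading of systemd's documented preset semantics, how ordered `enable`/`disable` glob directives map a unit name to a decision (first matching line wins, unmatched units default to enable).

```python
# Illustrative sketch of systemd-style preset resolution (an assumption
# based on systemd.preset(5), not code from this log's components).
from fnmatch import fnmatch

def apply_preset(preset_lines, unit):
    """Return 'enable' or 'disable' for `unit`; the first matching line wins."""
    for line in preset_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # preset files allow blank lines and comments
        action, pattern = line.split(None, 1)
        if fnmatch(unit, pattern):
            return action
    return "enable"  # documented default when no directive matches

# A policy mirroring the log: an explicit enable before a catch-all disable.
policy = ["enable prepare-helm.service", "disable *"]
```

With that policy, `prepare-helm.service` resolves to `enable` and everything else (including `coreos-metadata.service`) to `disable`, matching the op(f)/op(11) outcome logged above.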
Jan 30 13:44:25.071503 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:44:25.077835 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:44:25.107529 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:44:25.107717 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:44:25.110061 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:44:25.112099 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:44:25.114177 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:44:25.115209 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:44:25.135231 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:44:25.142821 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:44:25.154344 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:44:25.154694 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:44:25.157030 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:44:25.157338 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:44:25.157447 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:44:25.157874 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:44:25.158209 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:44:25.158590 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:44:25.167006 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:44:25.167601 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:44:25.170948 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:44:25.171498 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:44:25.172011 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:44:25.176442 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:44:25.176927 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:44:25.180327 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:44:25.180442 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:44:25.182427 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:44:25.182806 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:44:25.183227 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:44:25.183551 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:44:25.188621 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:44:25.188763 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:44:25.189472 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:44:25.189613 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:44:25.194108 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:44:25.196086 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:44:25.201351 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:44:25.202289 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:44:25.202562 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:44:25.203121 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:44:25.203213 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:44:25.203503 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:44:25.203586 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:44:25.209694 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:44:25.209811 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:44:25.210154 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:44:25.210253 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:44:25.226793 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:44:25.227203 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:44:25.227314 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:44:25.228309 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:44:25.231003 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:44:25.231163 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:44:25.233033 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:44:25.233170 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:44:25.239956 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:44:25.240112 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:44:25.255488 ignition[1018]: INFO : Ignition 2.19.0
Jan 30 13:44:25.255488 ignition[1018]: INFO : Stage: umount
Jan 30 13:44:25.257191 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:44:25.257191 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:44:25.257191 ignition[1018]: INFO : umount: umount passed
Jan 30 13:44:25.257191 ignition[1018]: INFO : Ignition finished successfully
Jan 30 13:44:25.260027 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:44:25.260728 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:44:25.260890 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:44:25.263016 systemd[1]: Stopped target network.target - Network.
Jan 30 13:44:25.264316 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:44:25.264377 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:44:25.266507 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:44:25.266554 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:44:25.268532 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:44:25.268580 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:44:25.271090 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:44:25.271140 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:44:25.273774 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:44:25.276228 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:44:25.281704 systemd-networkd[787]: eth0: DHCPv6 lease lost
Jan 30 13:44:25.283695 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:44:25.283851 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:44:25.286452 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:44:25.286503 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:44:25.296752 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:44:25.297173 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:44:25.297224 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:44:25.297638 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:44:25.298107 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:44:25.298226 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:44:25.301035 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:44:25.301117 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:44:25.303136 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:44:25.303185 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:44:25.305379 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:44:25.305437 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:44:25.317418 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:44:25.317617 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:44:25.320000 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:44:25.320124 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:44:25.322246 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:44:25.322325 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:44:25.324127 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:44:25.324167 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:44:25.326148 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:44:25.326198 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:44:25.328482 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:44:25.328541 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:44:25.330424 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:44:25.330474 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:44:25.348795 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:44:25.350019 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:44:25.350074 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:44:25.352491 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:44:25.352539 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:44:25.357075 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:44:25.357186 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:44:25.403698 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:44:25.403829 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:44:25.405795 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:44:25.406128 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:44:25.406173 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:44:25.419789 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:44:25.428376 systemd[1]: Switching root.
Jan 30 13:44:25.464186 systemd-journald[193]: Journal stopped
Jan 30 13:44:26.638750 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:44:26.638821 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:44:26.638835 kernel: SELinux: policy capability open_perms=1
Jan 30 13:44:26.638846 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:44:26.638861 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:44:26.638873 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:44:26.638884 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:44:26.638895 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:44:26.638912 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:44:26.638926 kernel: audit: type=1403 audit(1738244665.917:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:44:26.638938 systemd[1]: Successfully loaded SELinux policy in 40.088ms.
Jan 30 13:44:26.638960 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.870ms.
Jan 30 13:44:26.638972 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:44:26.638984 systemd[1]: Detected virtualization kvm.
Jan 30 13:44:26.638996 systemd[1]: Detected architecture x86-64.
Jan 30 13:44:26.639007 systemd[1]: Detected first boot.
Jan 30 13:44:26.639021 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:44:26.639033 zram_generator::config[1062]: No configuration found.
Jan 30 13:44:26.639056 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:44:26.639074 systemd[1]: initrd-switch-root.service: Deactivated successfully.
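Several entries above embed durations directly in the message text ("Successfully loaded SELinux policy in 40.088ms.", "Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.870ms."). When mining boot logs like this one for timing data, a small parser suffices; the regex below is my assumption about the message shape, not part of any systemd tooling.

```python
# Illustrative helper for pulling "in <N>ms" durations out of journal
# messages like the SELinux timing lines above. The pattern also
# tolerates an optional space before "ms" ("Reloading finished in 229 ms.").
import re

DURATION_RE = re.compile(r"in (\d+(?:\.\d+)?)\s*ms")

def extract_ms(message):
    """Return the duration in milliseconds, or None if the message has none."""
    m = DURATION_RE.search(message)
    return float(m.group(1)) if m else None
```

For example, `extract_ms("Successfully loaded SELinux policy in 40.088ms.")` yields `40.088`, while a message without a timing yields `None`.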
Jan 30 13:44:26.639093 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:44:26.639112 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:44:26.639133 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:44:26.639151 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:44:26.639176 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:44:26.639195 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:44:26.639225 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:44:26.639237 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:44:26.639249 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:44:26.639260 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:44:26.639272 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:44:26.639285 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:44:26.639297 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:44:26.639309 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:44:26.639321 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:44:26.639335 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:44:26.639347 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 13:44:26.639360 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:44:26.639380 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:44:26.639392 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:44:26.639404 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:44:26.639415 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:44:26.639430 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:44:26.639442 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:44:26.639454 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:44:26.639466 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:44:26.639477 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:44:26.639489 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:44:26.639501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:44:26.639512 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:44:26.639524 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:44:26.639536 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:44:26.639555 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:44:26.639567 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:44:26.639578 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:44:26.639590 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:44:26.639602 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:44:26.639613 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:44:26.639627 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:44:26.639639 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:44:26.639654 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:44:26.639678 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:44:26.639690 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:44:26.639702 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:44:26.639714 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:44:26.639726 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:44:26.639738 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:44:26.639752 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:44:26.639766 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:44:26.639780 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:44:26.639792 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:44:26.639804 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:44:26.639815 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:44:26.639827 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:44:26.639838 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:44:26.639850 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:44:26.639862 kernel: loop: module loaded
Jan 30 13:44:26.639873 kernel: fuse: init (API version 7.39)
Jan 30 13:44:26.639886 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:44:26.639899 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:44:26.639911 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:44:26.639923 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:44:26.639935 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:44:26.639946 systemd[1]: Stopped verity-setup.service.
Jan 30 13:44:26.639974 systemd-journald[1139]: Collecting audit messages is disabled.
Jan 30 13:44:26.639999 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:44:26.640011 kernel: ACPI: bus type drm_connector registered
Jan 30 13:44:26.640022 systemd-journald[1139]: Journal started
Jan 30 13:44:26.640043 systemd-journald[1139]: Runtime Journal (/run/log/journal/d38dac673f2c4b429b2a8d82bc001334) is 6.0M, max 48.3M, 42.2M free.
Jan 30 13:44:26.423351 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:44:26.439660 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 13:44:26.440227 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:44:26.644458 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:44:26.645124 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:44:26.646276 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:44:26.647459 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:44:26.648535 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:44:26.649722 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:44:26.650912 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:44:26.652136 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:44:26.653564 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:44:26.655066 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:44:26.655239 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:44:26.656933 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:44:26.657104 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:44:26.658517 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:44:26.658704 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:44:26.660036 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:44:26.660203 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:44:26.661815 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:44:26.661987 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:44:26.663354 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:44:26.663564 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:44:26.664942 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:44:26.666296 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:44:26.667817 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:44:26.682919 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:44:26.692772 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:44:26.695049 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:44:26.696159 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:44:26.696191 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:44:26.698151 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:44:26.700479 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:44:26.704869 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:44:26.706208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:44:26.708822 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:44:26.713458 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:44:26.714811 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:44:26.722030 systemd-journald[1139]: Time spent on flushing to /var/log/journal/d38dac673f2c4b429b2a8d82bc001334 is 14.148ms for 989 entries.
Jan 30 13:44:26.722030 systemd-journald[1139]: System Journal (/var/log/journal/d38dac673f2c4b429b2a8d82bc001334) is 8.0M, max 195.6M, 187.6M free.
Jan 30 13:44:26.766910 systemd-journald[1139]: Received client request to flush runtime journal.
Jan 30 13:44:26.719896 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:44:26.721096 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
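The journald status above reports the runtime-to-persistent flush as "14.148ms for 989 entries". A parser for that message, plus the implied per-entry average, can be sketched as follows; the regex is an assumption about this message's wording, not journald code.

```python
# Illustrative parser for journald's flush-timing report, deriving the
# average flush cost per entry in microseconds.
import re

FLUSH_RE = re.compile(r"is ([\d.]+)ms for (\d+) entries")

def flush_cost_us(message):
    """Return average microseconds per flushed entry, or None if not a flush report."""
    m = FLUSH_RE.search(message)
    if not m:
        return None
    total_ms, entries = float(m.group(1)), int(m.group(2))
    return total_ms / entries * 1000.0
```

Applied to the line above, 14.148 ms over 989 entries works out to roughly 14.3 microseconds per entry.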
Jan 30 13:44:26.723176 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:44:26.727841 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:44:26.730327 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:44:26.734438 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:44:26.735931 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:44:26.737589 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:44:26.739206 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:44:26.749124 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:44:26.751640 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:44:26.758356 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:44:26.763631 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:44:26.765272 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:44:26.772780 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:44:26.778690 kernel: loop0: detected capacity change from 0 to 140768
Jan 30 13:44:26.780628 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 13:44:26.787206 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:44:26.788857 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:44:26.795875 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:44:26.803847 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:44:26.804005 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:44:26.828050 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Jan 30 13:44:26.828661 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Jan 30 13:44:26.829684 kernel: loop1: detected capacity change from 0 to 205544
Jan 30 13:44:26.835520 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:44:26.858700 kernel: loop2: detected capacity change from 0 to 142488
Jan 30 13:44:26.895700 kernel: loop3: detected capacity change from 0 to 140768
Jan 30 13:44:26.907693 kernel: loop4: detected capacity change from 0 to 205544
Jan 30 13:44:26.914708 kernel: loop5: detected capacity change from 0 to 142488
Jan 30 13:44:26.924472 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 30 13:44:26.925074 (sd-merge)[1203]: Merged extensions into '/usr'.
Jan 30 13:44:26.930747 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:44:26.930763 systemd[1]: Reloading...
Jan 30 13:44:26.991701 zram_generator::config[1229]: No configuration found.
Jan 30 13:44:27.033178 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:44:27.111694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:44:27.160276 systemd[1]: Reloading finished in 229 ms.
Jan 30 13:44:27.199605 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:44:27.201198 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
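The "(sd-merge)" entries above show systemd-sysext stacking the extension images 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' onto the base /usr (each loop device corresponds to one mounted image). The real mechanism is an overlayfs mount; the dict-based sketch below only illustrates the layering idea, with the precedence rule (later layer wins on path collisions) being my simplifying assumption rather than sysext's documented conflict handling.

```python
# Toy model of extension-image layering: each layer maps a path to file
# contents, and layers stacked later shadow earlier ones. Real
# systemd-sysext uses an overlayfs mount over /usr, not dict merging.
def merge_layers(base, extensions):
    """Overlay extension layers on top of a base tree without mutating it."""
    merged = dict(base)
    for layer in extensions:
        merged.update(layer)  # later layer wins on conflicting paths
    return merged
```

For instance, a base tree plus a hypothetical kubernetes layer yields a merged view where the extension's files appear alongside, or in place of, the base files.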
Jan 30 13:44:27.210820 systemd[1]: Starting ensure-sysext.service... Jan 30 13:44:27.212783 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:44:27.220736 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:44:27.220745 systemd[1]: Reloading... Jan 30 13:44:27.236494 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:44:27.236877 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:44:27.237877 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:44:27.238175 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jan 30 13:44:27.238339 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jan 30 13:44:27.241575 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:44:27.241587 systemd-tmpfiles[1267]: Skipping /boot Jan 30 13:44:27.252150 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:44:27.252238 systemd-tmpfiles[1267]: Skipping /boot Jan 30 13:44:27.277694 zram_generator::config[1297]: No configuration found. Jan 30 13:44:27.385698 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:44:27.435373 systemd[1]: Reloading finished in 214 ms. Jan 30 13:44:27.455097 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:44:27.468162 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:44:27.477220 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jan 30 13:44:27.479790 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:44:27.482155 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:44:27.485494 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:44:27.489703 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:44:27.493053 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:44:27.497408 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:44:27.497569 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:44:27.509373 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:44:27.516642 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:44:27.520371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:44:27.522946 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:44:27.526503 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:44:27.527633 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:44:27.528973 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:44:27.529109 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Jan 30 13:44:27.530987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:44:27.531378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 30 13:44:27.533164 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:44:27.533338 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:44:27.535340 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:44:27.535703 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:44:27.545445 augenrules[1358]: No rules Jan 30 13:44:27.548108 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:44:27.550128 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:44:27.555785 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:44:27.556077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:44:27.565971 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:44:27.568835 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:44:27.571833 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:44:27.574821 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:44:27.576308 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:44:27.580762 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:44:27.582889 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:44:27.583199 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:44:27.584664 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 30 13:44:27.587257 systemd[1]: Finished ensure-sysext.service. Jan 30 13:44:27.588738 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:44:27.591163 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:44:27.591335 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:44:27.602372 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:44:27.602554 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:44:27.604207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:44:27.604413 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:44:27.620686 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1373) Jan 30 13:44:27.618596 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:44:27.621306 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:44:27.622638 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:44:27.622717 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:44:27.624823 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:44:27.625977 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:44:27.626364 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:44:27.626552 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 30 13:44:27.644999 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:44:27.680252 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:44:27.689074 systemd-resolved[1336]: Positive Trust Anchors: Jan 30 13:44:27.692947 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 13:44:27.689092 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:44:27.689125 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:44:27.692884 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:44:27.693329 systemd-resolved[1336]: Defaulting to hostname 'linux'. Jan 30 13:44:27.695702 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:44:27.696017 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:44:27.698780 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:44:27.706740 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 13:44:27.715031 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 30 13:44:27.730178 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 30 13:44:27.730460 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 13:44:27.730615 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 13:44:27.730840 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 13:44:27.739196 systemd-networkd[1405]: lo: Link UP Jan 30 13:44:27.739210 systemd-networkd[1405]: lo: Gained carrier Jan 30 13:44:27.741002 systemd-networkd[1405]: Enumeration completed Jan 30 13:44:27.741103 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:44:27.743993 systemd[1]: Reached target network.target - Network. Jan 30 13:44:27.748464 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:44:27.748477 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:44:27.749233 systemd-networkd[1405]: eth0: Link UP Jan 30 13:44:27.749237 systemd-networkd[1405]: eth0: Gained carrier Jan 30 13:44:27.749250 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:44:27.753288 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:44:27.754869 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:44:27.757718 systemd-networkd[1405]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:44:27.758465 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Jan 30 13:44:27.762211 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:44:27.762259 systemd-timesyncd[1406]: Initial clock synchronization to Thu 2025-01-30 13:44:27.982724 UTC. 
Jan 30 13:44:27.769940 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:44:27.783203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:44:27.789531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:44:27.789808 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:44:27.792475 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:44:27.824432 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:44:27.834888 kernel: kvm_amd: TSC scaling supported Jan 30 13:44:27.834915 kernel: kvm_amd: Nested Virtualization enabled Jan 30 13:44:27.834927 kernel: kvm_amd: Nested Paging enabled Jan 30 13:44:27.834952 kernel: kvm_amd: LBR virtualization supported Jan 30 13:44:27.836163 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 30 13:44:27.836178 kernel: kvm_amd: Virtual GIF supported Jan 30 13:44:27.854689 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:44:27.865323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:44:27.890105 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:44:27.901881 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:44:27.910882 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:44:27.939650 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:44:27.941231 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:44:27.942397 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:44:27.943650 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 30 13:44:27.945039 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:44:27.946569 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:44:27.947867 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:44:27.949204 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:44:27.950534 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:44:27.950563 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:44:27.951538 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:44:27.953144 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:44:27.956146 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:44:27.967430 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:44:27.969960 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:44:27.971619 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:44:27.972873 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:44:27.973877 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:44:27.975038 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:44:27.975066 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:44:27.976015 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:44:27.978150 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:44:27.982948 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jan 30 13:44:27.986040 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:44:27.987481 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:44:27.990008 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:44:27.989912 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:44:27.990285 jq[1442]: false Jan 30 13:44:27.994760 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:44:28.002844 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:44:28.007630 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:44:28.009896 dbus-daemon[1441]: [system] SELinux support is enabled Jan 30 13:44:28.011280 extend-filesystems[1443]: Found loop3 Jan 30 13:44:28.011280 extend-filesystems[1443]: Found loop4 Jan 30 13:44:28.011280 extend-filesystems[1443]: Found loop5 Jan 30 13:44:28.011280 extend-filesystems[1443]: Found sr0 Jan 30 13:44:28.011280 extend-filesystems[1443]: Found vda Jan 30 13:44:28.011280 extend-filesystems[1443]: Found vda1 Jan 30 13:44:28.011280 extend-filesystems[1443]: Found vda2 Jan 30 13:44:28.011280 extend-filesystems[1443]: Found vda3 Jan 30 13:44:28.011280 extend-filesystems[1443]: Found usr Jan 30 13:44:28.011280 extend-filesystems[1443]: Found vda4 Jan 30 13:44:28.011280 extend-filesystems[1443]: Found vda6 Jan 30 13:44:28.011280 extend-filesystems[1443]: Found vda7 Jan 30 13:44:28.011280 extend-filesystems[1443]: Found vda9 Jan 30 13:44:28.011280 extend-filesystems[1443]: Checking size of /dev/vda9 Jan 30 13:44:28.055719 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1375) Jan 30 13:44:28.055750 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 
blocks Jan 30 13:44:28.057029 extend-filesystems[1443]: Resized partition /dev/vda9 Jan 30 13:44:28.016520 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:44:28.062090 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:44:28.080379 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:44:28.018073 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:44:28.018501 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:44:28.110857 jq[1462]: true Jan 30 13:44:28.019841 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:44:28.111150 update_engine[1460]: I20250130 13:44:28.062557 1460 main.cc:92] Flatcar Update Engine starting Jan 30 13:44:28.111150 update_engine[1460]: I20250130 13:44:28.065043 1460 update_check_scheduler.cc:74] Next update check in 6m50s Jan 30 13:44:28.022576 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:44:28.116935 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:44:28.116935 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:44:28.116935 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:44:28.025442 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:44:28.121344 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Jan 30 13:44:28.123604 tar[1466]: linux-amd64/helm Jan 30 13:44:28.037040 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:44:28.124003 jq[1467]: true Jan 30 13:44:28.040472 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 30 13:44:28.040674 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:44:28.041018 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:44:28.041210 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:44:28.045706 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:44:28.045939 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:44:28.079354 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:44:28.100797 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:44:28.102835 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:44:28.104181 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:44:28.104200 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:44:28.105723 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:44:28.105740 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:44:28.111419 systemd-logind[1456]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:44:28.111442 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:44:28.113678 systemd-logind[1456]: New seat seat0. Jan 30 13:44:28.115992 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:44:28.117613 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 30 13:44:28.119256 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:44:28.119463 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:44:28.154624 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:44:28.155057 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:44:28.156093 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:44:28.159255 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:44:28.283201 containerd[1468]: time="2025-01-30T13:44:28.283108442Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:44:28.308905 containerd[1468]: time="2025-01-30T13:44:28.308824224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:44:28.311735 containerd[1468]: time="2025-01-30T13:44:28.310818295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:44:28.311735 containerd[1468]: time="2025-01-30T13:44:28.310847079Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:44:28.311735 containerd[1468]: time="2025-01-30T13:44:28.310869440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:44:28.311735 containerd[1468]: time="2025-01-30T13:44:28.311037136Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 30 13:44:28.311735 containerd[1468]: time="2025-01-30T13:44:28.311053124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:44:28.311735 containerd[1468]: time="2025-01-30T13:44:28.311119816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:44:28.311735 containerd[1468]: time="2025-01-30T13:44:28.311131964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:44:28.311735 containerd[1468]: time="2025-01-30T13:44:28.311305796Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:44:28.311735 containerd[1468]: time="2025-01-30T13:44:28.311320282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:44:28.311735 containerd[1468]: time="2025-01-30T13:44:28.311333326Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:44:28.311735 containerd[1468]: time="2025-01-30T13:44:28.311343168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:44:28.312093 containerd[1468]: time="2025-01-30T13:44:28.311435967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:44:28.312093 containerd[1468]: time="2025-01-30T13:44:28.311662427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:44:28.312093 containerd[1468]: time="2025-01-30T13:44:28.311821517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:44:28.312093 containerd[1468]: time="2025-01-30T13:44:28.311835818Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:44:28.312093 containerd[1468]: time="2025-01-30T13:44:28.311936544Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:44:28.312093 containerd[1468]: time="2025-01-30T13:44:28.311990974Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:44:28.318096 containerd[1468]: time="2025-01-30T13:44:28.318064823Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:44:28.318131 containerd[1468]: time="2025-01-30T13:44:28.318104571Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:44:28.318131 containerd[1468]: time="2025-01-30T13:44:28.318123761Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:44:28.318184 containerd[1468]: time="2025-01-30T13:44:28.318138668Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:44:28.318184 containerd[1468]: time="2025-01-30T13:44:28.318152948Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:44:28.318321 containerd[1468]: time="2025-01-30T13:44:28.318294598Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 30 13:44:28.318560 containerd[1468]: time="2025-01-30T13:44:28.318539177Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:44:28.318690 containerd[1468]: time="2025-01-30T13:44:28.318658518Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:44:28.318690 containerd[1468]: time="2025-01-30T13:44:28.318679684Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:44:28.318740 containerd[1468]: time="2025-01-30T13:44:28.318708736Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:44:28.318740 containerd[1468]: time="2025-01-30T13:44:28.318723901Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:44:28.318777 containerd[1468]: time="2025-01-30T13:44:28.318740991Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:44:28.318777 containerd[1468]: time="2025-01-30T13:44:28.318754509Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:44:28.318777 containerd[1468]: time="2025-01-30T13:44:28.318766894Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:44:28.318828 containerd[1468]: time="2025-01-30T13:44:28.318779578Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:44:28.318828 containerd[1468]: time="2025-01-30T13:44:28.318797707Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 30 13:44:28.318828 containerd[1468]: time="2025-01-30T13:44:28.318809361Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:44:28.318828 containerd[1468]: time="2025-01-30T13:44:28.318819913Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:44:28.318910 containerd[1468]: time="2025-01-30T13:44:28.318839937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.318910 containerd[1468]: time="2025-01-30T13:44:28.318852857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.318910 containerd[1468]: time="2025-01-30T13:44:28.318865170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.318910 containerd[1468]: time="2025-01-30T13:44:28.318887130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.318910 containerd[1468]: time="2025-01-30T13:44:28.318899957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.319006 containerd[1468]: time="2025-01-30T13:44:28.318913155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.319006 containerd[1468]: time="2025-01-30T13:44:28.318926117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.319006 containerd[1468]: time="2025-01-30T13:44:28.318938636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.319006 containerd[1468]: time="2025-01-30T13:44:28.318954511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 13:44:28.319006 containerd[1468]: time="2025-01-30T13:44:28.318969295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.319006 containerd[1468]: time="2025-01-30T13:44:28.318980846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.319006 containerd[1468]: time="2025-01-30T13:44:28.318992994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.319006 containerd[1468]: time="2025-01-30T13:44:28.319004874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.319144 containerd[1468]: time="2025-01-30T13:44:28.319024270Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:44:28.319144 containerd[1468]: time="2025-01-30T13:44:28.319050904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.319144 containerd[1468]: time="2025-01-30T13:44:28.319063587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.319144 containerd[1468]: time="2025-01-30T13:44:28.319074520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:44:28.319144 containerd[1468]: time="2025-01-30T13:44:28.319128950Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:44:28.319234 containerd[1468]: time="2025-01-30T13:44:28.319145289Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:44:28.319234 containerd[1468]: time="2025-01-30T13:44:28.319156613Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:44:28.319234 containerd[1468]: time="2025-01-30T13:44:28.319169245Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:44:28.319234 containerd[1468]: time="2025-01-30T13:44:28.319179098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:44:28.319234 containerd[1468]: time="2025-01-30T13:44:28.319191554Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:44:28.319234 containerd[1468]: time="2025-01-30T13:44:28.319202087Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:44:28.319234 containerd[1468]: time="2025-01-30T13:44:28.319213215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:44:28.319544 containerd[1468]: time="2025-01-30T13:44:28.319471302Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:44:28.319544 containerd[1468]: time="2025-01-30T13:44:28.319540938Z" level=info msg="Connect containerd service" Jan 30 13:44:28.319714 containerd[1468]: time="2025-01-30T13:44:28.319575138Z" level=info msg="using legacy CRI server" Jan 30 13:44:28.319714 containerd[1468]: time="2025-01-30T13:44:28.319582612Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:44:28.319714 containerd[1468]: time="2025-01-30T13:44:28.319669173Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:44:28.320283 containerd[1468]: time="2025-01-30T13:44:28.320260027Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:44:28.320606 containerd[1468]: time="2025-01-30T13:44:28.320459299Z" level=info msg="Start subscribing containerd event" Jan 30 13:44:28.320606 containerd[1468]: time="2025-01-30T13:44:28.320532393Z" level=info msg="Start recovering state" Jan 30 13:44:28.321422 containerd[1468]: time="2025-01-30T13:44:28.320699235Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 30 13:44:28.321422 containerd[1468]: time="2025-01-30T13:44:28.320708100Z" level=info msg="Start event monitor" Jan 30 13:44:28.321422 containerd[1468]: time="2025-01-30T13:44:28.320751719Z" level=info msg="Start snapshots syncer" Jan 30 13:44:28.321422 containerd[1468]: time="2025-01-30T13:44:28.320762570Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:44:28.321422 containerd[1468]: time="2025-01-30T13:44:28.320771270Z" level=info msg="Start streaming server" Jan 30 13:44:28.321422 containerd[1468]: time="2025-01-30T13:44:28.320777395Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:44:28.321422 containerd[1468]: time="2025-01-30T13:44:28.320977551Z" level=info msg="containerd successfully booted in 0.040174s" Jan 30 13:44:28.321102 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:44:28.433317 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:44:28.457578 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:44:28.472933 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:44:28.475193 systemd[1]: Started sshd@0-10.0.0.96:22-10.0.0.1:43962.service - OpenSSH per-connection server daemon (10.0.0.1:43962). Jan 30 13:44:28.482667 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:44:28.482992 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:44:28.485612 tar[1466]: linux-amd64/LICENSE Jan 30 13:44:28.485843 tar[1466]: linux-amd64/README.md Jan 30 13:44:28.486410 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:44:28.498966 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:44:28.503207 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:44:28.516970 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 30 13:44:28.519116 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:44:28.520441 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:44:28.552315 sshd[1522]: Accepted publickey for core from 10.0.0.1 port 43962 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:28.554591 sshd[1522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:28.563528 systemd-logind[1456]: New session 1 of user core. Jan 30 13:44:28.564805 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:44:28.576912 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:44:28.589609 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:44:28.593899 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:44:28.603251 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:44:28.720078 systemd[1537]: Queued start job for default target default.target. Jan 30 13:44:28.732170 systemd[1537]: Created slice app.slice - User Application Slice. Jan 30 13:44:28.732207 systemd[1537]: Reached target paths.target - Paths. Jan 30 13:44:28.732221 systemd[1537]: Reached target timers.target - Timers. Jan 30 13:44:28.734020 systemd[1537]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:44:28.746370 systemd[1537]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:44:28.746504 systemd[1537]: Reached target sockets.target - Sockets. Jan 30 13:44:28.746523 systemd[1537]: Reached target basic.target - Basic System. Jan 30 13:44:28.746565 systemd[1537]: Reached target default.target - Main User Target. Jan 30 13:44:28.746601 systemd[1537]: Startup finished in 137ms. Jan 30 13:44:28.747119 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 30 13:44:28.749897 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:44:28.813422 systemd[1]: Started sshd@1-10.0.0.96:22-10.0.0.1:43964.service - OpenSSH per-connection server daemon (10.0.0.1:43964). Jan 30 13:44:28.851495 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 43964 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:28.853045 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:28.857582 systemd-logind[1456]: New session 2 of user core. Jan 30 13:44:28.866810 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:44:28.925156 sshd[1548]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:28.933936 systemd[1]: sshd@1-10.0.0.96:22-10.0.0.1:43964.service: Deactivated successfully. Jan 30 13:44:28.936008 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:44:28.937917 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:44:28.952955 systemd[1]: Started sshd@2-10.0.0.96:22-10.0.0.1:43974.service - OpenSSH per-connection server daemon (10.0.0.1:43974). Jan 30 13:44:28.955236 systemd-logind[1456]: Removed session 2. Jan 30 13:44:28.983599 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 43974 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:28.985467 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:28.989618 systemd-logind[1456]: New session 3 of user core. Jan 30 13:44:29.001827 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:44:29.059153 sshd[1555]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:29.063362 systemd[1]: sshd@2-10.0.0.96:22-10.0.0.1:43974.service: Deactivated successfully. Jan 30 13:44:29.065169 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:44:29.065926 systemd-logind[1456]: Session 3 logged out. 
Waiting for processes to exit. Jan 30 13:44:29.066945 systemd-logind[1456]: Removed session 3. Jan 30 13:44:29.501030 systemd-networkd[1405]: eth0: Gained IPv6LL Jan 30 13:44:29.504327 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:44:29.506239 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:44:29.520981 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:44:29.523662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:44:29.525890 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:44:29.543765 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:44:29.544017 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:44:29.545831 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:44:29.549476 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:44:30.143493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:44:30.145146 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:44:30.146491 systemd[1]: Startup finished in 689ms (kernel) + 5.216s (initrd) + 4.267s (userspace) = 10.173s. 
Jan 30 13:44:30.151196 (kubelet)[1583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:44:30.552880 kubelet[1583]: E0130 13:44:30.552822 1583 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:44:30.557044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:44:30.557256 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:44:39.213082 systemd[1]: Started sshd@3-10.0.0.96:22-10.0.0.1:43772.service - OpenSSH per-connection server daemon (10.0.0.1:43772). Jan 30 13:44:39.244286 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 43772 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:39.245690 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:39.249229 systemd-logind[1456]: New session 4 of user core. Jan 30 13:44:39.258792 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:44:39.311164 sshd[1596]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:39.322182 systemd[1]: sshd@3-10.0.0.96:22-10.0.0.1:43772.service: Deactivated successfully. Jan 30 13:44:39.323964 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:44:39.325446 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:44:39.335902 systemd[1]: Started sshd@4-10.0.0.96:22-10.0.0.1:43784.service - OpenSSH per-connection server daemon (10.0.0.1:43784). Jan 30 13:44:39.336644 systemd-logind[1456]: Removed session 4. 
Jan 30 13:44:39.363431 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 43784 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:39.364869 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:39.368971 systemd-logind[1456]: New session 5 of user core. Jan 30 13:44:39.378829 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:44:39.428653 sshd[1603]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:39.441156 systemd[1]: sshd@4-10.0.0.96:22-10.0.0.1:43784.service: Deactivated successfully. Jan 30 13:44:39.442910 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:44:39.444503 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:44:39.455897 systemd[1]: Started sshd@5-10.0.0.96:22-10.0.0.1:43798.service - OpenSSH per-connection server daemon (10.0.0.1:43798). Jan 30 13:44:39.456746 systemd-logind[1456]: Removed session 5. Jan 30 13:44:39.482725 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 43798 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:39.484026 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:39.487570 systemd-logind[1456]: New session 6 of user core. Jan 30 13:44:39.502793 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:44:39.556827 sshd[1610]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:39.572317 systemd[1]: sshd@5-10.0.0.96:22-10.0.0.1:43798.service: Deactivated successfully. Jan 30 13:44:39.574137 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:44:39.575833 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:44:39.577028 systemd[1]: Started sshd@6-10.0.0.96:22-10.0.0.1:43802.service - OpenSSH per-connection server daemon (10.0.0.1:43802). Jan 30 13:44:39.577731 systemd-logind[1456]: Removed session 6. 
Jan 30 13:44:39.608064 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 43802 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:39.609516 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:39.613058 systemd-logind[1456]: New session 7 of user core. Jan 30 13:44:39.622790 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:44:39.679764 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:44:39.680103 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:44:39.949875 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:44:39.950057 (dockerd)[1639]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:44:40.216499 dockerd[1639]: time="2025-01-30T13:44:40.216359851Z" level=info msg="Starting up" Jan 30 13:44:40.316581 dockerd[1639]: time="2025-01-30T13:44:40.316535849Z" level=info msg="Loading containers: start." Jan 30 13:44:40.425700 kernel: Initializing XFRM netlink socket Jan 30 13:44:40.504655 systemd-networkd[1405]: docker0: Link UP Jan 30 13:44:40.527069 dockerd[1639]: time="2025-01-30T13:44:40.527025179Z" level=info msg="Loading containers: done." Jan 30 13:44:40.540552 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck176201515-merged.mount: Deactivated successfully. 
Jan 30 13:44:40.542125 dockerd[1639]: time="2025-01-30T13:44:40.542090061Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:44:40.542186 dockerd[1639]: time="2025-01-30T13:44:40.542159685Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:44:40.542280 dockerd[1639]: time="2025-01-30T13:44:40.542256410Z" level=info msg="Daemon has completed initialization" Jan 30 13:44:40.576286 dockerd[1639]: time="2025-01-30T13:44:40.576225590Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:44:40.576365 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:44:40.577111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:44:40.582836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:44:40.728798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:44:40.732982 (kubelet)[1792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:44:40.778445 kubelet[1792]: E0130 13:44:40.778318 1792 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:44:40.784901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:44:40.785096 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:44:41.429229 containerd[1468]: time="2025-01-30T13:44:41.429191143Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 13:44:42.195761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2178777871.mount: Deactivated successfully. Jan 30 13:44:43.112616 containerd[1468]: time="2025-01-30T13:44:43.112567907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:43.113383 containerd[1468]: time="2025-01-30T13:44:43.113342888Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 30 13:44:43.114357 containerd[1468]: time="2025-01-30T13:44:43.114331993Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:43.116787 containerd[1468]: time="2025-01-30T13:44:43.116762201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:43.118631 containerd[1468]: time="2025-01-30T13:44:43.118592104Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.689361922s" Jan 30 13:44:43.118662 containerd[1468]: time="2025-01-30T13:44:43.118637084Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 30 13:44:43.119926 containerd[1468]: 
time="2025-01-30T13:44:43.119897601Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 13:44:44.574170 containerd[1468]: time="2025-01-30T13:44:44.574117339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:44.584817 containerd[1468]: time="2025-01-30T13:44:44.584774501Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 30 13:44:44.585834 containerd[1468]: time="2025-01-30T13:44:44.585802187Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:44.588662 containerd[1468]: time="2025-01-30T13:44:44.588616680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:44.589700 containerd[1468]: time="2025-01-30T13:44:44.589647019Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.469718732s" Jan 30 13:44:44.589758 containerd[1468]: time="2025-01-30T13:44:44.589703388Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 30 13:44:44.590287 containerd[1468]: time="2025-01-30T13:44:44.590129778Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 
13:44:45.745808 containerd[1468]: time="2025-01-30T13:44:45.745755425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:45.746519 containerd[1468]: time="2025-01-30T13:44:45.746478963Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 30 13:44:45.747686 containerd[1468]: time="2025-01-30T13:44:45.747648341Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:45.750111 containerd[1468]: time="2025-01-30T13:44:45.750079151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:45.751142 containerd[1468]: time="2025-01-30T13:44:45.751090013Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.160928848s" Jan 30 13:44:45.751179 containerd[1468]: time="2025-01-30T13:44:45.751141364Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 30 13:44:45.751782 containerd[1468]: time="2025-01-30T13:44:45.751611469Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 13:44:46.712344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3187251821.mount: Deactivated successfully. 
Jan 30 13:44:47.337088 containerd[1468]: time="2025-01-30T13:44:47.337018507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:47.337798 containerd[1468]: time="2025-01-30T13:44:47.337742296Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 30 13:44:47.339062 containerd[1468]: time="2025-01-30T13:44:47.339030380Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:47.340965 containerd[1468]: time="2025-01-30T13:44:47.340934446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:47.341560 containerd[1468]: time="2025-01-30T13:44:47.341516381Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.589875522s" Jan 30 13:44:47.341598 containerd[1468]: time="2025-01-30T13:44:47.341560087Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 30 13:44:47.342074 containerd[1468]: time="2025-01-30T13:44:47.342038113Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:44:47.932552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1936992480.mount: Deactivated successfully. 
Jan 30 13:44:48.555410 containerd[1468]: time="2025-01-30T13:44:48.555342847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:48.556061 containerd[1468]: time="2025-01-30T13:44:48.556018519Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:44:48.557289 containerd[1468]: time="2025-01-30T13:44:48.557239710Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:48.560041 containerd[1468]: time="2025-01-30T13:44:48.560006859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:48.560921 containerd[1468]: time="2025-01-30T13:44:48.560892966Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.218827515s" Jan 30 13:44:48.560982 containerd[1468]: time="2025-01-30T13:44:48.560921775Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:44:48.561432 containerd[1468]: time="2025-01-30T13:44:48.561390173Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:44:49.118785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2882121242.mount: Deactivated successfully. 
Jan 30 13:44:49.143690 containerd[1468]: time="2025-01-30T13:44:49.143637003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:49.144493 containerd[1468]: time="2025-01-30T13:44:49.144441420Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 13:44:49.145682 containerd[1468]: time="2025-01-30T13:44:49.145626906Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:49.147923 containerd[1468]: time="2025-01-30T13:44:49.147881444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:49.148568 containerd[1468]: time="2025-01-30T13:44:49.148514073Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 587.091028ms" Jan 30 13:44:49.148568 containerd[1468]: time="2025-01-30T13:44:49.148540427Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:44:49.149003 containerd[1468]: time="2025-01-30T13:44:49.148961697Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 13:44:49.959181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2235837293.mount: Deactivated successfully. Jan 30 13:44:50.799969 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 30 13:44:50.815833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:44:51.050775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:44:51.055920 (kubelet)[1978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:44:51.194288 kubelet[1978]: E0130 13:44:51.194220 1978 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:44:51.198411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:44:51.198606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:44:52.378954 containerd[1468]: time="2025-01-30T13:44:52.378905494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:52.379938 containerd[1468]: time="2025-01-30T13:44:52.379887771Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 30 13:44:52.381272 containerd[1468]: time="2025-01-30T13:44:52.381240165Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:52.384241 containerd[1468]: time="2025-01-30T13:44:52.384197047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:52.385350 containerd[1468]: time="2025-01-30T13:44:52.385317448Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.236331006s" Jan 30 13:44:52.385387 containerd[1468]: time="2025-01-30T13:44:52.385350045Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 30 13:44:55.057252 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:44:55.065879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:44:55.093559 systemd[1]: Reloading requested from client PID 2020 ('systemctl') (unit session-7.scope)... Jan 30 13:44:55.093576 systemd[1]: Reloading... Jan 30 13:44:55.176748 zram_generator::config[2062]: No configuration found. Jan 30 13:44:55.407082 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:44:55.482837 systemd[1]: Reloading finished in 388 ms. Jan 30 13:44:55.535706 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:44:55.535797 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:44:55.536098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:44:55.537640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:44:55.687233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:44:55.691490 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:44:55.725909 kubelet[2107]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:44:55.725909 kubelet[2107]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:44:55.725909 kubelet[2107]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:44:55.726874 kubelet[2107]: I0130 13:44:55.726826 2107 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:44:55.894316 kubelet[2107]: I0130 13:44:55.894269 2107 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:44:55.894316 kubelet[2107]: I0130 13:44:55.894297 2107 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:44:55.894529 kubelet[2107]: I0130 13:44:55.894508 2107 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:44:55.915342 kubelet[2107]: E0130 13:44:55.915300 2107 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:44:55.919355 kubelet[2107]: I0130 
13:44:55.919301 2107 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:44:55.925044 kubelet[2107]: E0130 13:44:55.925005 2107 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:44:55.925044 kubelet[2107]: I0130 13:44:55.925039 2107 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:44:55.930830 kubelet[2107]: I0130 13:44:55.930793 2107 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:44:55.931680 kubelet[2107]: I0130 13:44:55.931638 2107 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:44:55.931850 kubelet[2107]: I0130 13:44:55.931807 2107 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:44:55.932039 kubelet[2107]: I0130 13:44:55.931838 2107 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:44:55.932039 kubelet[2107]: I0130 13:44:55.932036 2107 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:44:55.932146 kubelet[2107]: I0130 13:44:55.932045 2107 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:44:55.932193 kubelet[2107]: I0130 13:44:55.932177 2107 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:44:55.933411 kubelet[2107]: I0130 13:44:55.933385 2107 kubelet.go:408] "Attempting 
to sync node with API server" Jan 30 13:44:55.933411 kubelet[2107]: I0130 13:44:55.933408 2107 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:44:55.933463 kubelet[2107]: I0130 13:44:55.933447 2107 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:44:55.933463 kubelet[2107]: I0130 13:44:55.933458 2107 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:44:55.936894 kubelet[2107]: W0130 13:44:55.936846 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Jan 30 13:44:55.936946 kubelet[2107]: E0130 13:44:55.936912 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:44:55.938064 kubelet[2107]: W0130 13:44:55.937970 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Jan 30 13:44:55.938064 kubelet[2107]: E0130 13:44:55.938010 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:44:55.940004 kubelet[2107]: I0130 13:44:55.939959 2107 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" 
version="v1.7.21" apiVersion="v1" Jan 30 13:44:55.941864 kubelet[2107]: I0130 13:44:55.941838 2107 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:44:55.942340 kubelet[2107]: W0130 13:44:55.942317 2107 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:44:55.943454 kubelet[2107]: I0130 13:44:55.943430 2107 server.go:1269] "Started kubelet" Jan 30 13:44:55.944297 kubelet[2107]: I0130 13:44:55.943734 2107 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:44:55.945060 kubelet[2107]: I0130 13:44:55.945030 2107 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:44:55.945981 kubelet[2107]: I0130 13:44:55.945902 2107 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:44:55.946408 kubelet[2107]: I0130 13:44:55.946390 2107 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:44:55.948551 kubelet[2107]: I0130 13:44:55.948524 2107 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:44:55.949494 kubelet[2107]: I0130 13:44:55.948756 2107 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:44:55.949565 kubelet[2107]: I0130 13:44:55.949512 2107 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:44:55.949722 kubelet[2107]: E0130 13:44:55.949703 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:44:55.950942 kubelet[2107]: I0130 13:44:55.950304 2107 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:44:55.950942 kubelet[2107]: I0130 13:44:55.950373 2107 reconciler.go:26] "Reconciler: start to sync state" Jan 30 
13:44:55.950942 kubelet[2107]: E0130 13:44:55.946830 2107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.96:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.96:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7c576269c4b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:44:55.943398584 +0000 UTC m=+0.247924519,LastTimestamp:2025-01-30 13:44:55.943398584 +0000 UTC m=+0.247924519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:44:55.950942 kubelet[2107]: E0130 13:44:55.950703 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="200ms" Jan 30 13:44:55.950942 kubelet[2107]: W0130 13:44:55.950867 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Jan 30 13:44:55.951438 kubelet[2107]: E0130 13:44:55.951148 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:44:55.951831 kubelet[2107]: E0130 13:44:55.951806 2107 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:44:55.952045 kubelet[2107]: I0130 13:44:55.952027 2107 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:44:55.952107 kubelet[2107]: I0130 13:44:55.952091 2107 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:44:55.952893 kubelet[2107]: I0130 13:44:55.952875 2107 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:44:55.966072 kubelet[2107]: I0130 13:44:55.965919 2107 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:44:55.967629 kubelet[2107]: I0130 13:44:55.967597 2107 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:44:55.967696 kubelet[2107]: I0130 13:44:55.967641 2107 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:44:55.967758 kubelet[2107]: I0130 13:44:55.967741 2107 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:44:55.967818 kubelet[2107]: E0130 13:44:55.967788 2107 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:44:55.970460 kubelet[2107]: I0130 13:44:55.970419 2107 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:44:55.970460 kubelet[2107]: I0130 13:44:55.970436 2107 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:44:55.970460 kubelet[2107]: I0130 13:44:55.970453 2107 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:44:55.971815 kubelet[2107]: W0130 13:44:55.971765 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Jan 30 13:44:55.972102 kubelet[2107]: E0130 13:44:55.971817 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:44:56.050343 kubelet[2107]: E0130 13:44:56.050299 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:44:56.068772 kubelet[2107]: E0130 13:44:56.068738 2107 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:44:56.151168 kubelet[2107]: E0130 13:44:56.151087 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:44:56.151510 kubelet[2107]: E0130 13:44:56.151437 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="400ms" Jan 30 13:44:56.251558 kubelet[2107]: E0130 13:44:56.251421 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:44:56.269870 kubelet[2107]: E0130 13:44:56.269823 2107 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:44:56.352243 kubelet[2107]: E0130 13:44:56.352183 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:44:56.452352 kubelet[2107]: E0130 13:44:56.452302 2107 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:44:56.511907 kubelet[2107]: I0130 13:44:56.511838 2107 policy_none.go:49] "None policy: Start" Jan 30 13:44:56.512617 kubelet[2107]: I0130 13:44:56.512590 2107 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:44:56.512617 kubelet[2107]: I0130 13:44:56.512613 2107 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:44:56.552473 kubelet[2107]: E0130 13:44:56.552429 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:44:56.552876 kubelet[2107]: E0130 13:44:56.552823 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="800ms" Jan 30 13:44:56.585860 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:44:56.601363 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:44:56.618436 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 30 13:44:56.619743 kubelet[2107]: I0130 13:44:56.619657 2107 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:44:56.620059 kubelet[2107]: I0130 13:44:56.620030 2107 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:44:56.620090 kubelet[2107]: I0130 13:44:56.620052 2107 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:44:56.620266 kubelet[2107]: I0130 13:44:56.620249 2107 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:44:56.621409 kubelet[2107]: E0130 13:44:56.621364 2107 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:44:56.677306 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. Jan 30 13:44:56.708012 systemd[1]: Created slice kubepods-burstable-podb7c86b90fcf8ce890bc1aef7faeeb0a3.slice - libcontainer container kubepods-burstable-podb7c86b90fcf8ce890bc1aef7faeeb0a3.slice. Jan 30 13:44:56.719632 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. 
Jan 30 13:44:56.722388 kubelet[2107]: I0130 13:44:56.722360 2107 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:44:56.722720 kubelet[2107]: E0130 13:44:56.722687 2107 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Jan 30 13:44:56.754025 kubelet[2107]: I0130 13:44:56.753997 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:44:56.754338 kubelet[2107]: I0130 13:44:56.754026 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7c86b90fcf8ce890bc1aef7faeeb0a3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7c86b90fcf8ce890bc1aef7faeeb0a3\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:44:56.754338 kubelet[2107]: I0130 13:44:56.754045 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:44:56.754338 kubelet[2107]: I0130 13:44:56.754063 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:44:56.754338 
kubelet[2107]: I0130 13:44:56.754078 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:44:56.754338 kubelet[2107]: I0130 13:44:56.754095 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:44:56.754451 kubelet[2107]: I0130 13:44:56.754110 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7c86b90fcf8ce890bc1aef7faeeb0a3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7c86b90fcf8ce890bc1aef7faeeb0a3\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:44:56.754451 kubelet[2107]: I0130 13:44:56.754126 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7c86b90fcf8ce890bc1aef7faeeb0a3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b7c86b90fcf8ce890bc1aef7faeeb0a3\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:44:56.754451 kubelet[2107]: I0130 13:44:56.754141 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" 
Jan 30 13:44:56.846676 kubelet[2107]: W0130 13:44:56.846619 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Jan 30 13:44:56.846768 kubelet[2107]: E0130 13:44:56.846718 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:44:56.924751 kubelet[2107]: I0130 13:44:56.924724 2107 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:44:56.925124 kubelet[2107]: E0130 13:44:56.925082 2107 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Jan 30 13:44:56.941983 kubelet[2107]: W0130 13:44:56.941909 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Jan 30 13:44:56.942030 kubelet[2107]: E0130 13:44:56.941986 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:44:57.005443 kubelet[2107]: E0130 13:44:57.005412 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:44:57.006129 containerd[1468]: time="2025-01-30T13:44:57.006087693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 30 13:44:57.017402 kubelet[2107]: E0130 13:44:57.017364 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:44:57.017796 containerd[1468]: time="2025-01-30T13:44:57.017758718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b7c86b90fcf8ce890bc1aef7faeeb0a3,Namespace:kube-system,Attempt:0,}" Jan 30 13:44:57.022130 kubelet[2107]: E0130 13:44:57.022103 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:44:57.024210 containerd[1468]: time="2025-01-30T13:44:57.024168425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 30 13:44:57.144172 kubelet[2107]: W0130 13:44:57.144078 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Jan 30 13:44:57.144172 kubelet[2107]: E0130 13:44:57.144115 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:44:57.266549 kubelet[2107]: W0130 13:44:57.266472 2107 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Jan 30 13:44:57.266604 kubelet[2107]: E0130 13:44:57.266554 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:44:57.327037 kubelet[2107]: I0130 13:44:57.326999 2107 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:44:57.327336 kubelet[2107]: E0130 13:44:57.327299 2107 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Jan 30 13:44:57.354048 kubelet[2107]: E0130 13:44:57.353994 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="1.6s" Jan 30 13:44:57.640725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660091617.mount: Deactivated successfully. 
Jan 30 13:44:57.646097 containerd[1468]: time="2025-01-30T13:44:57.646062886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:44:57.646803 containerd[1468]: time="2025-01-30T13:44:57.646743271Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 30 13:44:57.647715 containerd[1468]: time="2025-01-30T13:44:57.647653108Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:44:57.648534 containerd[1468]: time="2025-01-30T13:44:57.648505403Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:44:57.649219 containerd[1468]: time="2025-01-30T13:44:57.649177309Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:44:57.650018 containerd[1468]: time="2025-01-30T13:44:57.649995070Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:44:57.651034 containerd[1468]: time="2025-01-30T13:44:57.650979358Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:44:57.653735 containerd[1468]: time="2025-01-30T13:44:57.653698544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:44:57.655406 containerd[1468]: time="2025-01-30T13:44:57.655380450Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 637.572271ms"
Jan 30 13:44:57.656709 containerd[1468]: time="2025-01-30T13:44:57.656662447Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 650.501695ms"
Jan 30 13:44:57.659272 containerd[1468]: time="2025-01-30T13:44:57.659231626Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.993319ms"
Jan 30 13:44:57.772535 containerd[1468]: time="2025-01-30T13:44:57.772349088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:44:57.772535 containerd[1468]: time="2025-01-30T13:44:57.772431009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:44:57.772535 containerd[1468]: time="2025-01-30T13:44:57.772446186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:44:57.773151 containerd[1468]: time="2025-01-30T13:44:57.772521230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:44:57.774030 containerd[1468]: time="2025-01-30T13:44:57.773863916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:44:57.774030 containerd[1468]: time="2025-01-30T13:44:57.773911332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:44:57.774030 containerd[1468]: time="2025-01-30T13:44:57.773924955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:44:57.774030 containerd[1468]: time="2025-01-30T13:44:57.774004058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:44:57.775064 containerd[1468]: time="2025-01-30T13:44:57.774296695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:44:57.775064 containerd[1468]: time="2025-01-30T13:44:57.775018592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:44:57.775064 containerd[1468]: time="2025-01-30T13:44:57.775030671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:44:57.775229 containerd[1468]: time="2025-01-30T13:44:57.775116772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:44:57.804828 systemd[1]: Started cri-containerd-4001ee6d6f2b9061a57db2159febdfe87abf5a3f80068c42d418cfe58170233e.scope - libcontainer container 4001ee6d6f2b9061a57db2159febdfe87abf5a3f80068c42d418cfe58170233e.
Jan 30 13:44:57.806655 systemd[1]: Started cri-containerd-c6f346edad2a9fc814457415cc3f258316ccdc152e8f229b6b908499d57a70dd.scope - libcontainer container c6f346edad2a9fc814457415cc3f258316ccdc152e8f229b6b908499d57a70dd.
Jan 30 13:44:57.810820 systemd[1]: Started cri-containerd-2fb96fa4973077901430089fc87d0f6b53e604644bebae583f166fc645bb776d.scope - libcontainer container 2fb96fa4973077901430089fc87d0f6b53e604644bebae583f166fc645bb776d.
Jan 30 13:44:57.843363 containerd[1468]: time="2025-01-30T13:44:57.843287222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4001ee6d6f2b9061a57db2159febdfe87abf5a3f80068c42d418cfe58170233e\""
Jan 30 13:44:57.845052 kubelet[2107]: E0130 13:44:57.844965 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:44:57.847489 containerd[1468]: time="2025-01-30T13:44:57.847448115Z" level=info msg="CreateContainer within sandbox \"4001ee6d6f2b9061a57db2159febdfe87abf5a3f80068c42d418cfe58170233e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 13:44:57.847848 containerd[1468]: time="2025-01-30T13:44:57.847789350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6f346edad2a9fc814457415cc3f258316ccdc152e8f229b6b908499d57a70dd\""
Jan 30 13:44:57.848473 kubelet[2107]: E0130 13:44:57.848436 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:44:57.850233 containerd[1468]: time="2025-01-30T13:44:57.850180843Z" level=info msg="CreateContainer within sandbox \"c6f346edad2a9fc814457415cc3f258316ccdc152e8f229b6b908499d57a70dd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 13:44:57.855513 containerd[1468]: time="2025-01-30T13:44:57.855472604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b7c86b90fcf8ce890bc1aef7faeeb0a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fb96fa4973077901430089fc87d0f6b53e604644bebae583f166fc645bb776d\""
Jan 30 13:44:57.856085 kubelet[2107]: E0130 13:44:57.856060 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:44:57.857439 containerd[1468]: time="2025-01-30T13:44:57.857393625Z" level=info msg="CreateContainer within sandbox \"2fb96fa4973077901430089fc87d0f6b53e604644bebae583f166fc645bb776d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 13:44:57.871107 containerd[1468]: time="2025-01-30T13:44:57.871037507Z" level=info msg="CreateContainer within sandbox \"4001ee6d6f2b9061a57db2159febdfe87abf5a3f80068c42d418cfe58170233e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9cefc197eb297625ed610cf2d0c2c1de5aa1b42f18370449c47a24c09e0f8f0b\""
Jan 30 13:44:57.871525 containerd[1468]: time="2025-01-30T13:44:57.871499548Z" level=info msg="StartContainer for \"9cefc197eb297625ed610cf2d0c2c1de5aa1b42f18370449c47a24c09e0f8f0b\""
Jan 30 13:44:57.875410 containerd[1468]: time="2025-01-30T13:44:57.875285343Z" level=info msg="CreateContainer within sandbox \"c6f346edad2a9fc814457415cc3f258316ccdc152e8f229b6b908499d57a70dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0c34346513146d5077afd7e3889044536f9d95c8d56a99c475701e48974e91d0\""
Jan 30 13:44:57.875799 containerd[1468]: time="2025-01-30T13:44:57.875760296Z" level=info msg="StartContainer for \"0c34346513146d5077afd7e3889044536f9d95c8d56a99c475701e48974e91d0\""
Jan 30 13:44:57.880145 containerd[1468]: time="2025-01-30T13:44:57.880089050Z" level=info msg="CreateContainer within sandbox \"2fb96fa4973077901430089fc87d0f6b53e604644bebae583f166fc645bb776d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7bc1c0f75ba5917adfbff303510520c138d8562f7817affc08ece60d4a7ff0c5\""
Jan 30 13:44:57.880740 containerd[1468]: time="2025-01-30T13:44:57.880434675Z" level=info msg="StartContainer for \"7bc1c0f75ba5917adfbff303510520c138d8562f7817affc08ece60d4a7ff0c5\""
Jan 30 13:44:57.909831 systemd[1]: Started cri-containerd-9cefc197eb297625ed610cf2d0c2c1de5aa1b42f18370449c47a24c09e0f8f0b.scope - libcontainer container 9cefc197eb297625ed610cf2d0c2c1de5aa1b42f18370449c47a24c09e0f8f0b.
Jan 30 13:44:57.920867 systemd[1]: Started cri-containerd-0c34346513146d5077afd7e3889044536f9d95c8d56a99c475701e48974e91d0.scope - libcontainer container 0c34346513146d5077afd7e3889044536f9d95c8d56a99c475701e48974e91d0.
Jan 30 13:44:57.922038 systemd[1]: Started cri-containerd-7bc1c0f75ba5917adfbff303510520c138d8562f7817affc08ece60d4a7ff0c5.scope - libcontainer container 7bc1c0f75ba5917adfbff303510520c138d8562f7817affc08ece60d4a7ff0c5.
Jan 30 13:44:57.960617 containerd[1468]: time="2025-01-30T13:44:57.960547384Z" level=info msg="StartContainer for \"9cefc197eb297625ed610cf2d0c2c1de5aa1b42f18370449c47a24c09e0f8f0b\" returns successfully"
Jan 30 13:44:57.970381 containerd[1468]: time="2025-01-30T13:44:57.970336210Z" level=info msg="StartContainer for \"0c34346513146d5077afd7e3889044536f9d95c8d56a99c475701e48974e91d0\" returns successfully"
Jan 30 13:44:57.977748 containerd[1468]: time="2025-01-30T13:44:57.975538533Z" level=info msg="StartContainer for \"7bc1c0f75ba5917adfbff303510520c138d8562f7817affc08ece60d4a7ff0c5\" returns successfully"
Jan 30 13:44:57.979304 kubelet[2107]: E0130 13:44:57.979280 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:44:57.981944 kubelet[2107]: E0130 13:44:57.981909 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:44:57.984734 kubelet[2107]: E0130 13:44:57.984708 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:44:58.130897 kubelet[2107]: I0130 13:44:58.130293 2107 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 30 13:44:58.985513 kubelet[2107]: E0130 13:44:58.985484 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:44:59.229088 kubelet[2107]: E0130 13:44:59.229017 2107 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 30 13:44:59.286220 kubelet[2107]: E0130 13:44:59.286127 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:44:59.320074 kubelet[2107]: I0130 13:44:59.320009 2107 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jan 30 13:44:59.320074 kubelet[2107]: E0130 13:44:59.320067 2107 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 30 13:44:59.328021 kubelet[2107]: E0130 13:44:59.327982 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:44:59.428747 kubelet[2107]: E0130 13:44:59.428710 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:44:59.529828 kubelet[2107]: E0130 13:44:59.529783 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:44:59.629927 kubelet[2107]: E0130 13:44:59.629857 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:44:59.730518 kubelet[2107]: E0130 13:44:59.730456 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:44:59.831165 kubelet[2107]: E0130 13:44:59.831106 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:44:59.932080 kubelet[2107]: E0130 13:44:59.931971 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:44:59.987034 kubelet[2107]: E0130 13:44:59.986989 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:00.032160 kubelet[2107]: E0130 13:45:00.032108 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:45:00.936254 kubelet[2107]: I0130 13:45:00.936188 2107 apiserver.go:52] "Watching apiserver"
Jan 30 13:45:00.951257 kubelet[2107]: I0130 13:45:00.951209 2107 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 13:45:01.132969 systemd[1]: Reloading requested from client PID 2383 ('systemctl') (unit session-7.scope)...
Jan 30 13:45:01.132983 systemd[1]: Reloading...
Jan 30 13:45:01.210708 zram_generator::config[2425]: No configuration found.
Jan 30 13:45:01.319644 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:45:01.411427 systemd[1]: Reloading finished in 278 ms.
Jan 30 13:45:01.458964 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:45:01.483162 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 13:45:01.483482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:45:01.494866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:45:01.635577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:45:01.640077 (kubelet)[2467]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:45:01.672124 kubelet[2467]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:45:01.672124 kubelet[2467]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:45:01.672124 kubelet[2467]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:45:01.672502 kubelet[2467]: I0130 13:45:01.672169 2467 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:45:01.678003 kubelet[2467]: I0130 13:45:01.677961 2467 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 30 13:45:01.678003 kubelet[2467]: I0130 13:45:01.677990 2467 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:45:01.678285 kubelet[2467]: I0130 13:45:01.678265 2467 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 30 13:45:01.680809 kubelet[2467]: I0130 13:45:01.680769 2467 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 13:45:01.682701 kubelet[2467]: I0130 13:45:01.682682 2467 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:45:01.685261 kubelet[2467]: E0130 13:45:01.685215 2467 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 13:45:01.685261 kubelet[2467]: I0130 13:45:01.685241 2467 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 13:45:01.689659 kubelet[2467]: I0130 13:45:01.689640 2467 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:45:01.689782 kubelet[2467]: I0130 13:45:01.689766 2467 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 13:45:01.689910 kubelet[2467]: I0130 13:45:01.689881 2467 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:45:01.690053 kubelet[2467]: I0130 13:45:01.689908 2467 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 13:45:01.690143 kubelet[2467]: I0130 13:45:01.690056 2467 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:45:01.690143 kubelet[2467]: I0130 13:45:01.690065 2467 container_manager_linux.go:300] "Creating device plugin manager"
Jan 30 13:45:01.690143 kubelet[2467]: I0130 13:45:01.690101 2467 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:45:01.690208 kubelet[2467]: I0130 13:45:01.690196 2467 kubelet.go:408] "Attempting to sync node with API server"
Jan 30 13:45:01.690208 kubelet[2467]: I0130 13:45:01.690206 2467 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:45:01.690250 kubelet[2467]: I0130 13:45:01.690234 2467 kubelet.go:314] "Adding apiserver pod source"
Jan 30 13:45:01.690250 kubelet[2467]: I0130 13:45:01.690247 2467 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:45:01.691054 kubelet[2467]: I0130 13:45:01.691007 2467 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 13:45:01.692038 kubelet[2467]: I0130 13:45:01.691346 2467 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:45:01.692038 kubelet[2467]: I0130 13:45:01.691773 2467 server.go:1269] "Started kubelet"
Jan 30 13:45:01.692038 kubelet[2467]: I0130 13:45:01.691984 2467 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:45:01.692372 kubelet[2467]: I0130 13:45:01.692264 2467 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:45:01.692745 kubelet[2467]: I0130 13:45:01.692553 2467 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:45:01.696032 kubelet[2467]: E0130 13:45:01.696017 2467 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:45:01.696856 kubelet[2467]: I0130 13:45:01.696835 2467 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:45:01.697846 kubelet[2467]: I0130 13:45:01.697830 2467 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 13:45:01.698956 kubelet[2467]: I0130 13:45:01.698943 2467 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 13:45:01.699209 kubelet[2467]: E0130 13:45:01.699193 2467 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:45:01.699568 kubelet[2467]: I0130 13:45:01.699554 2467 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 13:45:01.703208 kubelet[2467]: I0130 13:45:01.703125 2467 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:45:01.703246 kubelet[2467]: I0130 13:45:01.703220 2467 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:45:01.703417 kubelet[2467]: I0130 13:45:01.702924 2467 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 13:45:01.705306 kubelet[2467]: I0130 13:45:01.699925 2467 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:45:01.705306 kubelet[2467]: I0130 13:45:01.704440 2467 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:45:01.715304 kubelet[2467]: I0130 13:45:01.715274 2467 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:45:01.716619 kubelet[2467]: I0130 13:45:01.716592 2467 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:45:01.716619 kubelet[2467]: I0130 13:45:01.716621 2467 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:45:01.716694 kubelet[2467]: I0130 13:45:01.716638 2467 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 30 13:45:01.716721 kubelet[2467]: E0130 13:45:01.716696 2467 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:45:01.742052 kubelet[2467]: I0130 13:45:01.742020 2467 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:45:01.742052 kubelet[2467]: I0130 13:45:01.742042 2467 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:45:01.742052 kubelet[2467]: I0130 13:45:01.742061 2467 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:45:01.742233 kubelet[2467]: I0130 13:45:01.742214 2467 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 30 13:45:01.742255 kubelet[2467]: I0130 13:45:01.742231 2467 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 30 13:45:01.742255 kubelet[2467]: I0130 13:45:01.742250 2467 policy_none.go:49] "None policy: Start"
Jan 30 13:45:01.742912 kubelet[2467]: I0130 13:45:01.742880 2467 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:45:01.742912 kubelet[2467]: I0130 13:45:01.742906 2467 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:45:01.743165 kubelet[2467]: I0130 13:45:01.743144 2467 state_mem.go:75] "Updated machine memory state"
Jan 30 13:45:01.747482 kubelet[2467]: I0130 13:45:01.747451 2467 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:45:01.747647 kubelet[2467]: I0130 13:45:01.747628 2467 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 13:45:01.747741 kubelet[2467]: I0130 13:45:01.747646 2467 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:45:01.748283 kubelet[2467]: I0130 13:45:01.748210 2467 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:45:01.853642 kubelet[2467]: I0130 13:45:01.853596 2467 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 30 13:45:01.859032 kubelet[2467]: I0130 13:45:01.859008 2467 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jan 30 13:45:01.859173 kubelet[2467]: I0130 13:45:01.859157 2467 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jan 30 13:45:01.905295 kubelet[2467]: I0130 13:45:01.905168 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7c86b90fcf8ce890bc1aef7faeeb0a3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b7c86b90fcf8ce890bc1aef7faeeb0a3\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:45:01.905295 kubelet[2467]: I0130 13:45:01.905206 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:45:01.905295 kubelet[2467]: I0130 13:45:01.905224 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:45:01.905295 kubelet[2467]: I0130 13:45:01.905242 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:45:01.905295 kubelet[2467]: I0130 13:45:01.905257 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:45:01.905520 kubelet[2467]: I0130 13:45:01.905274 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost"
Jan 30 13:45:01.905520 kubelet[2467]: I0130 13:45:01.905294 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7c86b90fcf8ce890bc1aef7faeeb0a3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7c86b90fcf8ce890bc1aef7faeeb0a3\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:45:01.905520 kubelet[2467]: I0130 13:45:01.905338 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:45:01.905520 kubelet[2467]: I0130 13:45:01.905373 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7c86b90fcf8ce890bc1aef7faeeb0a3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7c86b90fcf8ce890bc1aef7faeeb0a3\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:45:02.125451 kubelet[2467]: E0130 13:45:02.125176 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:02.125451 kubelet[2467]: E0130 13:45:02.125364 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:02.125713 kubelet[2467]: E0130 13:45:02.125660 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:02.691137 kubelet[2467]: I0130 13:45:02.691095 2467 apiserver.go:52] "Watching apiserver"
Jan 30 13:45:02.704260 kubelet[2467]: I0130 13:45:02.704214 2467 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 13:45:02.727390 kubelet[2467]: E0130 13:45:02.727012 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:02.727390 kubelet[2467]: E0130 13:45:02.727230 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:02.732581 kubelet[2467]: E0130 13:45:02.731887 2467 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 30 13:45:02.732581 kubelet[2467]: E0130 13:45:02.732015 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:02.743696 kubelet[2467]: I0130 13:45:02.743502 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7434783710000001 podStartE2EDuration="1.743478371s" podCreationTimestamp="2025-01-30 13:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:45:02.743436394 +0000 UTC m=+1.099503087" watchObservedRunningTime="2025-01-30 13:45:02.743478371 +0000 UTC m=+1.099545064"
Jan 30 13:45:02.757770 kubelet[2467]: I0130 13:45:02.757568 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.757536878 podStartE2EDuration="1.757536878s" podCreationTimestamp="2025-01-30 13:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:45:02.757046324 +0000 UTC m=+1.113113017" watchObservedRunningTime="2025-01-30 13:45:02.757536878 +0000 UTC m=+1.113603571"
Jan 30 13:45:02.757770 kubelet[2467]: I0130 13:45:02.757701 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.757696294 podStartE2EDuration="1.757696294s" podCreationTimestamp="2025-01-30 13:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:45:02.749008016 +0000 UTC m=+1.105074709" watchObservedRunningTime="2025-01-30 13:45:02.757696294 +0000 UTC m=+1.113762977"
Jan 30 13:45:02.772935 sudo[1620]: pam_unix(sudo:session): session closed for user root
Jan 30 13:45:02.774541 sshd[1617]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:02.778212 systemd[1]: sshd@6-10.0.0.96:22-10.0.0.1:43802.service: Deactivated successfully.
Jan 30 13:45:02.780233 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 13:45:02.780415 systemd[1]: session-7.scope: Consumed 3.638s CPU time, 159.2M memory peak, 0B memory swap peak.
Jan 30 13:45:02.780895 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit.
Jan 30 13:45:02.781697 systemd-logind[1456]: Removed session 7.
Jan 30 13:45:03.728820 kubelet[2467]: E0130 13:45:03.728773 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:03.729190 kubelet[2467]: E0130 13:45:03.728839 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:06.452851 kubelet[2467]: I0130 13:45:06.452817 2467 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 13:45:06.453238 containerd[1468]: time="2025-01-30T13:45:06.453169674Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 13:45:06.453486 kubelet[2467]: I0130 13:45:06.453329 2467 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 13:45:07.338566 systemd[1]: Created slice kubepods-besteffort-pod0e87f6b9_37f3_4344_8a5a_62df286469a3.slice - libcontainer container kubepods-besteffort-pod0e87f6b9_37f3_4344_8a5a_62df286469a3.slice.
Jan 30 13:45:07.343756 kubelet[2467]: I0130 13:45:07.343718 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa5b56b5-996a-4c9f-9b62-ae6be74adf5b-xtables-lock\") pod \"kube-flannel-ds-8wr7h\" (UID: \"aa5b56b5-996a-4c9f-9b62-ae6be74adf5b\") " pod="kube-flannel/kube-flannel-ds-8wr7h" Jan 30 13:45:07.343756 kubelet[2467]: I0130 13:45:07.343746 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vctt\" (UniqueName: \"kubernetes.io/projected/0e87f6b9-37f3-4344-8a5a-62df286469a3-kube-api-access-4vctt\") pod \"kube-proxy-jkng9\" (UID: \"0e87f6b9-37f3-4344-8a5a-62df286469a3\") " pod="kube-system/kube-proxy-jkng9" Jan 30 13:45:07.343756 kubelet[2467]: I0130 13:45:07.343765 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq8dn\" (UniqueName: \"kubernetes.io/projected/aa5b56b5-996a-4c9f-9b62-ae6be74adf5b-kube-api-access-cq8dn\") pod \"kube-flannel-ds-8wr7h\" (UID: \"aa5b56b5-996a-4c9f-9b62-ae6be74adf5b\") " pod="kube-flannel/kube-flannel-ds-8wr7h" Jan 30 13:45:07.343756 kubelet[2467]: I0130 13:45:07.343780 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0e87f6b9-37f3-4344-8a5a-62df286469a3-kube-proxy\") pod \"kube-proxy-jkng9\" (UID: \"0e87f6b9-37f3-4344-8a5a-62df286469a3\") " pod="kube-system/kube-proxy-jkng9" Jan 30 13:45:07.343756 kubelet[2467]: I0130 13:45:07.343794 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e87f6b9-37f3-4344-8a5a-62df286469a3-lib-modules\") pod \"kube-proxy-jkng9\" (UID: \"0e87f6b9-37f3-4344-8a5a-62df286469a3\") " pod="kube-system/kube-proxy-jkng9" Jan 30 13:45:07.344210 kubelet[2467]: I0130 
13:45:07.343807 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/aa5b56b5-996a-4c9f-9b62-ae6be74adf5b-run\") pod \"kube-flannel-ds-8wr7h\" (UID: \"aa5b56b5-996a-4c9f-9b62-ae6be74adf5b\") " pod="kube-flannel/kube-flannel-ds-8wr7h" Jan 30 13:45:07.344210 kubelet[2467]: I0130 13:45:07.343821 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/aa5b56b5-996a-4c9f-9b62-ae6be74adf5b-cni-plugin\") pod \"kube-flannel-ds-8wr7h\" (UID: \"aa5b56b5-996a-4c9f-9b62-ae6be74adf5b\") " pod="kube-flannel/kube-flannel-ds-8wr7h" Jan 30 13:45:07.344210 kubelet[2467]: I0130 13:45:07.343836 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e87f6b9-37f3-4344-8a5a-62df286469a3-xtables-lock\") pod \"kube-proxy-jkng9\" (UID: \"0e87f6b9-37f3-4344-8a5a-62df286469a3\") " pod="kube-system/kube-proxy-jkng9" Jan 30 13:45:07.344210 kubelet[2467]: I0130 13:45:07.343862 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/aa5b56b5-996a-4c9f-9b62-ae6be74adf5b-cni\") pod \"kube-flannel-ds-8wr7h\" (UID: \"aa5b56b5-996a-4c9f-9b62-ae6be74adf5b\") " pod="kube-flannel/kube-flannel-ds-8wr7h" Jan 30 13:45:07.344210 kubelet[2467]: I0130 13:45:07.343892 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/aa5b56b5-996a-4c9f-9b62-ae6be74adf5b-flannel-cfg\") pod \"kube-flannel-ds-8wr7h\" (UID: \"aa5b56b5-996a-4c9f-9b62-ae6be74adf5b\") " pod="kube-flannel/kube-flannel-ds-8wr7h" Jan 30 13:45:07.349759 systemd[1]: Created slice kubepods-burstable-podaa5b56b5_996a_4c9f_9b62_ae6be74adf5b.slice - libcontainer container 
kubepods-burstable-podaa5b56b5_996a_4c9f_9b62_ae6be74adf5b.slice. Jan 30 13:45:07.647939 kubelet[2467]: E0130 13:45:07.647817 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:07.648385 containerd[1468]: time="2025-01-30T13:45:07.648345914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jkng9,Uid:0e87f6b9-37f3-4344-8a5a-62df286469a3,Namespace:kube-system,Attempt:0,}" Jan 30 13:45:07.653058 kubelet[2467]: E0130 13:45:07.653017 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:07.653722 containerd[1468]: time="2025-01-30T13:45:07.653462718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-8wr7h,Uid:aa5b56b5-996a-4c9f-9b62-ae6be74adf5b,Namespace:kube-flannel,Attempt:0,}" Jan 30 13:45:07.680213 containerd[1468]: time="2025-01-30T13:45:07.679888615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:45:07.680213 containerd[1468]: time="2025-01-30T13:45:07.679945970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:45:07.680213 containerd[1468]: time="2025-01-30T13:45:07.679959521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:07.680213 containerd[1468]: time="2025-01-30T13:45:07.680057355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:07.688018 containerd[1468]: time="2025-01-30T13:45:07.687134275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:45:07.688018 containerd[1468]: time="2025-01-30T13:45:07.687937796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:45:07.688018 containerd[1468]: time="2025-01-30T13:45:07.687980740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:07.689556 containerd[1468]: time="2025-01-30T13:45:07.688086362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:07.698896 systemd[1]: Started cri-containerd-c6bd16374f5f82156616743acf86fb439acbbc57f6d85bbdc6b03aa116217910.scope - libcontainer container c6bd16374f5f82156616743acf86fb439acbbc57f6d85bbdc6b03aa116217910. Jan 30 13:45:07.706743 systemd[1]: Started cri-containerd-039abcd42a4366e07734463757cfb7e347dec3af24959129e0c23ff66dfbe78f.scope - libcontainer container 039abcd42a4366e07734463757cfb7e347dec3af24959129e0c23ff66dfbe78f. 
Jan 30 13:45:07.723347 containerd[1468]: time="2025-01-30T13:45:07.723307846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jkng9,Uid:0e87f6b9-37f3-4344-8a5a-62df286469a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6bd16374f5f82156616743acf86fb439acbbc57f6d85bbdc6b03aa116217910\"" Jan 30 13:45:07.724330 kubelet[2467]: E0130 13:45:07.724294 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:07.728425 containerd[1468]: time="2025-01-30T13:45:07.728389343Z" level=info msg="CreateContainer within sandbox \"c6bd16374f5f82156616743acf86fb439acbbc57f6d85bbdc6b03aa116217910\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:45:07.749222 containerd[1468]: time="2025-01-30T13:45:07.749193128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-8wr7h,Uid:aa5b56b5-996a-4c9f-9b62-ae6be74adf5b,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"039abcd42a4366e07734463757cfb7e347dec3af24959129e0c23ff66dfbe78f\"" Jan 30 13:45:07.749663 kubelet[2467]: E0130 13:45:07.749626 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:07.751022 containerd[1468]: time="2025-01-30T13:45:07.750788193Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 30 13:45:07.752221 containerd[1468]: time="2025-01-30T13:45:07.752166475Z" level=info msg="CreateContainer within sandbox \"c6bd16374f5f82156616743acf86fb439acbbc57f6d85bbdc6b03aa116217910\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fe0de793b1f11d561ee0a600b73929040093626ff41c59b71503c36bc5595cc6\"" Jan 30 13:45:07.752717 containerd[1468]: time="2025-01-30T13:45:07.752627585Z" level=info msg="StartContainer for 
\"fe0de793b1f11d561ee0a600b73929040093626ff41c59b71503c36bc5595cc6\"" Jan 30 13:45:07.788812 systemd[1]: Started cri-containerd-fe0de793b1f11d561ee0a600b73929040093626ff41c59b71503c36bc5595cc6.scope - libcontainer container fe0de793b1f11d561ee0a600b73929040093626ff41c59b71503c36bc5595cc6. Jan 30 13:45:07.816332 containerd[1468]: time="2025-01-30T13:45:07.816292969Z" level=info msg="StartContainer for \"fe0de793b1f11d561ee0a600b73929040093626ff41c59b71503c36bc5595cc6\" returns successfully" Jan 30 13:45:08.684262 kubelet[2467]: E0130 13:45:08.684209 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:08.736541 kubelet[2467]: E0130 13:45:08.736504 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:08.737535 kubelet[2467]: E0130 13:45:08.737491 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:08.747130 kubelet[2467]: I0130 13:45:08.747059 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jkng9" podStartSLOduration=1.747039666 podStartE2EDuration="1.747039666s" podCreationTimestamp="2025-01-30 13:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:45:08.746771343 +0000 UTC m=+7.102838046" watchObservedRunningTime="2025-01-30 13:45:08.747039666 +0000 UTC m=+7.103106359" Jan 30 13:45:09.474578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3337641115.mount: Deactivated successfully. 
Jan 30 13:45:09.509727 containerd[1468]: time="2025-01-30T13:45:09.509638408Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:09.512480 containerd[1468]: time="2025-01-30T13:45:09.512404001Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852936" Jan 30 13:45:09.513449 containerd[1468]: time="2025-01-30T13:45:09.513414664Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:09.515626 containerd[1468]: time="2025-01-30T13:45:09.515552301Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:09.516336 containerd[1468]: time="2025-01-30T13:45:09.516308904Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.76549262s" Jan 30 13:45:09.516393 containerd[1468]: time="2025-01-30T13:45:09.516337285Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 30 13:45:09.518456 containerd[1468]: time="2025-01-30T13:45:09.518401393Z" level=info msg="CreateContainer within sandbox \"039abcd42a4366e07734463757cfb7e347dec3af24959129e0c23ff66dfbe78f\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 30 13:45:09.531545 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1658867727.mount: Deactivated successfully. Jan 30 13:45:09.532527 containerd[1468]: time="2025-01-30T13:45:09.532480890Z" level=info msg="CreateContainer within sandbox \"039abcd42a4366e07734463757cfb7e347dec3af24959129e0c23ff66dfbe78f\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"14fd9e075ac8fad9c93fae4f6f560f136664e68187eeacfd3536e3566b4d941d\"" Jan 30 13:45:09.532948 containerd[1468]: time="2025-01-30T13:45:09.532918405Z" level=info msg="StartContainer for \"14fd9e075ac8fad9c93fae4f6f560f136664e68187eeacfd3536e3566b4d941d\"" Jan 30 13:45:09.565818 systemd[1]: Started cri-containerd-14fd9e075ac8fad9c93fae4f6f560f136664e68187eeacfd3536e3566b4d941d.scope - libcontainer container 14fd9e075ac8fad9c93fae4f6f560f136664e68187eeacfd3536e3566b4d941d. Jan 30 13:45:09.590884 systemd[1]: cri-containerd-14fd9e075ac8fad9c93fae4f6f560f136664e68187eeacfd3536e3566b4d941d.scope: Deactivated successfully. Jan 30 13:45:09.603834 containerd[1468]: time="2025-01-30T13:45:09.603788839Z" level=info msg="StartContainer for \"14fd9e075ac8fad9c93fae4f6f560f136664e68187eeacfd3536e3566b4d941d\" returns successfully" Jan 30 13:45:09.644469 containerd[1468]: time="2025-01-30T13:45:09.644402058Z" level=info msg="shim disconnected" id=14fd9e075ac8fad9c93fae4f6f560f136664e68187eeacfd3536e3566b4d941d namespace=k8s.io Jan 30 13:45:09.644469 containerd[1468]: time="2025-01-30T13:45:09.644456386Z" level=warning msg="cleaning up after shim disconnected" id=14fd9e075ac8fad9c93fae4f6f560f136664e68187eeacfd3536e3566b4d941d namespace=k8s.io Jan 30 13:45:09.644469 containerd[1468]: time="2025-01-30T13:45:09.644465054Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:45:09.740332 kubelet[2467]: E0130 13:45:09.740209 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 
13:45:09.740332 kubelet[2467]: E0130 13:45:09.740245 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:09.741102 containerd[1468]: time="2025-01-30T13:45:09.740906853Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 30 13:45:10.282922 kubelet[2467]: E0130 13:45:10.282887 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:10.474618 systemd[1]: run-containerd-runc-k8s.io-14fd9e075ac8fad9c93fae4f6f560f136664e68187eeacfd3536e3566b4d941d-runc.fmb35a.mount: Deactivated successfully. Jan 30 13:45:10.474731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14fd9e075ac8fad9c93fae4f6f560f136664e68187eeacfd3536e3566b4d941d-rootfs.mount: Deactivated successfully. Jan 30 13:45:10.741886 kubelet[2467]: E0130 13:45:10.741847 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:11.528482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3965529882.mount: Deactivated successfully. 
Jan 30 13:45:12.066941 containerd[1468]: time="2025-01-30T13:45:12.066900353Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:12.067657 containerd[1468]: time="2025-01-30T13:45:12.067591556Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866357" Jan 30 13:45:12.068963 containerd[1468]: time="2025-01-30T13:45:12.068918558Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:12.082358 containerd[1468]: time="2025-01-30T13:45:12.082271248Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:12.083485 containerd[1468]: time="2025-01-30T13:45:12.083427198Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.342472542s" Jan 30 13:45:12.083485 containerd[1468]: time="2025-01-30T13:45:12.083484590Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 30 13:45:12.086592 containerd[1468]: time="2025-01-30T13:45:12.086558261Z" level=info msg="CreateContainer within sandbox \"039abcd42a4366e07734463757cfb7e347dec3af24959129e0c23ff66dfbe78f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:45:12.317980 containerd[1468]: time="2025-01-30T13:45:12.317859597Z" level=info msg="CreateContainer within 
sandbox \"039abcd42a4366e07734463757cfb7e347dec3af24959129e0c23ff66dfbe78f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"01defc6849d7c0dd5accafbcdbfbeb3edee51be0b5be0619bcc0ec51eaa857bf\"" Jan 30 13:45:12.318917 containerd[1468]: time="2025-01-30T13:45:12.318734881Z" level=info msg="StartContainer for \"01defc6849d7c0dd5accafbcdbfbeb3edee51be0b5be0619bcc0ec51eaa857bf\"" Jan 30 13:45:12.350817 systemd[1]: Started cri-containerd-01defc6849d7c0dd5accafbcdbfbeb3edee51be0b5be0619bcc0ec51eaa857bf.scope - libcontainer container 01defc6849d7c0dd5accafbcdbfbeb3edee51be0b5be0619bcc0ec51eaa857bf. Jan 30 13:45:12.372662 systemd[1]: cri-containerd-01defc6849d7c0dd5accafbcdbfbeb3edee51be0b5be0619bcc0ec51eaa857bf.scope: Deactivated successfully. Jan 30 13:45:12.381390 kubelet[2467]: I0130 13:45:12.381356 2467 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 13:45:12.391177 containerd[1468]: time="2025-01-30T13:45:12.391132339Z" level=info msg="StartContainer for \"01defc6849d7c0dd5accafbcdbfbeb3edee51be0b5be0619bcc0ec51eaa857bf\" returns successfully" Jan 30 13:45:12.417378 systemd[1]: Created slice kubepods-burstable-pod1b0041a6_7b07_4b41_a769_12b880cf4a99.slice - libcontainer container kubepods-burstable-pod1b0041a6_7b07_4b41_a769_12b880cf4a99.slice. 
Jan 30 13:45:12.418698 containerd[1468]: time="2025-01-30T13:45:12.418346627Z" level=info msg="shim disconnected" id=01defc6849d7c0dd5accafbcdbfbeb3edee51be0b5be0619bcc0ec51eaa857bf namespace=k8s.io Jan 30 13:45:12.418698 containerd[1468]: time="2025-01-30T13:45:12.418413209Z" level=warning msg="cleaning up after shim disconnected" id=01defc6849d7c0dd5accafbcdbfbeb3edee51be0b5be0619bcc0ec51eaa857bf namespace=k8s.io Jan 30 13:45:12.418698 containerd[1468]: time="2025-01-30T13:45:12.418421697Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:45:12.422956 systemd[1]: Created slice kubepods-burstable-pod42ebc054_39b1_437c_a611_16b58c046b2d.slice - libcontainer container kubepods-burstable-pod42ebc054_39b1_437c_a611_16b58c046b2d.slice. Jan 30 13:45:12.476523 kubelet[2467]: I0130 13:45:12.476475 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b0041a6-7b07-4b41-a769-12b880cf4a99-config-volume\") pod \"coredns-6f6b679f8f-q55fx\" (UID: \"1b0041a6-7b07-4b41-a769-12b880cf4a99\") " pod="kube-system/coredns-6f6b679f8f-q55fx" Jan 30 13:45:12.476523 kubelet[2467]: I0130 13:45:12.476520 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlpsg\" (UniqueName: \"kubernetes.io/projected/42ebc054-39b1-437c-a611-16b58c046b2d-kube-api-access-jlpsg\") pod \"coredns-6f6b679f8f-tkttg\" (UID: \"42ebc054-39b1-437c-a611-16b58c046b2d\") " pod="kube-system/coredns-6f6b679f8f-tkttg" Jan 30 13:45:12.476523 kubelet[2467]: I0130 13:45:12.476542 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h96pg\" (UniqueName: \"kubernetes.io/projected/1b0041a6-7b07-4b41-a769-12b880cf4a99-kube-api-access-h96pg\") pod \"coredns-6f6b679f8f-q55fx\" (UID: \"1b0041a6-7b07-4b41-a769-12b880cf4a99\") " pod="kube-system/coredns-6f6b679f8f-q55fx" Jan 30 
13:45:12.476794 kubelet[2467]: I0130 13:45:12.476556 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42ebc054-39b1-437c-a611-16b58c046b2d-config-volume\") pod \"coredns-6f6b679f8f-tkttg\" (UID: \"42ebc054-39b1-437c-a611-16b58c046b2d\") " pod="kube-system/coredns-6f6b679f8f-tkttg" Jan 30 13:45:12.528443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01defc6849d7c0dd5accafbcdbfbeb3edee51be0b5be0619bcc0ec51eaa857bf-rootfs.mount: Deactivated successfully. Jan 30 13:45:12.721179 kubelet[2467]: E0130 13:45:12.721032 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:12.721784 containerd[1468]: time="2025-01-30T13:45:12.721647769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q55fx,Uid:1b0041a6-7b07-4b41-a769-12b880cf4a99,Namespace:kube-system,Attempt:0,}" Jan 30 13:45:12.727087 kubelet[2467]: E0130 13:45:12.727054 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:12.727630 containerd[1468]: time="2025-01-30T13:45:12.727571389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tkttg,Uid:42ebc054-39b1-437c-a611-16b58c046b2d,Namespace:kube-system,Attempt:0,}" Jan 30 13:45:12.745802 kubelet[2467]: E0130 13:45:12.745770 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:12.747704 containerd[1468]: time="2025-01-30T13:45:12.747658417Z" level=info msg="CreateContainer within sandbox \"039abcd42a4366e07734463757cfb7e347dec3af24959129e0c23ff66dfbe78f\" for container 
&ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 30 13:45:12.815259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2359419336.mount: Deactivated successfully. Jan 30 13:45:12.822791 containerd[1468]: time="2025-01-30T13:45:12.822757418Z" level=info msg="CreateContainer within sandbox \"039abcd42a4366e07734463757cfb7e347dec3af24959129e0c23ff66dfbe78f\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e494403051cecb69c824615e0ff77aa0d0e60c8db50869f7afb331d1191b505b\"" Jan 30 13:45:12.823951 containerd[1468]: time="2025-01-30T13:45:12.823874125Z" level=info msg="StartContainer for \"e494403051cecb69c824615e0ff77aa0d0e60c8db50869f7afb331d1191b505b\"" Jan 30 13:45:12.839088 containerd[1468]: time="2025-01-30T13:45:12.838980430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tkttg,Uid:42ebc054-39b1-437c-a611-16b58c046b2d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce139c9101c51e1aedb6954ab67cc8be56f033c115874e9d477cab9abc8ed82d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 13:45:12.839304 kubelet[2467]: E0130 13:45:12.839261 2467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce139c9101c51e1aedb6954ab67cc8be56f033c115874e9d477cab9abc8ed82d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 13:45:12.839408 kubelet[2467]: E0130 13:45:12.839347 2467 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce139c9101c51e1aedb6954ab67cc8be56f033c115874e9d477cab9abc8ed82d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-6f6b679f8f-tkttg" Jan 30 13:45:12.839408 kubelet[2467]: E0130 13:45:12.839381 2467 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce139c9101c51e1aedb6954ab67cc8be56f033c115874e9d477cab9abc8ed82d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-tkttg" Jan 30 13:45:12.839505 kubelet[2467]: E0130 13:45:12.839427 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-tkttg_kube-system(42ebc054-39b1-437c-a611-16b58c046b2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-tkttg_kube-system(42ebc054-39b1-437c-a611-16b58c046b2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce139c9101c51e1aedb6954ab67cc8be56f033c115874e9d477cab9abc8ed82d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-tkttg" podUID="42ebc054-39b1-437c-a611-16b58c046b2d" Jan 30 13:45:12.840999 containerd[1468]: time="2025-01-30T13:45:12.840959613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q55fx,Uid:1b0041a6-7b07-4b41-a769-12b880cf4a99,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e7b0f49a6850958f10f170693ba149d40350133d398037d959d77e9d26453a1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 13:45:12.841277 kubelet[2467]: E0130 13:45:12.841248 2467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e7b0f49a6850958f10f170693ba149d40350133d398037d959d77e9d26453a1\": plugin 
type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 13:45:12.841330 kubelet[2467]: E0130 13:45:12.841285 2467 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e7b0f49a6850958f10f170693ba149d40350133d398037d959d77e9d26453a1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-q55fx" Jan 30 13:45:12.841330 kubelet[2467]: E0130 13:45:12.841303 2467 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e7b0f49a6850958f10f170693ba149d40350133d398037d959d77e9d26453a1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-q55fx" Jan 30 13:45:12.841385 kubelet[2467]: E0130 13:45:12.841336 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-q55fx_kube-system(1b0041a6-7b07-4b41-a769-12b880cf4a99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-q55fx_kube-system(1b0041a6-7b07-4b41-a769-12b880cf4a99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e7b0f49a6850958f10f170693ba149d40350133d398037d959d77e9d26453a1\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-q55fx" podUID="1b0041a6-7b07-4b41-a769-12b880cf4a99" Jan 30 13:45:12.854788 systemd[1]: Started cri-containerd-e494403051cecb69c824615e0ff77aa0d0e60c8db50869f7afb331d1191b505b.scope - libcontainer container e494403051cecb69c824615e0ff77aa0d0e60c8db50869f7afb331d1191b505b. 
Jan 30 13:45:12.882981 containerd[1468]: time="2025-01-30T13:45:12.882879479Z" level=info msg="StartContainer for \"e494403051cecb69c824615e0ff77aa0d0e60c8db50869f7afb331d1191b505b\" returns successfully" Jan 30 13:45:12.973197 update_engine[1460]: I20250130 13:45:12.973052 1460 update_attempter.cc:509] Updating boot flags... Jan 30 13:45:12.976372 kubelet[2467]: E0130 13:45:12.976331 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:13.022761 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3027) Jan 30 13:45:13.062702 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3025) Jan 30 13:45:13.097477 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3025) Jan 30 13:45:13.530271 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e7b0f49a6850958f10f170693ba149d40350133d398037d959d77e9d26453a1-shm.mount: Deactivated successfully. 
Jan 30 13:45:13.749457 kubelet[2467]: E0130 13:45:13.749421 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:13.758386 kubelet[2467]: I0130 13:45:13.758306 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-8wr7h" podStartSLOduration=2.424028165 podStartE2EDuration="6.758275807s" podCreationTimestamp="2025-01-30 13:45:07 +0000 UTC" firstStartedPulling="2025-01-30 13:45:07.750244613 +0000 UTC m=+6.106311306" lastFinishedPulling="2025-01-30 13:45:12.084492254 +0000 UTC m=+10.440558948" observedRunningTime="2025-01-30 13:45:13.758094375 +0000 UTC m=+12.114161078" watchObservedRunningTime="2025-01-30 13:45:13.758275807 +0000 UTC m=+12.114342500"
Jan 30 13:45:13.923964 systemd-networkd[1405]: flannel.1: Link UP
Jan 30 13:45:13.923978 systemd-networkd[1405]: flannel.1: Gained carrier
Jan 30 13:45:14.751321 kubelet[2467]: E0130 13:45:14.751285 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:15.835823 systemd-networkd[1405]: flannel.1: Gained IPv6LL
Jan 30 13:45:24.718020 kubelet[2467]: E0130 13:45:24.717972 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:24.718488 containerd[1468]: time="2025-01-30T13:45:24.718377405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tkttg,Uid:42ebc054-39b1-437c-a611-16b58c046b2d,Namespace:kube-system,Attempt:0,}"
Jan 30 13:45:24.739893 systemd-networkd[1405]: cni0: Link UP
Jan 30 13:45:24.739903 systemd-networkd[1405]: cni0: Gained carrier
Jan 30 13:45:24.743556 systemd-networkd[1405]: cni0: Lost carrier
Jan 30 13:45:24.747289 systemd-networkd[1405]: veth8724f09b: Link UP
Jan 30 13:45:24.749132 kernel: cni0: port 1(veth8724f09b) entered blocking state
Jan 30 13:45:24.749199 kernel: cni0: port 1(veth8724f09b) entered disabled state
Jan 30 13:45:24.749862 kernel: veth8724f09b: entered allmulticast mode
Jan 30 13:45:24.750695 kernel: veth8724f09b: entered promiscuous mode
Jan 30 13:45:24.752298 kernel: cni0: port 1(veth8724f09b) entered blocking state
Jan 30 13:45:24.752348 kernel: cni0: port 1(veth8724f09b) entered forwarding state
Jan 30 13:45:24.752384 kernel: cni0: port 1(veth8724f09b) entered disabled state
Jan 30 13:45:24.759994 kernel: cni0: port 1(veth8724f09b) entered blocking state
Jan 30 13:45:24.760167 kernel: cni0: port 1(veth8724f09b) entered forwarding state
Jan 30 13:45:24.760132 systemd-networkd[1405]: veth8724f09b: Gained carrier
Jan 30 13:45:24.760916 systemd-networkd[1405]: cni0: Gained carrier
Jan 30 13:45:24.763051 containerd[1468]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001a938), "name":"cbr0", "type":"bridge"}
Jan 30 13:45:24.763051 containerd[1468]: delegateAdd: netconf sent to delegate plugin:
Jan 30 13:45:24.782464 containerd[1468]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-30T13:45:24.781800616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:45:24.782464 containerd[1468]: time="2025-01-30T13:45:24.782425095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:45:24.782464 containerd[1468]: time="2025-01-30T13:45:24.782438763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:45:24.782658 containerd[1468]: time="2025-01-30T13:45:24.782506769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:45:24.808799 systemd[1]: Started cri-containerd-d02e00318f605cc1abb56ae2525a872e4b275982b8fff6111f05b82b4fda37dd.scope - libcontainer container d02e00318f605cc1abb56ae2525a872e4b275982b8fff6111f05b82b4fda37dd.
Jan 30 13:45:24.821615 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 30 13:45:24.846500 containerd[1468]: time="2025-01-30T13:45:24.846458606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tkttg,Uid:42ebc054-39b1-437c-a611-16b58c046b2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d02e00318f605cc1abb56ae2525a872e4b275982b8fff6111f05b82b4fda37dd\""
Jan 30 13:45:24.847497 kubelet[2467]: E0130 13:45:24.847431 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:24.849431 containerd[1468]: time="2025-01-30T13:45:24.849393742Z" level=info msg="CreateContainer within sandbox \"d02e00318f605cc1abb56ae2525a872e4b275982b8fff6111f05b82b4fda37dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 13:45:24.865111 containerd[1468]: time="2025-01-30T13:45:24.865055385Z" level=info msg="CreateContainer within sandbox \"d02e00318f605cc1abb56ae2525a872e4b275982b8fff6111f05b82b4fda37dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e33f8b8a36ebc3c126464f205237d5d16de20ec4ad958b94dfafa4fb82bf6fb0\""
Jan 30 13:45:24.865524 containerd[1468]: time="2025-01-30T13:45:24.865482015Z" level=info msg="StartContainer for \"e33f8b8a36ebc3c126464f205237d5d16de20ec4ad958b94dfafa4fb82bf6fb0\""
Jan 30 13:45:24.893800 systemd[1]: Started cri-containerd-e33f8b8a36ebc3c126464f205237d5d16de20ec4ad958b94dfafa4fb82bf6fb0.scope - libcontainer container e33f8b8a36ebc3c126464f205237d5d16de20ec4ad958b94dfafa4fb82bf6fb0.
Jan 30 13:45:24.921045 containerd[1468]: time="2025-01-30T13:45:24.921007139Z" level=info msg="StartContainer for \"e33f8b8a36ebc3c126464f205237d5d16de20ec4ad958b94dfafa4fb82bf6fb0\" returns successfully"
Jan 30 13:45:25.768863 kubelet[2467]: E0130 13:45:25.768802 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:25.778495 kubelet[2467]: I0130 13:45:25.778013 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-tkttg" podStartSLOduration=18.777704477 podStartE2EDuration="18.777704477s" podCreationTimestamp="2025-01-30 13:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:45:25.7773625 +0000 UTC m=+24.133429193" watchObservedRunningTime="2025-01-30 13:45:25.777704477 +0000 UTC m=+24.133771170"
Jan 30 13:45:26.011819 systemd-networkd[1405]: veth8724f09b: Gained IPv6LL
Jan 30 13:45:26.715806 systemd-networkd[1405]: cni0: Gained IPv6LL
Jan 30 13:45:26.770002 kubelet[2467]: E0130 13:45:26.769963 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:27.717574 kubelet[2467]: E0130 13:45:27.717484 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:27.718007 containerd[1468]: time="2025-01-30T13:45:27.717911691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q55fx,Uid:1b0041a6-7b07-4b41-a769-12b880cf4a99,Namespace:kube-system,Attempt:0,}"
Jan 30 13:45:27.736069 systemd-networkd[1405]: veth7f1fa9f4: Link UP
Jan 30 13:45:27.737877 kernel: cni0: port 2(veth7f1fa9f4) entered blocking state
Jan 30 13:45:27.737923 kernel: cni0: port 2(veth7f1fa9f4) entered disabled state
Jan 30 13:45:27.737939 kernel: veth7f1fa9f4: entered allmulticast mode
Jan 30 13:45:27.739232 kernel: veth7f1fa9f4: entered promiscuous mode
Jan 30 13:45:27.747530 kernel: cni0: port 2(veth7f1fa9f4) entered blocking state
Jan 30 13:45:27.747700 kernel: cni0: port 2(veth7f1fa9f4) entered forwarding state
Jan 30 13:45:27.746973 systemd-networkd[1405]: veth7f1fa9f4: Gained carrier
Jan 30 13:45:27.749289 containerd[1468]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c938), "name":"cbr0", "type":"bridge"}
Jan 30 13:45:27.749289 containerd[1468]: delegateAdd: netconf sent to delegate plugin:
Jan 30 13:45:27.771554 kubelet[2467]: E0130 13:45:27.771520 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:27.772044 containerd[1468]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-30T13:45:27.771209968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:45:27.772044 containerd[1468]: time="2025-01-30T13:45:27.771261200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:45:27.772044 containerd[1468]: time="2025-01-30T13:45:27.771274376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:45:27.772044 containerd[1468]: time="2025-01-30T13:45:27.771354086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:45:27.791815 systemd[1]: Started cri-containerd-ae56dd5bd1c25414ba65677fee17e59bdd3d1d792937989ad3388ceb57eaf93b.scope - libcontainer container ae56dd5bd1c25414ba65677fee17e59bdd3d1d792937989ad3388ceb57eaf93b.
Jan 30 13:45:27.802246 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 30 13:45:27.823280 containerd[1468]: time="2025-01-30T13:45:27.823246311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q55fx,Uid:1b0041a6-7b07-4b41-a769-12b880cf4a99,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae56dd5bd1c25414ba65677fee17e59bdd3d1d792937989ad3388ceb57eaf93b\""
Jan 30 13:45:27.823966 kubelet[2467]: E0130 13:45:27.823938 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:27.825882 containerd[1468]: time="2025-01-30T13:45:27.825832422Z" level=info msg="CreateContainer within sandbox \"ae56dd5bd1c25414ba65677fee17e59bdd3d1d792937989ad3388ceb57eaf93b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 13:45:27.839587 containerd[1468]: time="2025-01-30T13:45:27.839545145Z" level=info msg="CreateContainer within sandbox \"ae56dd5bd1c25414ba65677fee17e59bdd3d1d792937989ad3388ceb57eaf93b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dee400600f076aaab8d7b8861d20780c39e3004ce01edf3071f8c31f2b4e007c\""
Jan 30 13:45:27.840098 containerd[1468]: time="2025-01-30T13:45:27.840029633Z" level=info msg="StartContainer for \"dee400600f076aaab8d7b8861d20780c39e3004ce01edf3071f8c31f2b4e007c\""
Jan 30 13:45:27.865819 systemd[1]: Started cri-containerd-dee400600f076aaab8d7b8861d20780c39e3004ce01edf3071f8c31f2b4e007c.scope - libcontainer container dee400600f076aaab8d7b8861d20780c39e3004ce01edf3071f8c31f2b4e007c.
Jan 30 13:45:27.889453 containerd[1468]: time="2025-01-30T13:45:27.889411428Z" level=info msg="StartContainer for \"dee400600f076aaab8d7b8861d20780c39e3004ce01edf3071f8c31f2b4e007c\" returns successfully"
Jan 30 13:45:28.774993 kubelet[2467]: E0130 13:45:28.774963 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:28.784692 kubelet[2467]: I0130 13:45:28.784582 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-q55fx" podStartSLOduration=21.784568068 podStartE2EDuration="21.784568068s" podCreationTimestamp="2025-01-30 13:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:45:28.783963431 +0000 UTC m=+27.140030124" watchObservedRunningTime="2025-01-30 13:45:28.784568068 +0000 UTC m=+27.140634761"
Jan 30 13:45:29.083802 systemd-networkd[1405]: veth7f1fa9f4: Gained IPv6LL
Jan 30 13:45:29.574279 systemd[1]: Started sshd@7-10.0.0.96:22-10.0.0.1:41218.service - OpenSSH per-connection server daemon (10.0.0.1:41218).
Jan 30 13:45:29.606829 sshd[3421]: Accepted publickey for core from 10.0.0.1 port 41218 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:29.608542 sshd[3421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:29.612155 systemd-logind[1456]: New session 8 of user core.
Jan 30 13:45:29.621784 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 13:45:29.732524 sshd[3421]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:29.736751 systemd[1]: sshd@7-10.0.0.96:22-10.0.0.1:41218.service: Deactivated successfully.
Jan 30 13:45:29.738997 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 13:45:29.739602 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit.
Jan 30 13:45:29.740402 systemd-logind[1456]: Removed session 8.
Jan 30 13:45:29.776074 kubelet[2467]: E0130 13:45:29.776046 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:30.777807 kubelet[2467]: E0130 13:45:30.777759 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:34.749217 systemd[1]: Started sshd@8-10.0.0.96:22-10.0.0.1:39172.service - OpenSSH per-connection server daemon (10.0.0.1:39172).
Jan 30 13:45:34.780240 sshd[3461]: Accepted publickey for core from 10.0.0.1 port 39172 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:34.781554 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:34.785346 systemd-logind[1456]: New session 9 of user core.
Jan 30 13:45:34.797799 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 13:45:34.908636 sshd[3461]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:34.912947 systemd[1]: sshd@8-10.0.0.96:22-10.0.0.1:39172.service: Deactivated successfully.
Jan 30 13:45:34.915089 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 13:45:34.915761 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit.
Jan 30 13:45:34.916564 systemd-logind[1456]: Removed session 9.
Jan 30 13:45:39.921566 systemd[1]: Started sshd@9-10.0.0.96:22-10.0.0.1:39176.service - OpenSSH per-connection server daemon (10.0.0.1:39176).
Jan 30 13:45:39.952813 sshd[3499]: Accepted publickey for core from 10.0.0.1 port 39176 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:39.954284 sshd[3499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:39.958010 systemd-logind[1456]: New session 10 of user core.
Jan 30 13:45:39.964784 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 13:45:40.064505 sshd[3499]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:40.076269 systemd[1]: sshd@9-10.0.0.96:22-10.0.0.1:39176.service: Deactivated successfully.
Jan 30 13:45:40.077940 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 13:45:40.079227 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit.
Jan 30 13:45:40.089892 systemd[1]: Started sshd@10-10.0.0.96:22-10.0.0.1:39188.service - OpenSSH per-connection server daemon (10.0.0.1:39188).
Jan 30 13:45:40.090751 systemd-logind[1456]: Removed session 10.
Jan 30 13:45:40.116752 sshd[3515]: Accepted publickey for core from 10.0.0.1 port 39188 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:40.118163 sshd[3515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:40.121732 systemd-logind[1456]: New session 11 of user core.
Jan 30 13:45:40.128777 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 13:45:40.254502 sshd[3515]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:40.269134 systemd[1]: sshd@10-10.0.0.96:22-10.0.0.1:39188.service: Deactivated successfully.
Jan 30 13:45:40.271633 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 13:45:40.273143 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit.
Jan 30 13:45:40.278948 systemd[1]: Started sshd@11-10.0.0.96:22-10.0.0.1:39192.service - OpenSSH per-connection server daemon (10.0.0.1:39192).
Jan 30 13:45:40.279918 systemd-logind[1456]: Removed session 11.
Jan 30 13:45:40.305956 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 39192 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:40.307284 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:40.311141 systemd-logind[1456]: New session 12 of user core.
Jan 30 13:45:40.318832 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 13:45:40.421034 sshd[3527]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:40.425468 systemd[1]: sshd@11-10.0.0.96:22-10.0.0.1:39192.service: Deactivated successfully.
Jan 30 13:45:40.427660 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 13:45:40.428342 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit.
Jan 30 13:45:40.429223 systemd-logind[1456]: Removed session 12.
Jan 30 13:45:45.433726 systemd[1]: Started sshd@12-10.0.0.96:22-10.0.0.1:38422.service - OpenSSH per-connection server daemon (10.0.0.1:38422).
Jan 30 13:45:45.465239 sshd[3562]: Accepted publickey for core from 10.0.0.1 port 38422 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:45.466654 sshd[3562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:45.470522 systemd-logind[1456]: New session 13 of user core.
Jan 30 13:45:45.480817 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 13:45:45.583874 sshd[3562]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:45.596509 systemd[1]: sshd@12-10.0.0.96:22-10.0.0.1:38422.service: Deactivated successfully.
Jan 30 13:45:45.598567 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 13:45:45.600146 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit.
Jan 30 13:45:45.605906 systemd[1]: Started sshd@13-10.0.0.96:22-10.0.0.1:38424.service - OpenSSH per-connection server daemon (10.0.0.1:38424).
Jan 30 13:45:45.606849 systemd-logind[1456]: Removed session 13.
Jan 30 13:45:45.634933 sshd[3577]: Accepted publickey for core from 10.0.0.1 port 38424 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:45.636485 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:45.640319 systemd-logind[1456]: New session 14 of user core.
Jan 30 13:45:45.648780 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 13:45:45.801882 sshd[3577]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:45.812376 systemd[1]: sshd@13-10.0.0.96:22-10.0.0.1:38424.service: Deactivated successfully.
Jan 30 13:45:45.814035 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 13:45:45.815722 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit.
Jan 30 13:45:45.821931 systemd[1]: Started sshd@14-10.0.0.96:22-10.0.0.1:38430.service - OpenSSH per-connection server daemon (10.0.0.1:38430).
Jan 30 13:45:45.822813 systemd-logind[1456]: Removed session 14.
Jan 30 13:45:45.849689 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 38430 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:45.851342 sshd[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:45.855212 systemd-logind[1456]: New session 15 of user core.
Jan 30 13:45:45.865808 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 13:45:47.133703 sshd[3589]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:47.142030 systemd[1]: sshd@14-10.0.0.96:22-10.0.0.1:38430.service: Deactivated successfully.
Jan 30 13:45:47.144240 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 13:45:47.145543 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit.
Jan 30 13:45:47.157966 systemd[1]: Started sshd@15-10.0.0.96:22-10.0.0.1:38446.service - OpenSSH per-connection server daemon (10.0.0.1:38446).
Jan 30 13:45:47.159210 systemd-logind[1456]: Removed session 15.
Jan 30 13:45:47.187650 sshd[3608]: Accepted publickey for core from 10.0.0.1 port 38446 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:47.189156 sshd[3608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:47.193062 systemd-logind[1456]: New session 16 of user core.
Jan 30 13:45:47.202789 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 13:45:47.455758 sshd[3608]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:47.468766 systemd[1]: sshd@15-10.0.0.96:22-10.0.0.1:38446.service: Deactivated successfully.
Jan 30 13:45:47.470642 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 13:45:47.471353 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit.
Jan 30 13:45:47.477891 systemd[1]: Started sshd@16-10.0.0.96:22-10.0.0.1:38460.service - OpenSSH per-connection server daemon (10.0.0.1:38460).
Jan 30 13:45:47.478431 systemd-logind[1456]: Removed session 16.
Jan 30 13:45:47.505115 sshd[3622]: Accepted publickey for core from 10.0.0.1 port 38460 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:47.506770 sshd[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:47.510899 systemd-logind[1456]: New session 17 of user core.
Jan 30 13:45:47.519827 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 13:45:47.620932 sshd[3622]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:47.625440 systemd[1]: sshd@16-10.0.0.96:22-10.0.0.1:38460.service: Deactivated successfully.
Jan 30 13:45:47.627728 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:45:47.628325 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:45:47.629216 systemd-logind[1456]: Removed session 17.
Jan 30 13:45:52.637405 systemd[1]: Started sshd@17-10.0.0.96:22-10.0.0.1:34220.service - OpenSSH per-connection server daemon (10.0.0.1:34220).
Jan 30 13:45:52.668630 sshd[3657]: Accepted publickey for core from 10.0.0.1 port 34220 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:52.670050 sshd[3657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:52.674059 systemd-logind[1456]: New session 18 of user core.
Jan 30 13:45:52.693832 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:45:52.796429 sshd[3657]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:52.800254 systemd[1]: sshd@17-10.0.0.96:22-10.0.0.1:34220.service: Deactivated successfully.
Jan 30 13:45:52.802781 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:45:52.803506 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:45:52.804501 systemd-logind[1456]: Removed session 18.
Jan 30 13:45:57.807911 systemd[1]: Started sshd@18-10.0.0.96:22-10.0.0.1:34224.service - OpenSSH per-connection server daemon (10.0.0.1:34224).
Jan 30 13:45:57.839043 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 34224 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:57.840549 sshd[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:57.844394 systemd-logind[1456]: New session 19 of user core.
Jan 30 13:45:57.851809 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:45:57.954354 sshd[3695]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:57.958812 systemd[1]: sshd@18-10.0.0.96:22-10.0.0.1:34224.service: Deactivated successfully.
Jan 30 13:45:57.960824 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:45:57.961403 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:45:57.962238 systemd-logind[1456]: Removed session 19.
Jan 30 13:46:02.967241 systemd[1]: Started sshd@19-10.0.0.96:22-10.0.0.1:35576.service - OpenSSH per-connection server daemon (10.0.0.1:35576).
Jan 30 13:46:03.002119 sshd[3732]: Accepted publickey for core from 10.0.0.1 port 35576 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:46:03.003739 sshd[3732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:46:03.008127 systemd-logind[1456]: New session 20 of user core.
Jan 30 13:46:03.021796 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:46:03.125864 sshd[3732]: pam_unix(sshd:session): session closed for user core
Jan 30 13:46:03.129306 systemd[1]: sshd@19-10.0.0.96:22-10.0.0.1:35576.service: Deactivated successfully.
Jan 30 13:46:03.131238 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:46:03.131992 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:46:03.132889 systemd-logind[1456]: Removed session 20.
Jan 30 13:46:08.136218 systemd[1]: Started sshd@20-10.0.0.96:22-10.0.0.1:35588.service - OpenSSH per-connection server daemon (10.0.0.1:35588).
Jan 30 13:46:08.166621 sshd[3770]: Accepted publickey for core from 10.0.0.1 port 35588 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:46:08.168036 sshd[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:46:08.172090 systemd-logind[1456]: New session 21 of user core.
Jan 30 13:46:08.180924 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:46:08.286032 sshd[3770]: pam_unix(sshd:session): session closed for user core
Jan 30 13:46:08.290547 systemd[1]: sshd@20-10.0.0.96:22-10.0.0.1:35588.service: Deactivated successfully.
Jan 30 13:46:08.292646 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:46:08.293381 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:46:08.294510 systemd-logind[1456]: Removed session 21.