Jan 29 11:54:03.981384 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 11:54:03.981433 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:54:03.981488 kernel: BIOS-provided physical RAM map:
Jan 29 11:54:03.981517 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 29 11:54:03.981526 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 29 11:54:03.981534 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 29 11:54:03.981545 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 29 11:54:03.981553 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 29 11:54:03.981562 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 29 11:54:03.981571 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 29 11:54:03.981585 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 29 11:54:03.981600 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 29 11:54:03.981612 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 29 11:54:03.981621 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 29 11:54:03.981641 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 29 11:54:03.981651 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 29 11:54:03.981666 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 29 11:54:03.981675 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 29 11:54:03.981685 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 29 11:54:03.981694 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 11:54:03.981704 kernel: NX (Execute Disable) protection: active
Jan 29 11:54:03.981713 kernel: APIC: Static calls initialized
Jan 29 11:54:03.981722 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:54:03.981732 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jan 29 11:54:03.981741 kernel: SMBIOS 2.8 present.
Jan 29 11:54:03.981758 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 29 11:54:03.981768 kernel: Hypervisor detected: KVM
Jan 29 11:54:03.981860 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:54:03.981872 kernel: kvm-clock: using sched offset of 5698329833 cycles
Jan 29 11:54:03.981883 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:54:03.981893 kernel: tsc: Detected 2794.750 MHz processor
Jan 29 11:54:03.981902 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:54:03.981913 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:54:03.981923 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 29 11:54:03.981932 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 29 11:54:03.981942 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:54:03.981957 kernel: Using GB pages for direct mapping
Jan 29 11:54:03.981966 kernel: Secure boot disabled
Jan 29 11:54:03.981976 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:54:03.981986 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 29 11:54:03.982001 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:54:03.982012 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:54:03.982022 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:54:03.982036 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 29 11:54:03.982046 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:54:03.982060 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:54:03.982071 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:54:03.982081 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:54:03.982091 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 29 11:54:03.982102 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 29 11:54:03.982116 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 29 11:54:03.982126 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 29 11:54:03.982136 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 29 11:54:03.982146 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 29 11:54:03.982166 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 29 11:54:03.982177 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 29 11:54:03.982192 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 29 11:54:03.982223 kernel: No NUMA configuration found
Jan 29 11:54:03.982238 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 29 11:54:03.982253 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 29 11:54:03.982263 kernel: Zone ranges:
Jan 29 11:54:03.982274 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:54:03.982284 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 29 11:54:03.982294 kernel: Normal empty
Jan 29 11:54:03.982304 kernel: Movable zone start for each node
Jan 29 11:54:03.982314 kernel: Early memory node ranges
Jan 29 11:54:03.982324 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 29 11:54:03.982334 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 29 11:54:03.982348 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 29 11:54:03.982365 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 29 11:54:03.982376 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 29 11:54:03.982387 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 29 11:54:03.982400 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 29 11:54:03.982410 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:54:03.982426 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 29 11:54:03.982449 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 29 11:54:03.982460 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:54:03.982470 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 29 11:54:03.982486 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 29 11:54:03.982496 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 29 11:54:03.982506 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:54:03.982517 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:54:03.982539 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:54:03.982550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:54:03.982560 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:54:03.982570 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:54:03.982583 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:54:03.982598 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:54:03.982608 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:54:03.982618 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 11:54:03.982629 kernel: TSC deadline timer available
Jan 29 11:54:03.982639 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 11:54:03.982649 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:54:03.982659 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 11:54:03.982669 kernel: kvm-guest: setup PV sched yield
Jan 29 11:54:03.982679 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 29 11:54:03.982693 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:54:03.982703 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:54:03.982714 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 11:54:03.982724 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 11:54:03.982734 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 11:54:03.982744 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 11:54:03.982763 kernel: kvm-guest: PV spinlocks enabled
Jan 29 11:54:03.982773 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:54:03.982799 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:54:03.982819 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:54:03.982841 kernel: random: crng init done
Jan 29 11:54:03.982862 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:54:03.982872 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:54:03.982882 kernel: Fallback order for Node 0: 0
Jan 29 11:54:03.982891 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 29 11:54:03.982900 kernel: Policy zone: DMA32
Jan 29 11:54:03.982910 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:54:03.982930 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Jan 29 11:54:03.982941 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:54:03.982950 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 11:54:03.982959 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:54:03.982969 kernel: Dynamic Preempt: voluntary
Jan 29 11:54:03.982990 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:54:03.983004 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:54:03.983015 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:54:03.983025 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:54:03.983035 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:54:03.983045 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:54:03.983055 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:54:03.983069 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:54:03.983080 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 11:54:03.983095 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:54:03.983106 kernel: Console: colour dummy device 80x25
Jan 29 11:54:03.983117 kernel: printk: console [ttyS0] enabled
Jan 29 11:54:03.983132 kernel: ACPI: Core revision 20230628
Jan 29 11:54:03.983143 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 11:54:03.983154 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:54:03.983165 kernel: x2apic enabled
Jan 29 11:54:03.983175 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:54:03.983186 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 11:54:03.983197 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 11:54:03.983212 kernel: kvm-guest: setup PV IPIs
Jan 29 11:54:03.983233 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 11:54:03.983251 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 11:54:03.983262 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 29 11:54:03.983272 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 11:54:03.983283 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 11:54:03.983294 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 11:54:03.983305 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:54:03.983315 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:54:03.983328 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:54:03.983340 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:54:03.983357 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 11:54:03.983369 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 11:54:03.983382 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:54:03.983393 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:54:03.983407 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 11:54:03.983419 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 11:54:03.983430 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 11:54:03.983441 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:54:03.983455 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:54:03.983466 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:54:03.983477 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:54:03.983488 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 11:54:03.983499 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:54:03.983510 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:54:03.983521 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:54:03.983531 kernel: landlock: Up and running.
Jan 29 11:54:03.983542 kernel: SELinux: Initializing.
Jan 29 11:54:03.983557 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:54:03.983568 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:54:03.983579 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 11:54:03.983590 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:54:03.983601 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:54:03.983612 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:54:03.983623 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 11:54:03.983633 kernel: ... version: 0
Jan 29 11:54:03.983648 kernel: ... bit width: 48
Jan 29 11:54:03.983659 kernel: ... generic registers: 6
Jan 29 11:54:03.983670 kernel: ... value mask: 0000ffffffffffff
Jan 29 11:54:03.983681 kernel: ... max period: 00007fffffffffff
Jan 29 11:54:03.983691 kernel: ... fixed-purpose events: 0
Jan 29 11:54:03.983702 kernel: ... event mask: 000000000000003f
Jan 29 11:54:03.983713 kernel: signal: max sigframe size: 1776
Jan 29 11:54:03.983723 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:54:03.983735 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:54:03.983753 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:54:03.983769 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:54:03.983800 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 11:54:03.983825 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:54:03.983837 kernel: smpboot: Max logical packages: 1
Jan 29 11:54:03.983848 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 29 11:54:03.983858 kernel: devtmpfs: initialized
Jan 29 11:54:03.983869 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:54:03.983880 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 29 11:54:03.983891 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 29 11:54:03.983907 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 29 11:54:03.983917 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 29 11:54:03.983928 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 29 11:54:03.983940 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:54:03.983951 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:54:03.983961 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:54:03.983972 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:54:03.983983 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:54:03.983994 kernel: audit: type=2000 audit(1738151643.591:1): state=initialized audit_enabled=0 res=1
Jan 29 11:54:03.984008 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:54:03.984019 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:54:03.984030 kernel: cpuidle: using governor menu
Jan 29 11:54:03.984041 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:54:03.984051 kernel: dca service started, version 1.12.1
Jan 29 11:54:03.984062 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 11:54:03.984073 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 11:54:03.984084 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:54:03.984095 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:54:03.984109 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:54:03.984120 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:54:03.984131 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:54:03.984142 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:54:03.984152 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:54:03.984163 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:54:03.984174 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:54:03.984185 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:54:03.984196 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:54:03.984212 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:54:03.984223 kernel: ACPI: Interpreter enabled
Jan 29 11:54:03.984233 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 11:54:03.984244 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:54:03.984255 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:54:03.984266 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:54:03.984277 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 11:54:03.984287 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:54:03.984594 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:54:03.984830 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 11:54:03.985003 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 11:54:03.985019 kernel: PCI host bridge to bus 0000:00
Jan 29 11:54:03.985216 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:54:03.985381 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:54:03.985541 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:54:03.985708 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 29 11:54:03.985902 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 11:54:03.986066 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 29 11:54:03.986244 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:54:03.986477 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 11:54:03.986700 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 11:54:03.986921 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 29 11:54:03.987107 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 29 11:54:03.987282 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 29 11:54:03.987450 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 29 11:54:03.987629 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:54:03.987876 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:54:03.988052 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 29 11:54:03.988230 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 29 11:54:03.988407 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 29 11:54:03.988640 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:54:03.988845 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 29 11:54:03.989019 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 29 11:54:03.989190 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 29 11:54:03.989383 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:54:03.989576 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 29 11:54:03.989779 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 29 11:54:03.990072 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 29 11:54:03.990254 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 29 11:54:03.990452 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 11:54:03.990622 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 11:54:03.990841 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 11:54:03.991020 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 29 11:54:03.991184 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 29 11:54:03.991391 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 11:54:03.991563 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 29 11:54:03.991579 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:54:03.991591 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:54:03.991602 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:54:03.991619 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:54:03.991630 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 11:54:03.991641 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 11:54:03.991652 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 11:54:03.991664 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 11:54:03.991675 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 11:54:03.991685 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 11:54:03.991695 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 11:54:03.991705 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 11:54:03.991721 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 11:54:03.991733 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 11:54:03.991752 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 11:54:03.991764 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 11:54:03.991775 kernel: iommu: Default domain type: Translated
Jan 29 11:54:03.991802 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:54:03.991814 kernel: efivars: Registered efivars operations
Jan 29 11:54:03.991825 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:54:03.991836 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:54:03.991852 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 29 11:54:03.991863 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 29 11:54:03.991875 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 29 11:54:03.991886 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 29 11:54:03.992064 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 11:54:03.992238 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 11:54:03.992421 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:54:03.992438 kernel: vgaarb: loaded
Jan 29 11:54:03.992454 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 11:54:03.992466 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 11:54:03.992477 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:54:03.992488 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:54:03.992499 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:54:03.992510 kernel: pnp: PnP ACPI init
Jan 29 11:54:03.992710 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 11:54:03.992728 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 11:54:03.992740 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:54:03.992765 kernel: NET: Registered PF_INET protocol family
Jan 29 11:54:03.992777 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:54:03.992803 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:54:03.992815 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:54:03.992826 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:54:03.992837 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:54:03.992848 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:54:03.992859 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:54:03.992874 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:54:03.992885 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:54:03.992897 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:54:03.993073 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 29 11:54:03.993247 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 29 11:54:03.993412 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:54:03.993565 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:54:03.993719 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:54:03.993918 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 29 11:54:03.994093 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 11:54:03.994253 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 29 11:54:03.994270 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:54:03.994282 kernel: Initialise system trusted keyrings
Jan 29 11:54:03.994294 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:54:03.994305 kernel: Key type asymmetric registered
Jan 29 11:54:03.994316 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:54:03.994327 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:54:03.994345 kernel: io scheduler mq-deadline registered
Jan 29 11:54:03.994356 kernel: io scheduler kyber registered
Jan 29 11:54:03.994367 kernel: io scheduler bfq registered
Jan 29 11:54:03.994379 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:54:03.994389 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 11:54:03.994397 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 11:54:03.994405 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 11:54:03.994413 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:54:03.994421 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:54:03.994432 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:54:03.994440 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:54:03.994448 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:54:03.994626 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 11:54:03.994763 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 11:54:03.994957 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:54:03 UTC (1738151643)
Jan 29 11:54:03.995081 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 11:54:03.995092 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 29 11:54:03.995105 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 11:54:03.995113 kernel: efifb: probing for efifb
Jan 29 11:54:03.995121 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 29 11:54:03.995129 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 29 11:54:03.995137 kernel: efifb: scrolling: redraw
Jan 29 11:54:03.995144 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 29 11:54:03.995153 kernel: Console: switching to colour frame buffer device 100x37
Jan 29 11:54:03.995178 kernel: fb0: EFI VGA frame buffer device
Jan 29 11:54:03.995189 kernel: pstore: Using crash dump compression: deflate
Jan 29 11:54:03.995200 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 29 11:54:03.995208 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:54:03.995216 kernel: Segment Routing with IPv6
Jan 29 11:54:03.995224 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:54:03.995232 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:54:03.995240 kernel: Key type dns_resolver registered
Jan 29 11:54:03.995248 kernel: IPI shorthand broadcast: enabled
Jan 29 11:54:03.995256 kernel: sched_clock: Marking stable (1024003491, 125874355)->(1262714099, -112836253)
Jan 29 11:54:03.995264 kernel: registered taskstats version 1
Jan 29 11:54:03.995275 kernel: Loading compiled-in X.509 certificates
Jan 29 11:54:03.995283 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 29 11:54:03.995291 kernel: Key type .fscrypt registered
Jan 29 11:54:03.995300 kernel: Key type fscrypt-provisioning registered
Jan 29 11:54:03.995308 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:54:03.995316 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:54:03.995324 kernel: ima: No architecture policies found
Jan 29 11:54:03.995332 kernel: clk: Disabling unused clocks
Jan 29 11:54:03.995340 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 29 11:54:03.995354 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 11:54:03.995362 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 29 11:54:03.995370 kernel: Run /init as init process
Jan 29 11:54:03.995378 kernel: with arguments:
Jan 29 11:54:03.995386 kernel: /init
Jan 29 11:54:03.995394 kernel: with environment:
Jan 29 11:54:03.995402 kernel: HOME=/
Jan 29 11:54:03.995410 kernel: TERM=linux
Jan 29 11:54:03.995418 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:54:03.995431 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:54:03.995442 systemd[1]: Detected virtualization kvm.
Jan 29 11:54:03.995450 systemd[1]: Detected architecture x86-64.
Jan 29 11:54:03.995459 systemd[1]: Running in initrd.
Jan 29 11:54:03.995472 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:54:03.995481 systemd[1]: Hostname set to <localhost>.
Jan 29 11:54:03.995489 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:54:03.995498 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:54:03.995506 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:54:03.995515 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:54:03.995524 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:54:03.995533 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:54:03.995544 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:54:03.995553 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:54:03.995563 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:54:03.995572 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:54:03.995580 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:54:03.995589 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:54:03.995600 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:54:03.995609 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:54:03.995617 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:54:03.995626 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:54:03.995634 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:54:03.995642 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:54:03.995651 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:54:03.995660 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:54:03.995668 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:54:03.995680 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:54:03.995688 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:54:03.995697 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:54:03.995705 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:54:03.995714 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:54:03.995722 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:54:03.995730 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:54:03.995739 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:54:03.995755 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:54:03.995767 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:54:03.995776 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:54:03.995837 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:54:03.995846 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:54:03.995876 systemd-journald[192]: Collecting audit messages is disabled. Jan 29 11:54:03.995901 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:54:03.995910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:54:03.995918 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:54:03.995930 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:54:03.995939 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:54:03.995947 systemd-journald[192]: Journal started Jan 29 11:54:03.995966 systemd-journald[192]: Runtime Journal (/run/log/journal/2ed92db7716a43b985ec1c3ef8c5c027) is 6.0M, max 48.3M, 42.2M free. 
Jan 29 11:54:03.980232 systemd-modules-load[194]: Inserted module 'overlay' Jan 29 11:54:04.004013 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:54:04.008836 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:54:04.010978 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:54:04.014132 kernel: Bridge firewalling registered Jan 29 11:54:04.012349 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:54:04.014123 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 29 11:54:04.015192 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:54:04.018433 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:54:04.023933 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:54:04.033147 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:54:04.048353 dracut-cmdline[221]: dracut-dracut-053 Jan 29 11:54:04.048451 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:54:04.053529 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 11:54:04.050053 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:54:04.064001 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 29 11:54:04.097328 systemd-resolved[240]: Positive Trust Anchors: Jan 29 11:54:04.097348 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:54:04.097381 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:54:04.100179 systemd-resolved[240]: Defaulting to hostname 'linux'. Jan 29 11:54:04.101592 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:54:04.108036 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:54:04.152839 kernel: SCSI subsystem initialized Jan 29 11:54:04.162842 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:54:04.174826 kernel: iscsi: registered transport (tcp) Jan 29 11:54:04.198866 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:54:04.198959 kernel: QLogic iSCSI HBA Driver Jan 29 11:54:04.260538 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:54:04.273943 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:54:04.302747 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 11:54:04.302864 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:54:04.302877 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:54:04.351844 kernel: raid6: avx2x4 gen() 25568 MB/s Jan 29 11:54:04.368845 kernel: raid6: avx2x2 gen() 27333 MB/s Jan 29 11:54:04.385998 kernel: raid6: avx2x1 gen() 22004 MB/s Jan 29 11:54:04.386095 kernel: raid6: using algorithm avx2x2 gen() 27333 MB/s Jan 29 11:54:04.404032 kernel: raid6: .... xor() 19155 MB/s, rmw enabled Jan 29 11:54:04.404143 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:54:04.428840 kernel: xor: automatically using best checksumming function avx Jan 29 11:54:04.596825 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:54:04.610926 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:54:04.619938 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:54:04.643826 systemd-udevd[411]: Using default interface naming scheme 'v255'. Jan 29 11:54:04.649983 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:54:04.657948 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:54:04.678155 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Jan 29 11:54:04.721459 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:54:04.733021 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:54:04.828812 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:54:04.851094 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:54:04.872820 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 29 11:54:04.908401 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 11:54:04.908632 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Jan 29 11:54:04.908651 kernel: GPT:9289727 != 19775487 Jan 29 11:54:04.908666 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:54:04.908681 kernel: GPT:9289727 != 19775487 Jan 29 11:54:04.908696 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:54:04.908711 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:54:04.908737 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:54:04.873554 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:54:04.876468 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:54:04.879573 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:54:04.881269 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:54:04.902131 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:54:04.930658 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:54:04.933886 kernel: libata version 3.00 loaded. Jan 29 11:54:04.941958 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:54:04.942194 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:54:04.947582 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:54:04.950230 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:54:04.950468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:54:04.956543 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 11:54:05.001953 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 11:54:05.001980 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 11:54:05.002222 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 11:54:05.002448 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 29 11:54:05.002467 kernel: AES CTR mode by8 optimization enabled Jan 29 11:54:05.002483 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (471) Jan 29 11:54:05.002500 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (467) Jan 29 11:54:05.002517 kernel: scsi host0: ahci Jan 29 11:54:05.002776 kernel: scsi host1: ahci Jan 29 11:54:05.003030 kernel: scsi host2: ahci Jan 29 11:54:05.003263 kernel: scsi host3: ahci Jan 29 11:54:05.003486 kernel: scsi host4: ahci Jan 29 11:54:05.003732 kernel: scsi host5: ahci Jan 29 11:54:05.004005 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 29 11:54:05.004024 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 29 11:54:05.004040 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 29 11:54:05.004057 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 29 11:54:05.004080 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 29 11:54:05.004096 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 29 11:54:04.951708 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:54:04.967255 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:54:05.005238 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 11:54:05.012861 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 11:54:05.027559 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 11:54:05.028121 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jan 29 11:54:05.036323 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:54:05.044048 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:54:05.058306 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:54:05.058404 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:54:05.060708 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:54:05.064887 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:54:05.086769 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:54:05.099977 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:54:05.122067 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:54:05.219629 disk-uuid[553]: Primary Header is updated. Jan 29 11:54:05.219629 disk-uuid[553]: Secondary Entries is updated. Jan 29 11:54:05.219629 disk-uuid[553]: Secondary Header is updated. 
Jan 29 11:54:05.224806 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:54:05.230817 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:54:05.314817 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 11:54:05.316830 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 11:54:05.316902 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 11:54:05.316917 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 11:54:05.317824 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 11:54:05.318816 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 11:54:05.319821 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 11:54:05.320983 kernel: ata3.00: applying bridge limits Jan 29 11:54:05.320995 kernel: ata3.00: configured for UDMA/100 Jan 29 11:54:05.321815 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 11:54:05.377997 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 11:54:05.390771 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:54:05.390810 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:54:06.234825 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:54:06.235476 disk-uuid[569]: The operation has completed successfully. Jan 29 11:54:06.268040 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:54:06.268217 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:54:06.299005 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:54:06.303097 sh[596]: Success Jan 29 11:54:06.316819 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 11:54:06.353849 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:54:06.363531 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:54:06.368153 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 11:54:06.381489 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 11:54:06.381550 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:54:06.381569 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:54:06.382668 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:54:06.383594 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:54:06.390504 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:54:06.391888 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:54:06.401091 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:54:06.404148 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:54:06.419338 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:54:06.419418 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:54:06.419430 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:54:06.423836 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:54:06.435249 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:54:06.437075 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:54:06.447366 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:54:06.458116 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 29 11:54:06.529049 ignition[688]: Ignition 2.19.0 Jan 29 11:54:06.530203 ignition[688]: Stage: fetch-offline Jan 29 11:54:06.530247 ignition[688]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:54:06.530271 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:54:06.530391 ignition[688]: parsed url from cmdline: "" Jan 29 11:54:06.530396 ignition[688]: no config URL provided Jan 29 11:54:06.530403 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:54:06.530415 ignition[688]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:54:06.530452 ignition[688]: op(1): [started] loading QEMU firmware config module Jan 29 11:54:06.530458 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 11:54:06.597047 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:54:06.599346 ignition[688]: op(1): [finished] loading QEMU firmware config module Jan 29 11:54:06.606947 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:54:06.617240 ignition[688]: parsing config with SHA512: b2b0171bb21e2101169174a61d8c8f91bac7e89f292e802adfa7af463c7e258a43c0a761cb506705371424185dcf5d068bd6c0ee0698e79d72f8e7a9137cb046 Jan 29 11:54:06.624250 unknown[688]: fetched base config from "system" Jan 29 11:54:06.625191 ignition[688]: fetch-offline: fetch-offline passed Jan 29 11:54:06.624272 unknown[688]: fetched user config from "qemu" Jan 29 11:54:06.625550 ignition[688]: Ignition finished successfully Jan 29 11:54:06.634579 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:54:06.636678 systemd-networkd[784]: lo: Link UP Jan 29 11:54:06.636696 systemd-networkd[784]: lo: Gained carrier Jan 29 11:54:06.638577 systemd-networkd[784]: Enumeration completed Jan 29 11:54:06.638704 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 29 11:54:06.639094 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:54:06.639099 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:54:06.639635 systemd[1]: Reached target network.target - Network. Jan 29 11:54:06.641034 systemd-networkd[784]: eth0: Link UP Jan 29 11:54:06.641039 systemd-networkd[784]: eth0: Gained carrier Jan 29 11:54:06.641046 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:54:06.642761 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:54:06.655192 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:54:06.659910 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:54:06.670950 ignition[787]: Ignition 2.19.0 Jan 29 11:54:06.670963 ignition[787]: Stage: kargs Jan 29 11:54:06.671156 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:54:06.671169 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:54:06.672113 ignition[787]: kargs: kargs passed Jan 29 11:54:06.672163 ignition[787]: Ignition finished successfully Jan 29 11:54:06.682212 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:54:06.696995 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 29 11:54:06.715529 ignition[796]: Ignition 2.19.0 Jan 29 11:54:06.715545 ignition[796]: Stage: disks Jan 29 11:54:06.715751 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:54:06.715763 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:54:06.719952 ignition[796]: disks: disks passed Jan 29 11:54:06.720612 ignition[796]: Ignition finished successfully Jan 29 11:54:06.723873 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:54:06.724719 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:54:06.725036 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:54:06.725360 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:54:06.725702 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:54:06.726066 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:54:06.742959 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:54:06.785178 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:54:07.013538 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:54:07.031932 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:54:07.134830 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 11:54:07.135579 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:54:07.137317 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:54:07.150864 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:54:07.152756 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:54:07.154189 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 29 11:54:07.154243 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:54:07.165685 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) Jan 29 11:54:07.165716 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:54:07.165728 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:54:07.165739 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:54:07.154272 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:54:07.169031 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:54:07.161724 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:54:07.166838 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:54:07.171357 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:54:07.202724 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:54:07.207741 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:54:07.211658 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:54:07.215544 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:54:07.297771 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:54:07.304941 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:54:07.306917 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:54:07.313812 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:54:07.334283 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:54:07.380644 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 29 11:54:07.398961 ignition[931]: INFO : Ignition 2.19.0 Jan 29 11:54:07.398961 ignition[931]: INFO : Stage: mount Jan 29 11:54:07.400942 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:54:07.400942 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:54:07.400942 ignition[931]: INFO : mount: mount passed Jan 29 11:54:07.400942 ignition[931]: INFO : Ignition finished successfully Jan 29 11:54:07.407113 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:54:07.418892 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:54:07.427638 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:54:07.441805 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939) Jan 29 11:54:07.444259 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:54:07.444334 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:54:07.444351 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:54:07.448811 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:54:07.451193 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:54:07.487750 ignition[956]: INFO : Ignition 2.19.0
Jan 29 11:54:07.487750 ignition[956]: INFO : Stage: files
Jan 29 11:54:07.495432 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:54:07.495432 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:54:07.498486 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:54:07.500693 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:54:07.500693 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:54:07.504745 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:54:07.506201 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:54:07.506201 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:54:07.505428 unknown[956]: wrote ssh authorized keys file for user: core
Jan 29 11:54:07.510209 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 11:54:07.510209 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 11:54:07.561145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:54:07.812673 systemd-networkd[784]: eth0: Gained IPv6LL
Jan 29 11:54:07.817530 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 11:54:07.819875 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:54:07.821893 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:54:07.824350 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:54:07.826350 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:54:07.828198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:54:07.830219 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:54:07.832184 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:54:07.834245 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:54:07.836486 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:54:07.838542 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:54:07.840501 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:54:07.843546 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:54:07.846419 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:54:07.848883 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 29 11:54:08.338235 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 11:54:08.770614 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:54:08.770614 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 11:54:08.790028 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:54:08.792273 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:54:08.792273 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 11:54:08.792273 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 29 11:54:08.796688 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:54:08.798609 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:54:08.798609 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 29 11:54:08.801803 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:54:08.844464 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:54:08.850934 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:54:08.852642 ignition[956]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:54:08.852642 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:54:08.852642 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:54:08.852642 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:54:08.852642 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:54:08.852642 ignition[956]: INFO : files: files passed
Jan 29 11:54:08.852642 ignition[956]: INFO : Ignition finished successfully
Jan 29 11:54:08.876525 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:54:08.887180 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:54:08.888965 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:54:08.897452 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:54:08.898812 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:54:08.903224 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:54:08.907704 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:54:08.907704 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:54:08.911264 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:54:08.915442 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
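The Ignition entries above follow a regular pattern: each operation logs `op(<id>): [started] …` and later a matching `op(<id>): [finished] …`. A minimal sketch of a checker built on that pattern (a hypothetical helper, not part of Ignition or Flatcar) that reports operations a truncated log never saw finish:

```python
import re

# Matches the innermost "op(<hex id>): [started|finished]" marker in an
# Ignition journal line, e.g.:
#   ignition[956]: INFO : files: ensureUsers: op(1): [started] ...
OP_RE = re.compile(r'ignition\[\d+\]: \w+ : .*?op\(([0-9a-f]+)\): \[(started|finished)\]')

def unfinished_ops(log_lines):
    """Return the set of op ids that logged [started] but no [finished]."""
    started, finished = set(), set()
    for line in log_lines:
        m = OP_RE.search(line)
        if not m:
            continue  # not an Ignition op line (kernel, systemd, ...)
        op_id, state = m.groups()
        (started if state == "started" else finished).add(op_id)
    return started - finished
```

Run against a complete files-stage log like the one above, this returns an empty set; on a log cut off mid-download it would name the interrupted op.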
Jan 29 11:54:08.916288 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:54:08.938053 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:54:08.970580 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:54:08.970804 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:54:08.972320 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:54:08.974716 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:54:08.975262 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:54:08.976431 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:54:09.001687 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:54:09.010023 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:54:09.024767 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:54:09.025222 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:54:09.027955 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:54:09.028450 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:54:09.028659 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:54:09.032553 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:54:09.033122 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:54:09.033484 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:54:09.033886 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:54:09.034416 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:54:09.034829 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:54:09.035355 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:54:09.035801 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:54:09.036326 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:54:09.036682 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:54:09.037214 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:54:09.037358 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:54:09.057815 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:54:09.058374 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:54:09.060618 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:54:09.062730 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:54:09.063352 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:54:09.063537 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:54:09.067612 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:54:09.067814 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:54:09.070268 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:54:09.072416 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:54:09.077918 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:54:09.078618 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:54:09.079154 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:54:09.079553 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:54:09.079724 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:54:09.085826 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:54:09.085984 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:54:09.087860 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:54:09.088041 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:54:09.089709 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:54:09.089897 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:54:09.109233 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:54:09.111516 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:54:09.112058 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:54:09.112190 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:54:09.114072 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:54:09.114239 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:54:09.122133 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:54:09.122299 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:54:09.136953 ignition[1011]: INFO : Ignition 2.19.0
Jan 29 11:54:09.136953 ignition[1011]: INFO : Stage: umount
Jan 29 11:54:09.139126 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:54:09.139126 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:54:09.142046 ignition[1011]: INFO : umount: umount passed
Jan 29 11:54:09.143057 ignition[1011]: INFO : Ignition finished successfully
Jan 29 11:54:09.146438 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:54:09.146599 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:54:09.148580 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:54:09.149035 systemd[1]: Stopped target network.target - Network.
Jan 29 11:54:09.150055 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:54:09.150114 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:54:09.150454 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:54:09.150523 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:54:09.151048 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:54:09.151111 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:54:09.151431 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:54:09.151496 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:54:09.153335 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:54:09.161852 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:54:09.169947 systemd-networkd[784]: eth0: DHCPv6 lease lost
Jan 29 11:54:09.174156 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:54:09.174373 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:54:09.175771 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:54:09.175852 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:54:09.209018 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:54:09.210097 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:54:09.210185 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:54:09.212508 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:54:09.213526 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:54:09.213719 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:54:09.219818 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:54:09.219931 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:54:09.220651 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:54:09.220715 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:54:09.221429 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:54:09.221490 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:54:09.277358 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:54:09.277649 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:54:09.285308 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:54:09.285423 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:54:09.286200 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:54:09.286282 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:54:09.288279 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:54:09.288353 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:54:09.291706 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:54:09.291830 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:54:09.292635 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:54:09.292706 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:54:09.295282 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:54:09.302084 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:54:09.302167 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:54:09.306131 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 11:54:09.306219 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:54:09.320517 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:54:09.320608 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:54:09.323503 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:54:09.323572 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:54:09.326831 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:54:09.326983 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:54:09.329106 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:54:09.329243 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:54:09.586073 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:54:09.586263 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:54:09.587438 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:54:09.589612 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:54:09.589698 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:54:09.612119 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:54:09.620705 systemd[1]: Switching root.
Jan 29 11:54:09.657900 systemd-journald[192]: Journal stopped
Jan 29 11:54:11.094056 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:54:11.094146 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 11:54:11.094168 kernel: SELinux:  policy capability open_perms=1
Jan 29 11:54:11.094179 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 11:54:11.094191 kernel: SELinux:  policy capability always_check_network=0
Jan 29 11:54:11.094202 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 11:54:11.094214 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 11:54:11.094231 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jan 29 11:54:11.094253 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jan 29 11:54:11.094265 kernel: audit: type=1403 audit(1738151650.063:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:54:11.094278 systemd[1]: Successfully loaded SELinux policy in 46.102ms.
Jan 29 11:54:11.094303 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.039ms.
Jan 29 11:54:11.094317 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:54:11.094329 systemd[1]: Detected virtualization kvm.
Jan 29 11:54:11.094341 systemd[1]: Detected architecture x86-64.
Jan 29 11:54:11.094353 systemd[1]: Detected first boot.
Jan 29 11:54:11.094365 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:54:11.094383 zram_generator::config[1055]: No configuration found.
Jan 29 11:54:11.094396 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:54:11.094409 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:54:11.094422 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:54:11.094439 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:54:11.094458 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:54:11.094470 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:54:11.094482 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:54:11.094500 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:54:11.094512 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:54:11.094525 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:54:11.094537 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:54:11.094549 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:54:11.094561 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:54:11.094573 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:54:11.094594 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:54:11.094607 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:54:11.094626 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:54:11.094639 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:54:11.094663 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 11:54:11.094697 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:54:11.094727 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:54:11.094755 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:54:11.094841 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:54:11.094880 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:54:11.094912 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:54:11.094931 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:54:11.094947 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:54:11.094962 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:54:11.094974 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:54:11.094986 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:54:11.094998 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:54:11.095010 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:54:11.095022 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:54:11.095042 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:54:11.095061 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:54:11.095073 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:54:11.095085 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:54:11.095097 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:54:11.095110 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:54:11.095122 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:54:11.095134 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:54:11.095153 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:54:11.095165 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:54:11.095177 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:54:11.095189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:54:11.095207 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:54:11.095220 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:54:11.095232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:54:11.095244 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:54:11.095257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:54:11.095274 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:54:11.095287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:54:11.095299 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:54:11.095312 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:54:11.095324 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:54:11.095339 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:54:11.095352 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:54:11.095364 kernel: fuse: init (API version 7.39)
Jan 29 11:54:11.095380 kernel: loop: module loaded
Jan 29 11:54:11.095392 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:54:11.095404 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:54:11.095416 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:54:11.095429 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:54:11.095468 systemd-journald[1122]: Collecting audit messages is disabled.
Jan 29 11:54:11.095491 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:54:11.095503 systemd-journald[1122]: Journal started
Jan 29 11:54:11.095535 systemd-journald[1122]: Runtime Journal (/run/log/journal/2ed92db7716a43b985ec1c3ef8c5c027) is 6.0M, max 48.3M, 42.2M free.
Jan 29 11:54:10.806575 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:54:10.823518 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 11:54:10.824037 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:54:11.101232 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:54:11.101279 systemd[1]: Stopped verity-setup.service.
Jan 29 11:54:11.103086 kernel: ACPI: bus type drm_connector registered
Jan 29 11:54:11.103170 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:54:11.112194 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:54:11.113150 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:54:11.114710 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:54:11.116276 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:54:11.118066 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:54:11.119925 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:54:11.121716 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:54:11.123446 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:54:11.132140 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:54:11.132419 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:54:11.134566 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:54:11.134849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:54:11.136804 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:54:11.137065 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:54:11.139081 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:54:11.139328 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:54:11.141480 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:54:11.141743 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:54:11.143742 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:54:11.144005 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:54:11.146090 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:54:11.148128 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:54:11.150305 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:54:11.155506 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:54:11.177197 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:54:11.190040 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:54:11.194080 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:54:11.195665 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:54:11.195725 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:54:11.198673 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:54:11.202109 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:54:11.206373 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:54:11.208230 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:54:11.211256 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:54:11.215110 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:54:11.216870 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:54:11.220310 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:54:11.222172 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:54:11.224312 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:54:11.240213 systemd-journald[1122]: Time spent on flushing to /var/log/journal/2ed92db7716a43b985ec1c3ef8c5c027 is 24.334ms for 994 entries.
Jan 29 11:54:11.240213 systemd-journald[1122]: System Journal (/var/log/journal/2ed92db7716a43b985ec1c3ef8c5c027) is 8.0M, max 195.6M, 187.6M free.
Jan 29 11:54:11.306294 systemd-journald[1122]: Received client request to flush runtime journal.
Jan 29 11:54:11.306358 kernel: loop0: detected capacity change from 0 to 142488
Jan 29 11:54:11.230926 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:54:11.233887 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:54:11.238355 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:54:11.270305 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:54:11.272349 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:54:11.274509 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:54:11.276549 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:54:11.285711 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:54:11.298488 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:54:11.303265 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:54:11.306321 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:54:11.310479 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:54:11.327859 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:54:11.333947 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 11:54:11.336153 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Jan 29 11:54:11.336217 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Jan 29 11:54:11.343259 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:54:11.348052 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:54:11.350643 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:54:11.359868 kernel: loop1: detected capacity change from 0 to 205544
Jan 29 11:54:11.362147 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:54:11.415838 kernel: loop2: detected capacity change from 0 to 140768
Jan 29 11:54:11.421103 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:54:11.442548 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:54:11.472599 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jan 29 11:54:11.473128 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jan 29 11:54:11.482472 kernel: loop3: detected capacity change from 0 to 142488
Jan 29 11:54:11.482498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:54:11.494871 kernel: loop4: detected capacity change from 0 to 205544
Jan 29 11:54:11.550835 kernel: loop5: detected capacity change from 0 to 140768
Jan 29 11:54:11.563246 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 11:54:11.564174 (sd-merge)[1197]: Merged extensions into '/usr'.
Jan 29 11:54:11.574123 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:54:11.574333 systemd[1]: Reloading...
Jan 29 11:54:11.680813 zram_generator::config[1227]: No configuration found.
Jan 29 11:54:11.866731 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:54:11.888797 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:54:11.924256 systemd[1]: Reloading finished in 349 ms.
Jan 29 11:54:11.960750 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:54:11.962433 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:54:11.981997 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:54:11.984329 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:54:11.992222 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:54:11.992233 systemd[1]: Reloading...
Jan 29 11:54:12.021284 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:54:12.022070 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:54:12.023234 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:54:12.026132 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Jan 29 11:54:12.026305 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Jan 29 11:54:12.031855 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:54:12.033812 systemd-tmpfiles[1263]: Skipping /boot
Jan 29 11:54:12.068826 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:54:12.068843 systemd-tmpfiles[1263]: Skipping /boot
Jan 29 11:54:12.079927 zram_generator::config[1285]: No configuration found.
Jan 29 11:54:12.211879 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:54:12.264485 systemd[1]: Reloading finished in 271 ms.
Jan 29 11:54:12.298375 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:54:12.307739 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 11:54:12.311256 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:54:12.314097 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:54:12.321247 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:54:12.324381 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:54:12.328405 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:54:12.329128 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:54:12.333906 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:54:12.337579 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:54:12.341899 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:54:12.343359 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:54:12.343514 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:54:12.358226 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:54:12.367694 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:54:12.368338 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:54:12.372400 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:54:12.372648 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:54:12.375581 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:54:12.375948 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:54:12.378291 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:54:12.384305 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:54:12.393198 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:54:12.410324 augenrules[1355]: No rules
Jan 29 11:54:12.412682 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 11:54:12.421138 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:54:12.421459 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:54:12.434225 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:54:12.439122 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:54:12.442290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:54:12.443867 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:54:12.450856 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:54:12.455040 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:54:12.456423 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:54:12.457859 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:54:12.460423 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:54:12.462670 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:54:12.463086 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:54:12.465232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:54:12.465489 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:54:12.467488 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:54:12.467765 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:54:12.476087 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:54:12.483761 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:54:12.485603 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:54:12.485811 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:54:12.490930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:54:12.493929 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:54:12.497898 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:54:12.501205 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:54:12.503170 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:54:12.508557 systemd-udevd[1371]: Using default interface naming scheme 'v255'.
Jan 29 11:54:12.508870 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 11:54:12.510166 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:54:12.510213 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:54:12.511086 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:54:12.511312 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:54:12.513188 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:54:12.513405 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:54:12.515461 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:54:12.515689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:54:12.518315 systemd-resolved[1331]: Positive Trust Anchors:
Jan 29 11:54:12.518658 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:54:12.518737 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:54:12.520487 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:54:12.520714 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:54:12.523280 systemd-resolved[1331]: Defaulting to hostname 'linux'.
Jan 29 11:54:12.525996 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:54:12.528972 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:54:12.530486 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:54:12.530575 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:54:12.535010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:54:12.539867 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:54:12.592817 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1404)
Jan 29 11:54:12.680907 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:54:12.683145 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 11:54:12.683197 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:54:12.699661 systemd-networkd[1393]: lo: Link UP
Jan 29 11:54:12.699677 systemd-networkd[1393]: lo: Gained carrier
Jan 29 11:54:12.703558 systemd-networkd[1393]: Enumeration completed
Jan 29 11:54:12.703657 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:54:12.704031 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:54:12.704036 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:54:12.705315 systemd[1]: Reached target network.target - Network.
Jan 29 11:54:12.705712 systemd-networkd[1393]: eth0: Link UP
Jan 29 11:54:12.705716 systemd-networkd[1393]: eth0: Gained carrier
Jan 29 11:54:12.705729 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:54:12.713073 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:54:12.722930 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:54:12.723834 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection.
Jan 29 11:54:13.869981 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 11:54:13.870045 systemd-timesyncd[1383]: Initial clock synchronization to Wed 2025-01-29 11:54:13.869876 UTC.
Jan 29 11:54:13.870450 systemd-resolved[1331]: Clock change detected. Flushing caches.
Jan 29 11:54:13.887916 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:54:13.897297 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 29 11:54:13.897568 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:54:13.904265 kernel: ACPI: button: Power Button [PWRF]
Jan 29 11:54:13.916432 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:54:13.919395 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 29 11:54:13.930182 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 29 11:54:13.931313 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 11:54:13.931531 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 11:54:13.946525 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 11:54:13.957091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:54:13.966824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:54:13.967134 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:54:13.968259 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 11:54:13.992636 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:54:14.049735 kernel: kvm_amd: TSC scaling supported
Jan 29 11:54:14.049811 kernel: kvm_amd: Nested Virtualization enabled
Jan 29 11:54:14.049830 kernel: kvm_amd: Nested Paging enabled
Jan 29 11:54:14.050307 kernel: kvm_amd: LBR virtualization supported
Jan 29 11:54:14.051654 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 29 11:54:14.051689 kernel: kvm_amd: Virtual GIF supported
Jan 29 11:54:14.073274 kernel: EDAC MC: Ver: 3.0.0
Jan 29 11:54:14.104101 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:54:14.112453 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:54:14.123419 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:54:14.137233 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:54:14.176368 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:54:14.178146 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:54:14.179431 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:54:14.180760 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:54:14.182322 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:54:14.183997 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:54:14.185441 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:54:14.186884 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:54:14.188344 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:54:14.188397 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:54:14.189427 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:54:14.191098 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:54:14.193856 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:54:14.208474 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:54:14.210968 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:54:14.212708 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:54:14.214019 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:54:14.215135 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:54:14.215599 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:54:14.215624 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:54:14.216666 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:54:14.219001 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:54:14.223297 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:54:14.223724 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:54:14.227371 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:54:14.228550 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:54:14.230109 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:54:14.235568 jq[1442]: false
Jan 29 11:54:14.233529 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 11:54:14.235793 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:54:14.240691 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:54:14.247533 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:54:14.249288 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:54:14.249818 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:54:14.250680 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 11:54:14.255371 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 11:54:14.264781 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 11:54:14.265102 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 11:54:14.265625 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 11:54:14.267997 jq[1451]: true
Jan 29 11:54:14.269710 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 11:54:14.271505 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 11:54:14.287705 extend-filesystems[1443]: Found loop3
Jan 29 11:54:14.287705 extend-filesystems[1443]: Found loop4
Jan 29 11:54:14.287705 extend-filesystems[1443]: Found loop5
Jan 29 11:54:14.287705 extend-filesystems[1443]: Found sr0
Jan 29 11:54:14.287705 extend-filesystems[1443]: Found vda
Jan 29 11:54:14.287705 extend-filesystems[1443]: Found vda1
Jan 29 11:54:14.287705 extend-filesystems[1443]: Found vda2
Jan 29 11:54:14.287705 extend-filesystems[1443]: Found vda3
Jan 29 11:54:14.287705 extend-filesystems[1443]: Found usr
Jan 29 11:54:14.287705 extend-filesystems[1443]: Found vda4
Jan 29 11:54:14.287705 extend-filesystems[1443]: Found vda6
Jan 29 11:54:14.287705 extend-filesystems[1443]: Found vda7
Jan 29 11:54:14.287705 extend-filesystems[1443]: Found vda9
Jan 29 11:54:14.287705 extend-filesystems[1443]: Checking size of /dev/vda9
Jan 29 11:54:14.328540 extend-filesystems[1443]: Resized partition /dev/vda9
Jan 29 11:54:14.330559 tar[1456]: linux-amd64/helm
Jan 29 11:54:14.331601 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 29 11:54:14.332138 update_engine[1450]: I20250129 11:54:14.302320 1450 main.cc:92] Flatcar Update Engine starting
Jan 29 11:54:14.332138 update_engine[1450]: I20250129 11:54:14.305199 1450 update_check_scheduler.cc:74] Next update check in 6m55s
Jan 29 11:54:14.297579 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 11:54:14.292189 dbus-daemon[1441]: [system] SELinux support is enabled
Jan 29 11:54:14.335520 extend-filesystems[1477]: resize2fs 1.47.1 (20-May-2024)
Jan 29 11:54:14.297968 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 11:54:14.311195 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 11:54:14.340647 jq[1458]: true
Jan 29 11:54:14.311527 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 11:54:14.313904 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 11:54:14.313934 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 11:54:14.316701 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 11:54:14.316731 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 11:54:14.328551 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 29 11:54:14.348302 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1396)
Jan 29 11:54:14.328576 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 11:54:14.330184 systemd-logind[1449]: New seat seat0.
Jan 29 11:54:14.331407 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 11:54:14.346738 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 11:54:14.349519 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 11:54:14.435267 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 29 11:54:14.472785 extend-filesystems[1477]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 11:54:14.472785 extend-filesystems[1477]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 11:54:14.472785 extend-filesystems[1477]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 29 11:54:14.472533 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 11:54:14.500910 extend-filesystems[1443]: Resized filesystem in /dev/vda9
Jan 29 11:54:14.474365 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 11:54:14.541289 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 11:54:14.636133 bash[1497]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:54:14.639035 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 11:54:14.665895 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 11:54:14.876252 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 11:54:14.909842 systemd-networkd[1393]: eth0: Gained IPv6LL
Jan 29 11:54:14.920827 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 11:54:14.928820 containerd[1459]: time="2025-01-29T11:54:14.928720444Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 29 11:54:14.933789 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 11:54:14.935546 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 11:54:14.941841 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 11:54:14.945551 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 29 11:54:14.957580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:54:14.962167 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 11:54:14.998109 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 11:54:14.999222 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 11:54:15.016474 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 11:54:15.027499 containerd[1459]: time="2025-01-29T11:54:15.026314553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:54:15.034667 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 29 11:54:15.034984 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 29 11:54:15.038021 containerd[1459]: time="2025-01-29T11:54:15.036653318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:54:15.038021 containerd[1459]: time="2025-01-29T11:54:15.036719021Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 11:54:15.038021 containerd[1459]: time="2025-01-29T11:54:15.036770388Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 11:54:15.038021 containerd[1459]: time="2025-01-29T11:54:15.037095227Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 11:54:15.038021 containerd[1459]: time="2025-01-29T11:54:15.037121185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 11:54:15.038021 containerd[1459]: time="2025-01-29T11:54:15.037220291Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:54:15.038021 containerd[1459]: time="2025-01-29T11:54:15.037255768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:54:15.038021 containerd[1459]: time="2025-01-29T11:54:15.037548767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:54:15.038021 containerd[1459]: time="2025-01-29T11:54:15.037571450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 11:54:15.038021 containerd[1459]: time="2025-01-29T11:54:15.037588802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:54:15.038021 containerd[1459]: time="2025-01-29T11:54:15.037601756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 11:54:15.036978 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 11:54:15.040719 containerd[1459]: time="2025-01-29T11:54:15.040655863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:54:15.043515 containerd[1459]: time="2025-01-29T11:54:15.041382505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:54:15.043515 containerd[1459]: time="2025-01-29T11:54:15.041568223Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:54:15.043515 containerd[1459]: time="2025-01-29T11:54:15.041591938Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 11:54:15.043515 containerd[1459]: time="2025-01-29T11:54:15.041827940Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 11:54:15.043515 containerd[1459]: time="2025-01-29T11:54:15.042033456Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 11:54:15.058360 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 11:54:15.094335 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 11:54:15.097787 containerd[1459]: time="2025-01-29T11:54:15.097737932Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 11:54:15.098427 containerd[1459]: time="2025-01-29T11:54:15.098067399Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 11:54:15.098427 containerd[1459]: time="2025-01-29T11:54:15.098112855Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 11:54:15.098427 containerd[1459]: time="2025-01-29T11:54:15.098141568Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 11:54:15.098427 containerd[1459]: time="2025-01-29T11:54:15.098162979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 11:54:15.098983 containerd[1459]: time="2025-01-29T11:54:15.098466528Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 11:54:15.098983 containerd[1459]: time="2025-01-29T11:54:15.098903838Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 11:54:15.099155 containerd[1459]: time="2025-01-29T11:54:15.099119883Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 11:54:15.099194 containerd[1459]: time="2025-01-29T11:54:15.099156121Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 11:54:15.099194 containerd[1459]: time="2025-01-29T11:54:15.099181809Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 11:54:15.099263 containerd[1459]: time="2025-01-29T11:54:15.099202868Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 11:54:15.099263 containerd[1459]: time="2025-01-29T11:54:15.099223257Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 11:54:15.099327 containerd[1459]: time="2025-01-29T11:54:15.099260767Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 11:54:15.099327 containerd[1459]: time="2025-01-29T11:54:15.099286886Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 11:54:15.099327 containerd[1459]: time="2025-01-29T11:54:15.099307855Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 11:54:15.099327 containerd[1459]: time="2025-01-29T11:54:15.099326911Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 11:54:15.099424 containerd[1459]: time="2025-01-29T11:54:15.099344033Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 11:54:15.099424 containerd[1459]: time="2025-01-29T11:54:15.099366726Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 11:54:15.099424 containerd[1459]: time="2025-01-29T11:54:15.099401220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.099424 containerd[1459]: time="2025-01-29T11:54:15.099421498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099439903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099489947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099517699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099538187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099557533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099579144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099602758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099623677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099648043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099660136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099673100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099691715Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099716762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099731720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.100480 containerd[1459]: time="2025-01-29T11:54:15.099744875Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 11:54:15.101198 containerd[1459]: time="2025-01-29T11:54:15.099826979Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 11:54:15.101198 containerd[1459]: time="2025-01-29T11:54:15.099857476Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 11:54:15.101198 containerd[1459]: time="2025-01-29T11:54:15.099871893Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 11:54:15.101198 containerd[1459]: time="2025-01-29T11:54:15.099885128Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 11:54:15.101198 containerd[1459]: time="2025-01-29T11:54:15.099895758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 11:54:15.101198 containerd[1459]: time="2025-01-29T11:54:15.099912549Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..."
type=io.containerd.nri.v1 Jan 29 11:54:15.101198 containerd[1459]: time="2025-01-29T11:54:15.099938598Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:54:15.101198 containerd[1459]: time="2025-01-29T11:54:15.099961511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:54:15.102435 containerd[1459]: time="2025-01-29T11:54:15.100468051Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:54:15.102435 containerd[1459]: time="2025-01-29T11:54:15.100544925Z" level=info msg="Connect containerd service" Jan 29 11:54:15.102435 containerd[1459]: time="2025-01-29T11:54:15.100604667Z" level=info msg="using legacy CRI server" Jan 29 11:54:15.102435 containerd[1459]: time="2025-01-29T11:54:15.100614375Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:54:15.102435 containerd[1459]: time="2025-01-29T11:54:15.100802858Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:54:15.102435 containerd[1459]: time="2025-01-29T11:54:15.101575547Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:54:15.102435 containerd[1459]: time="2025-01-29T11:54:15.102151657Z" level=info msg="Start subscribing containerd event" Jan 29 
11:54:15.102435 containerd[1459]: time="2025-01-29T11:54:15.102327968Z" level=info msg="Start recovering state" Jan 29 11:54:15.102435 containerd[1459]: time="2025-01-29T11:54:15.102252687Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:54:15.102847 containerd[1459]: time="2025-01-29T11:54:15.102495242Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:54:15.106867 containerd[1459]: time="2025-01-29T11:54:15.105067465Z" level=info msg="Start event monitor" Jan 29 11:54:15.106867 containerd[1459]: time="2025-01-29T11:54:15.105111056Z" level=info msg="Start snapshots syncer" Jan 29 11:54:15.106867 containerd[1459]: time="2025-01-29T11:54:15.105138217Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:54:15.106867 containerd[1459]: time="2025-01-29T11:54:15.105156522Z" level=info msg="Start streaming server" Jan 29 11:54:15.106867 containerd[1459]: time="2025-01-29T11:54:15.105310671Z" level=info msg="containerd successfully booted in 0.178150s" Jan 29 11:54:15.106572 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:54:15.120816 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:54:15.122442 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:54:15.123925 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:54:15.302817 tar[1456]: linux-amd64/LICENSE Jan 29 11:54:15.303069 tar[1456]: linux-amd64/README.md Jan 29 11:54:15.324479 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:54:16.585178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:16.586928 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:54:16.588360 systemd[1]: Startup finished in 1.190s (kernel) + 6.310s (initrd) + 5.423s (userspace) = 12.925s. 
Jan 29 11:54:16.591233 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:54:17.163814 kubelet[1555]: E0129 11:54:17.163690 1555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:54:17.167805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:54:17.168025 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:54:17.168391 systemd[1]: kubelet.service: Consumed 1.983s CPU time. Jan 29 11:54:18.688466 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:54:18.690067 systemd[1]: Started sshd@0-10.0.0.99:22-10.0.0.1:54488.service - OpenSSH per-connection server daemon (10.0.0.1:54488). Jan 29 11:54:18.745054 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 54488 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:54:18.747803 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:18.758678 systemd-logind[1449]: New session 1 of user core. Jan 29 11:54:18.760257 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:54:18.776584 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:54:18.791132 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:54:18.794091 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 29 11:54:18.803640 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:54:18.946166 systemd[1572]: Queued start job for default target default.target. Jan 29 11:54:18.955749 systemd[1572]: Created slice app.slice - User Application Slice. Jan 29 11:54:18.955779 systemd[1572]: Reached target paths.target - Paths. Jan 29 11:54:18.955794 systemd[1572]: Reached target timers.target - Timers. Jan 29 11:54:18.957676 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:54:18.971640 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:54:18.971900 systemd[1572]: Reached target sockets.target - Sockets. Jan 29 11:54:18.971931 systemd[1572]: Reached target basic.target - Basic System. Jan 29 11:54:18.972010 systemd[1572]: Reached target default.target - Main User Target. Jan 29 11:54:18.972065 systemd[1572]: Startup finished in 161ms. Jan 29 11:54:18.972219 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:54:18.974452 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:54:19.040655 systemd[1]: Started sshd@1-10.0.0.99:22-10.0.0.1:54496.service - OpenSSH per-connection server daemon (10.0.0.1:54496). Jan 29 11:54:19.084068 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 54496 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:54:19.086050 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:19.091308 systemd-logind[1449]: New session 2 of user core. Jan 29 11:54:19.101426 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:54:19.162069 sshd[1583]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:19.175509 systemd[1]: sshd@1-10.0.0.99:22-10.0.0.1:54496.service: Deactivated successfully. Jan 29 11:54:19.177544 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 29 11:54:19.179294 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:54:19.188514 systemd[1]: Started sshd@2-10.0.0.99:22-10.0.0.1:54512.service - OpenSSH per-connection server daemon (10.0.0.1:54512). Jan 29 11:54:19.189630 systemd-logind[1449]: Removed session 2. Jan 29 11:54:19.226422 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 54512 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:54:19.228904 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:19.234689 systemd-logind[1449]: New session 3 of user core. Jan 29 11:54:19.246526 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:54:19.298863 sshd[1590]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:19.310528 systemd[1]: sshd@2-10.0.0.99:22-10.0.0.1:54512.service: Deactivated successfully. Jan 29 11:54:19.312481 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:54:19.314375 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:54:19.327747 systemd[1]: Started sshd@3-10.0.0.99:22-10.0.0.1:54518.service - OpenSSH per-connection server daemon (10.0.0.1:54518). Jan 29 11:54:19.329037 systemd-logind[1449]: Removed session 3. Jan 29 11:54:19.365432 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 54518 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:54:19.368020 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:19.373477 systemd-logind[1449]: New session 4 of user core. Jan 29 11:54:19.383604 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:54:19.442826 sshd[1597]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:19.456795 systemd[1]: sshd@3-10.0.0.99:22-10.0.0.1:54518.service: Deactivated successfully. Jan 29 11:54:19.459179 systemd[1]: session-4.scope: Deactivated successfully. 
Jan 29 11:54:19.461208 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:54:19.473791 systemd[1]: Started sshd@4-10.0.0.99:22-10.0.0.1:54534.service - OpenSSH per-connection server daemon (10.0.0.1:54534). Jan 29 11:54:19.475154 systemd-logind[1449]: Removed session 4. Jan 29 11:54:19.509213 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 54534 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:54:19.511340 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:19.515648 systemd-logind[1449]: New session 5 of user core. Jan 29 11:54:19.525488 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:54:19.587232 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:54:19.587683 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:54:20.259873 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:54:20.260500 (dockerd)[1625]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:54:21.213681 dockerd[1625]: time="2025-01-29T11:54:21.213505124Z" level=info msg="Starting up" Jan 29 11:54:21.755622 dockerd[1625]: time="2025-01-29T11:54:21.755539027Z" level=info msg="Loading containers: start." Jan 29 11:54:21.910289 kernel: Initializing XFRM netlink socket Jan 29 11:54:22.009020 systemd-networkd[1393]: docker0: Link UP Jan 29 11:54:22.032506 dockerd[1625]: time="2025-01-29T11:54:22.032464682Z" level=info msg="Loading containers: done." Jan 29 11:54:22.053595 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1804621782-merged.mount: Deactivated successfully. 
Jan 29 11:54:22.055869 dockerd[1625]: time="2025-01-29T11:54:22.055795628Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:54:22.055998 dockerd[1625]: time="2025-01-29T11:54:22.055960507Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 11:54:22.056144 dockerd[1625]: time="2025-01-29T11:54:22.056118594Z" level=info msg="Daemon has completed initialization" Jan 29 11:54:22.106098 dockerd[1625]: time="2025-01-29T11:54:22.105955007Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:54:22.106163 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:54:23.415308 containerd[1459]: time="2025-01-29T11:54:23.415255295Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 11:54:24.523962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4207240314.mount: Deactivated successfully. 
Jan 29 11:54:25.482997 containerd[1459]: time="2025-01-29T11:54:25.482919244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:25.484066 containerd[1459]: time="2025-01-29T11:54:25.484012854Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 29 11:54:25.485683 containerd[1459]: time="2025-01-29T11:54:25.485632902Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:25.489030 containerd[1459]: time="2025-01-29T11:54:25.488952346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:25.490376 containerd[1459]: time="2025-01-29T11:54:25.490343294Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.075035271s" Jan 29 11:54:25.490440 containerd[1459]: time="2025-01-29T11:54:25.490378460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 11:54:25.492287 containerd[1459]: time="2025-01-29T11:54:25.492233318Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 11:54:27.418602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 29 11:54:27.428651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:54:27.783411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:27.789396 (kubelet)[1838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:54:27.893842 kubelet[1838]: E0129 11:54:27.893712 1838 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:54:27.902455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:54:27.902788 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:54:28.158678 containerd[1459]: time="2025-01-29T11:54:28.158469780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:28.159361 containerd[1459]: time="2025-01-29T11:54:28.159298725Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 29 11:54:28.160597 containerd[1459]: time="2025-01-29T11:54:28.160556954Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:28.163712 containerd[1459]: time="2025-01-29T11:54:28.163650003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:28.165006 containerd[1459]: time="2025-01-29T11:54:28.164954479Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 2.672656951s" Jan 29 11:54:28.165074 containerd[1459]: time="2025-01-29T11:54:28.165005725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 11:54:28.165826 containerd[1459]: time="2025-01-29T11:54:28.165775018Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 11:54:30.100635 containerd[1459]: time="2025-01-29T11:54:30.100522839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:30.101483 containerd[1459]: time="2025-01-29T11:54:30.101422907Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 29 11:54:30.104467 containerd[1459]: time="2025-01-29T11:54:30.104420528Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:30.107668 containerd[1459]: time="2025-01-29T11:54:30.107631488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:30.108830 containerd[1459]: time="2025-01-29T11:54:30.108784480Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id 
\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.942972864s" Jan 29 11:54:30.108830 containerd[1459]: time="2025-01-29T11:54:30.108824956Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 11:54:30.109406 containerd[1459]: time="2025-01-29T11:54:30.109363696Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:54:32.161860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2144627270.mount: Deactivated successfully. Jan 29 11:54:33.270503 containerd[1459]: time="2025-01-29T11:54:33.270393045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:33.295079 containerd[1459]: time="2025-01-29T11:54:33.294946423Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 11:54:33.336045 containerd[1459]: time="2025-01-29T11:54:33.335958370Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:33.359120 containerd[1459]: time="2025-01-29T11:54:33.359029519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:33.359680 containerd[1459]: time="2025-01-29T11:54:33.359631888Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 3.250230361s" Jan 29 11:54:33.359728 containerd[1459]: time="2025-01-29T11:54:33.359682723Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 11:54:33.360354 containerd[1459]: time="2025-01-29T11:54:33.360325148Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:54:34.172198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1774389195.mount: Deactivated successfully. Jan 29 11:54:35.293133 containerd[1459]: time="2025-01-29T11:54:35.293057661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:35.294561 containerd[1459]: time="2025-01-29T11:54:35.294512970Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:54:35.296832 containerd[1459]: time="2025-01-29T11:54:35.296804717Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:35.300296 containerd[1459]: time="2025-01-29T11:54:35.300233275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:35.301525 containerd[1459]: time="2025-01-29T11:54:35.301459925Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.941102887s" Jan 29 11:54:35.301525 containerd[1459]: time="2025-01-29T11:54:35.301521350Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:54:35.302090 containerd[1459]: time="2025-01-29T11:54:35.302062635Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:54:36.272851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769346301.mount: Deactivated successfully. Jan 29 11:54:36.281338 containerd[1459]: time="2025-01-29T11:54:36.281261413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:36.282254 containerd[1459]: time="2025-01-29T11:54:36.282126014Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 11:54:36.283646 containerd[1459]: time="2025-01-29T11:54:36.283608073Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:36.286474 containerd[1459]: time="2025-01-29T11:54:36.286422440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:36.287476 containerd[1459]: time="2025-01-29T11:54:36.287396115Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 985.297853ms" Jan 29 
11:54:36.287544 containerd[1459]: time="2025-01-29T11:54:36.287474282Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 11:54:36.288204 containerd[1459]: time="2025-01-29T11:54:36.288127917Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 11:54:36.847083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376308841.mount: Deactivated successfully. Jan 29 11:54:38.152903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:54:38.163455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:54:38.404772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:38.411983 (kubelet)[1967]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:54:38.580634 kubelet[1967]: E0129 11:54:38.580502 1967 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:54:38.586519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:54:38.586917 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 11:54:39.704605 containerd[1459]: time="2025-01-29T11:54:39.704517787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:39.705633 containerd[1459]: time="2025-01-29T11:54:39.705556074Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 29 11:54:39.706825 containerd[1459]: time="2025-01-29T11:54:39.706782584Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:39.710325 containerd[1459]: time="2025-01-29T11:54:39.710289279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:39.711655 containerd[1459]: time="2025-01-29T11:54:39.711609835Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.423442233s" Jan 29 11:54:39.711655 containerd[1459]: time="2025-01-29T11:54:39.711648608Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 11:54:42.038093 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:42.047476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:54:42.073066 systemd[1]: Reloading requested from client PID 2006 ('systemctl') (unit session-5.scope)... Jan 29 11:54:42.073089 systemd[1]: Reloading... 
Jan 29 11:54:42.170403 zram_generator::config[2045]: No configuration found. Jan 29 11:54:42.418529 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:54:42.498998 systemd[1]: Reloading finished in 425 ms. Jan 29 11:54:42.557448 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:54:42.557561 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:54:42.557859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:42.559746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:54:42.740653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:42.746545 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:54:42.787428 kubelet[2094]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:54:42.787428 kubelet[2094]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:54:42.787428 kubelet[2094]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:54:42.787971 kubelet[2094]: I0129 11:54:42.787482 2094 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:54:42.971351 kubelet[2094]: I0129 11:54:42.971269 2094 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:54:42.971351 kubelet[2094]: I0129 11:54:42.971329 2094 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:54:42.971750 kubelet[2094]: I0129 11:54:42.971717 2094 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:54:43.003750 kubelet[2094]: I0129 11:54:43.003361 2094 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:54:43.003917 kubelet[2094]: E0129 11:54:43.003681 2094 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:43.016615 kubelet[2094]: E0129 11:54:43.016540 2094 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:54:43.016615 kubelet[2094]: I0129 11:54:43.016606 2094 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:54:43.025640 kubelet[2094]: I0129 11:54:43.025547 2094 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:54:43.027490 kubelet[2094]: I0129 11:54:43.027426 2094 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:54:43.027759 kubelet[2094]: I0129 11:54:43.027694 2094 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:54:43.028024 kubelet[2094]: I0129 11:54:43.027745 2094 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jan 29 11:54:43.028145 kubelet[2094]: I0129 11:54:43.028031 2094 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:54:43.028145 kubelet[2094]: I0129 11:54:43.028046 2094 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:54:43.028386 kubelet[2094]: I0129 11:54:43.028351 2094 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:54:43.030699 kubelet[2094]: I0129 11:54:43.030658 2094 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:54:43.030699 kubelet[2094]: I0129 11:54:43.030689 2094 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:54:43.030801 kubelet[2094]: I0129 11:54:43.030749 2094 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:54:43.030801 kubelet[2094]: I0129 11:54:43.030780 2094 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:54:43.037780 kubelet[2094]: W0129 11:54:43.037694 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jan 29 11:54:43.038049 kubelet[2094]: E0129 11:54:43.037992 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:43.039105 kubelet[2094]: I0129 11:54:43.039054 2094 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:54:43.039411 kubelet[2094]: W0129 11:54:43.039328 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jan 29 11:54:43.039411 kubelet[2094]: E0129 11:54:43.039392 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:43.041372 kubelet[2094]: I0129 11:54:43.041297 2094 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:54:43.041481 kubelet[2094]: W0129 11:54:43.041440 2094 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:54:43.042460 kubelet[2094]: I0129 11:54:43.042415 2094 server.go:1269] "Started kubelet" Jan 29 11:54:43.045274 kubelet[2094]: I0129 11:54:43.043052 2094 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:54:43.045274 kubelet[2094]: I0129 11:54:43.043654 2094 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:54:43.045274 kubelet[2094]: I0129 11:54:43.043653 2094 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:54:43.045274 kubelet[2094]: I0129 11:54:43.043991 2094 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:54:43.045274 kubelet[2094]: I0129 11:54:43.045195 2094 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:54:43.046165 kubelet[2094]: I0129 11:54:43.046126 2094 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:54:43.050857 kubelet[2094]: E0129 
11:54:43.050811 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:43.050956 kubelet[2094]: I0129 11:54:43.050885 2094 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:54:43.051193 kubelet[2094]: E0129 11:54:43.051168 2094 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:54:43.051529 kubelet[2094]: I0129 11:54:43.051494 2094 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:54:43.051663 kubelet[2094]: I0129 11:54:43.051635 2094 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:54:43.052772 kubelet[2094]: E0129 11:54:43.052320 2094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="200ms" Jan 29 11:54:43.052772 kubelet[2094]: W0129 11:54:43.052426 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jan 29 11:54:43.052772 kubelet[2094]: E0129 11:54:43.052486 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:43.054129 kubelet[2094]: I0129 11:54:43.053414 2094 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:54:43.054129 kubelet[2094]: I0129 11:54:43.053513 2094 factory.go:219] Registration of the crio container factory 
failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:54:43.055464 kubelet[2094]: E0129 11:54:43.052589 2094 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f27bf217d0802 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:54:43.042379778 +0000 UTC m=+0.291395438,LastTimestamp:2025-01-29 11:54:43.042379778 +0000 UTC m=+0.291395438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:54:43.056157 kubelet[2094]: I0129 11:54:43.055857 2094 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:54:43.070615 kubelet[2094]: I0129 11:54:43.070544 2094 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:54:43.073137 kubelet[2094]: I0129 11:54:43.072742 2094 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:54:43.073137 kubelet[2094]: I0129 11:54:43.072832 2094 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:54:43.073137 kubelet[2094]: I0129 11:54:43.073088 2094 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:54:43.073993 kubelet[2094]: W0129 11:54:43.073609 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jan 29 11:54:43.073993 kubelet[2094]: E0129 11:54:43.073686 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:43.074211 kubelet[2094]: E0129 11:54:43.074136 2094 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:54:43.075997 kubelet[2094]: I0129 11:54:43.075972 2094 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:54:43.075997 kubelet[2094]: I0129 11:54:43.075991 2094 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:54:43.076134 kubelet[2094]: I0129 11:54:43.076017 2094 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:54:43.152010 kubelet[2094]: E0129 11:54:43.151946 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:43.175404 kubelet[2094]: E0129 11:54:43.175322 2094 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:54:43.252798 kubelet[2094]: E0129 11:54:43.252718 2094 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:43.253205 kubelet[2094]: E0129 11:54:43.253150 2094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="400ms" Jan 29 11:54:43.353468 kubelet[2094]: E0129 11:54:43.353299 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:43.375489 kubelet[2094]: E0129 11:54:43.375432 2094 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:54:43.451070 kubelet[2094]: I0129 11:54:43.450997 2094 policy_none.go:49] "None policy: Start" Jan 29 11:54:43.451990 kubelet[2094]: I0129 11:54:43.451944 2094 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:54:43.451990 kubelet[2094]: I0129 11:54:43.451974 2094 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:54:43.454388 kubelet[2094]: E0129 11:54:43.454343 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:43.462487 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:54:43.482736 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:54:43.486180 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 11:54:43.496406 kubelet[2094]: I0129 11:54:43.496356 2094 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:54:43.496707 kubelet[2094]: I0129 11:54:43.496681 2094 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:54:43.496741 kubelet[2094]: I0129 11:54:43.496703 2094 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:54:43.497057 kubelet[2094]: I0129 11:54:43.497029 2094 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:54:43.498321 kubelet[2094]: E0129 11:54:43.498287 2094 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:54:43.598920 kubelet[2094]: I0129 11:54:43.598850 2094 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:54:43.599375 kubelet[2094]: E0129 11:54:43.599337 2094 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jan 29 11:54:43.654365 kubelet[2094]: E0129 11:54:43.654153 2094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="800ms" Jan 29 11:54:43.786707 systemd[1]: Created slice kubepods-burstable-pod2941555fc0d3065a629d98da5724b4ca.slice - libcontainer container kubepods-burstable-pod2941555fc0d3065a629d98da5724b4ca.slice. 
Jan 29 11:54:43.801603 kubelet[2094]: I0129 11:54:43.801548 2094 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:54:43.802081 kubelet[2094]: E0129 11:54:43.801907 2094 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jan 29 11:54:43.804327 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 29 11:54:43.817979 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. Jan 29 11:54:43.856539 kubelet[2094]: I0129 11:54:43.856465 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2941555fc0d3065a629d98da5724b4ca-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2941555fc0d3065a629d98da5724b4ca\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:43.856539 kubelet[2094]: I0129 11:54:43.856550 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:43.856752 kubelet[2094]: I0129 11:54:43.856658 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:43.856752 
kubelet[2094]: I0129 11:54:43.856720 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:43.856752 kubelet[2094]: I0129 11:54:43.856750 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:54:43.856842 kubelet[2094]: I0129 11:54:43.856773 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2941555fc0d3065a629d98da5724b4ca-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2941555fc0d3065a629d98da5724b4ca\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:43.856842 kubelet[2094]: I0129 11:54:43.856790 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2941555fc0d3065a629d98da5724b4ca-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2941555fc0d3065a629d98da5724b4ca\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:43.856842 kubelet[2094]: I0129 11:54:43.856805 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 
29 11:54:43.856842 kubelet[2094]: I0129 11:54:43.856821 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:44.101775 kubelet[2094]: E0129 11:54:44.101599 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:44.102709 containerd[1459]: time="2025-01-29T11:54:44.102604489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2941555fc0d3065a629d98da5724b4ca,Namespace:kube-system,Attempt:0,}" Jan 29 11:54:44.115848 kubelet[2094]: E0129 11:54:44.115787 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:44.116564 containerd[1459]: time="2025-01-29T11:54:44.116483813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 29 11:54:44.120824 kubelet[2094]: E0129 11:54:44.120763 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:44.121457 containerd[1459]: time="2025-01-29T11:54:44.121409879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 29 11:54:44.152062 kubelet[2094]: W0129 11:54:44.152009 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jan 29 11:54:44.152062 kubelet[2094]: E0129 11:54:44.152057 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:44.204145 kubelet[2094]: I0129 11:54:44.204048 2094 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:54:44.204682 kubelet[2094]: E0129 11:54:44.204577 2094 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jan 29 11:54:44.261761 kubelet[2094]: W0129 11:54:44.261652 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jan 29 11:54:44.261761 kubelet[2094]: E0129 11:54:44.261763 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:44.300912 kubelet[2094]: W0129 11:54:44.300788 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jan 29 11:54:44.300912 kubelet[2094]: E0129 
11:54:44.300896 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:44.455896 kubelet[2094]: E0129 11:54:44.455718 2094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="1.6s" Jan 29 11:54:44.573877 kubelet[2094]: W0129 11:54:44.573813 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jan 29 11:54:44.573877 kubelet[2094]: E0129 11:54:44.573875 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:44.633165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197878938.mount: Deactivated successfully. 
Jan 29 11:54:44.643687 containerd[1459]: time="2025-01-29T11:54:44.643623479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:54:44.644917 containerd[1459]: time="2025-01-29T11:54:44.644871810Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:54:44.645961 containerd[1459]: time="2025-01-29T11:54:44.645926177Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:54:44.646509 containerd[1459]: time="2025-01-29T11:54:44.646408461Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:54:44.647575 containerd[1459]: time="2025-01-29T11:54:44.647521308Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:54:44.648480 containerd[1459]: time="2025-01-29T11:54:44.648431334Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:54:44.649612 containerd[1459]: time="2025-01-29T11:54:44.649560411Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:54:44.654426 containerd[1459]: time="2025-01-29T11:54:44.654374217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:54:44.655532 
containerd[1459]: time="2025-01-29T11:54:44.655487985Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 552.743193ms" Jan 29 11:54:44.658853 containerd[1459]: time="2025-01-29T11:54:44.658813290Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 537.297183ms" Jan 29 11:54:44.659669 containerd[1459]: time="2025-01-29T11:54:44.659630653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 543.05595ms" Jan 29 11:54:44.762795 containerd[1459]: time="2025-01-29T11:54:44.761784872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:54:44.762795 containerd[1459]: time="2025-01-29T11:54:44.762619627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:54:44.762795 containerd[1459]: time="2025-01-29T11:54:44.762637571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:44.762996 containerd[1459]: time="2025-01-29T11:54:44.762749901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:44.866409 systemd[1]: Started cri-containerd-0f13c2b8fad10cec444228438a0470da2856761dfd8519e2430964aa3f228249.scope - libcontainer container 0f13c2b8fad10cec444228438a0470da2856761dfd8519e2430964aa3f228249. Jan 29 11:54:44.904227 containerd[1459]: time="2025-01-29T11:54:44.904168261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2941555fc0d3065a629d98da5724b4ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f13c2b8fad10cec444228438a0470da2856761dfd8519e2430964aa3f228249\"" Jan 29 11:54:44.905442 kubelet[2094]: E0129 11:54:44.905407 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:44.907738 containerd[1459]: time="2025-01-29T11:54:44.907704842Z" level=info msg="CreateContainer within sandbox \"0f13c2b8fad10cec444228438a0470da2856761dfd8519e2430964aa3f228249\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:54:45.006791 kubelet[2094]: I0129 11:54:45.006742 2094 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:54:45.007969 kubelet[2094]: E0129 11:54:45.007098 2094 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jan 29 11:54:45.017625 containerd[1459]: time="2025-01-29T11:54:45.017447549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:54:45.017787 containerd[1459]: time="2025-01-29T11:54:45.017532549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:54:45.017787 containerd[1459]: time="2025-01-29T11:54:45.017549460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:45.017787 containerd[1459]: time="2025-01-29T11:54:45.017703529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:45.019853 containerd[1459]: time="2025-01-29T11:54:45.019736982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:54:45.020760 containerd[1459]: time="2025-01-29T11:54:45.020706871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:54:45.021148 containerd[1459]: time="2025-01-29T11:54:45.020745383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:45.021148 containerd[1459]: time="2025-01-29T11:54:45.020935750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:45.028865 containerd[1459]: time="2025-01-29T11:54:45.028799113Z" level=info msg="CreateContainer within sandbox \"0f13c2b8fad10cec444228438a0470da2856761dfd8519e2430964aa3f228249\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3fc1f18106c1978383672892821310da6c41924c8ed497b0c5482fd69f5ae887\"" Jan 29 11:54:45.029824 containerd[1459]: time="2025-01-29T11:54:45.029790903Z" level=info msg="StartContainer for \"3fc1f18106c1978383672892821310da6c41924c8ed497b0c5482fd69f5ae887\"" Jan 29 11:54:45.044478 systemd[1]: Started cri-containerd-50eac3a6ff83fbff82123ff617459cd2e41b43cbc7ad0c4b82593f0534da3c82.scope - libcontainer container 50eac3a6ff83fbff82123ff617459cd2e41b43cbc7ad0c4b82593f0534da3c82. Jan 29 11:54:45.049577 systemd[1]: Started cri-containerd-dd5cc47e1c5ad8beefc6b6c37e995ced876c93ded119a615305536b0e5c0f2aa.scope - libcontainer container dd5cc47e1c5ad8beefc6b6c37e995ced876c93ded119a615305536b0e5c0f2aa. Jan 29 11:54:45.064402 kubelet[2094]: E0129 11:54:45.064328 2094 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:54:45.076466 systemd[1]: Started cri-containerd-3fc1f18106c1978383672892821310da6c41924c8ed497b0c5482fd69f5ae887.scope - libcontainer container 3fc1f18106c1978383672892821310da6c41924c8ed497b0c5482fd69f5ae887. 
Jan 29 11:54:45.103900 containerd[1459]: time="2025-01-29T11:54:45.103617798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"50eac3a6ff83fbff82123ff617459cd2e41b43cbc7ad0c4b82593f0534da3c82\"" Jan 29 11:54:45.104471 kubelet[2094]: E0129 11:54:45.104442 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:45.106229 containerd[1459]: time="2025-01-29T11:54:45.106172328Z" level=info msg="CreateContainer within sandbox \"50eac3a6ff83fbff82123ff617459cd2e41b43cbc7ad0c4b82593f0534da3c82\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:54:45.129008 containerd[1459]: time="2025-01-29T11:54:45.128946770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd5cc47e1c5ad8beefc6b6c37e995ced876c93ded119a615305536b0e5c0f2aa\"" Jan 29 11:54:45.130190 kubelet[2094]: E0129 11:54:45.130152 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:45.132381 containerd[1459]: time="2025-01-29T11:54:45.132348108Z" level=info msg="CreateContainer within sandbox \"dd5cc47e1c5ad8beefc6b6c37e995ced876c93ded119a615305536b0e5c0f2aa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:54:45.134299 containerd[1459]: time="2025-01-29T11:54:45.134226059Z" level=info msg="CreateContainer within sandbox \"50eac3a6ff83fbff82123ff617459cd2e41b43cbc7ad0c4b82593f0534da3c82\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"3cf1dcc2d555c157240c3305695123c3a4796bd961f664dc4e6ec58d447d6bb6\"" Jan 29 11:54:45.134378 containerd[1459]: time="2025-01-29T11:54:45.134347978Z" level=info msg="StartContainer for \"3fc1f18106c1978383672892821310da6c41924c8ed497b0c5482fd69f5ae887\" returns successfully" Jan 29 11:54:45.136332 containerd[1459]: time="2025-01-29T11:54:45.135363632Z" level=info msg="StartContainer for \"3cf1dcc2d555c157240c3305695123c3a4796bd961f664dc4e6ec58d447d6bb6\"" Jan 29 11:54:45.148547 containerd[1459]: time="2025-01-29T11:54:45.148479504Z" level=info msg="CreateContainer within sandbox \"dd5cc47e1c5ad8beefc6b6c37e995ced876c93ded119a615305536b0e5c0f2aa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"df25fa840fe25a37debb891f9e85155c6adf81c0d39914239b9feddde1202894\"" Jan 29 11:54:45.150519 containerd[1459]: time="2025-01-29T11:54:45.149310753Z" level=info msg="StartContainer for \"df25fa840fe25a37debb891f9e85155c6adf81c0d39914239b9feddde1202894\"" Jan 29 11:54:45.168675 systemd[1]: Started cri-containerd-3cf1dcc2d555c157240c3305695123c3a4796bd961f664dc4e6ec58d447d6bb6.scope - libcontainer container 3cf1dcc2d555c157240c3305695123c3a4796bd961f664dc4e6ec58d447d6bb6. Jan 29 11:54:45.183498 systemd[1]: Started cri-containerd-df25fa840fe25a37debb891f9e85155c6adf81c0d39914239b9feddde1202894.scope - libcontainer container df25fa840fe25a37debb891f9e85155c6adf81c0d39914239b9feddde1202894. 
Jan 29 11:54:45.232550 containerd[1459]: time="2025-01-29T11:54:45.232482109Z" level=info msg="StartContainer for \"3cf1dcc2d555c157240c3305695123c3a4796bd961f664dc4e6ec58d447d6bb6\" returns successfully" Jan 29 11:54:45.232739 containerd[1459]: time="2025-01-29T11:54:45.232577949Z" level=info msg="StartContainer for \"df25fa840fe25a37debb891f9e85155c6adf81c0d39914239b9feddde1202894\" returns successfully" Jan 29 11:54:46.094883 kubelet[2094]: E0129 11:54:46.094833 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:46.096609 kubelet[2094]: E0129 11:54:46.096576 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:46.097847 kubelet[2094]: E0129 11:54:46.097812 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:46.240498 kubelet[2094]: E0129 11:54:46.240441 2094 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:54:46.609126 kubelet[2094]: I0129 11:54:46.609078 2094 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:54:46.736586 kubelet[2094]: I0129 11:54:46.736500 2094 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:54:46.736586 kubelet[2094]: E0129 11:54:46.736567 2094 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 11:54:46.803916 kubelet[2094]: E0129 11:54:46.803841 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 
11:54:46.904173 kubelet[2094]: E0129 11:54:46.903964 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:47.004982 kubelet[2094]: E0129 11:54:47.004904 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:47.100116 kubelet[2094]: E0129 11:54:47.100066 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:47.105277 kubelet[2094]: E0129 11:54:47.105200 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:47.206110 kubelet[2094]: E0129 11:54:47.205909 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:54:48.041470 kubelet[2094]: I0129 11:54:48.041423 2094 apiserver.go:52] "Watching apiserver" Jan 29 11:54:48.052547 kubelet[2094]: I0129 11:54:48.052488 2094 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:54:48.111503 kubelet[2094]: E0129 11:54:48.111458 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:48.192053 systemd[1]: Reloading requested from client PID 2372 ('systemctl') (unit session-5.scope)... Jan 29 11:54:48.192071 systemd[1]: Reloading... Jan 29 11:54:48.277292 zram_generator::config[2414]: No configuration found. Jan 29 11:54:48.396493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:54:48.500284 systemd[1]: Reloading finished in 307 ms. 
Jan 29 11:54:48.545059 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:54:48.571518 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:54:48.571906 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:48.582557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:54:48.741189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:54:48.752590 (kubelet)[2456]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:54:48.790863 kubelet[2456]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:54:48.790863 kubelet[2456]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:54:48.790863 kubelet[2456]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:54:48.791673 kubelet[2456]: I0129 11:54:48.790912 2456 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:54:48.799870 kubelet[2456]: I0129 11:54:48.799812 2456 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:54:48.799870 kubelet[2456]: I0129 11:54:48.799855 2456 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:54:48.800235 kubelet[2456]: I0129 11:54:48.800201 2456 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:54:48.801780 kubelet[2456]: I0129 11:54:48.801750 2456 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:54:48.804047 kubelet[2456]: I0129 11:54:48.803998 2456 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:54:48.807192 kubelet[2456]: E0129 11:54:48.807161 2456 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:54:48.807298 kubelet[2456]: I0129 11:54:48.807203 2456 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:54:48.813666 kubelet[2456]: I0129 11:54:48.813626 2456 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:54:48.813868 kubelet[2456]: I0129 11:54:48.813829 2456 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:54:48.814058 kubelet[2456]: I0129 11:54:48.813985 2456 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:54:48.814217 kubelet[2456]: I0129 11:54:48.814038 2456 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jan 29 11:54:48.814343 kubelet[2456]: I0129 11:54:48.814224 2456 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:54:48.814343 kubelet[2456]: I0129 11:54:48.814233 2456 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:54:48.814343 kubelet[2456]: I0129 11:54:48.814307 2456 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:54:48.814481 kubelet[2456]: I0129 11:54:48.814447 2456 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:54:48.814481 kubelet[2456]: I0129 11:54:48.814472 2456 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:54:48.814543 kubelet[2456]: I0129 11:54:48.814507 2456 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:54:48.814543 kubelet[2456]: I0129 11:54:48.814523 2456 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:54:48.819780 kubelet[2456]: I0129 11:54:48.817703 2456 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:54:48.819780 kubelet[2456]: I0129 11:54:48.818176 2456 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:54:48.822729 kubelet[2456]: I0129 11:54:48.822693 2456 server.go:1269] "Started kubelet" Jan 29 11:54:48.823099 kubelet[2456]: I0129 11:54:48.823051 2456 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:54:48.823187 kubelet[2456]: I0129 11:54:48.823130 2456 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:54:48.823566 kubelet[2456]: I0129 11:54:48.823539 2456 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:54:48.824372 kubelet[2456]: I0129 11:54:48.824336 2456 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:54:48.827398 
kubelet[2456]: I0129 11:54:48.826670 2456 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:54:48.827398 kubelet[2456]: I0129 11:54:48.827036 2456 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:54:48.828059 kubelet[2456]: I0129 11:54:48.827990 2456 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:54:48.828309 kubelet[2456]: I0129 11:54:48.828143 2456 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:54:48.828568 kubelet[2456]: I0129 11:54:48.828348 2456 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:54:48.832175 kubelet[2456]: I0129 11:54:48.832131 2456 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:54:48.833149 kubelet[2456]: I0129 11:54:48.832927 2456 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:54:48.835408 kubelet[2456]: E0129 11:54:48.835277 2456 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:54:48.835408 kubelet[2456]: I0129 11:54:48.835393 2456 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:54:48.848807 kubelet[2456]: I0129 11:54:48.848467 2456 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:54:48.850929 kubelet[2456]: I0129 11:54:48.850896 2456 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:54:48.851001 kubelet[2456]: I0129 11:54:48.850961 2456 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:54:48.851001 kubelet[2456]: I0129 11:54:48.850984 2456 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:54:48.851087 kubelet[2456]: E0129 11:54:48.851060 2456 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:54:48.877704 kubelet[2456]: I0129 11:54:48.877679 2456 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:54:48.877848 kubelet[2456]: I0129 11:54:48.877835 2456 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:54:48.877919 kubelet[2456]: I0129 11:54:48.877909 2456 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:54:48.878145 kubelet[2456]: I0129 11:54:48.878127 2456 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:54:48.878221 kubelet[2456]: I0129 11:54:48.878197 2456 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:54:48.878371 kubelet[2456]: I0129 11:54:48.878303 2456 policy_none.go:49] "None policy: Start" Jan 29 11:54:48.878969 kubelet[2456]: I0129 11:54:48.878940 2456 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:54:48.879015 kubelet[2456]: I0129 11:54:48.878974 2456 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:54:48.879187 kubelet[2456]: I0129 11:54:48.879163 2456 state_mem.go:75] "Updated machine memory state" Jan 29 11:54:48.884018 kubelet[2456]: I0129 11:54:48.883946 2456 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:54:48.884248 kubelet[2456]: I0129 11:54:48.884216 2456 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:54:48.884299 kubelet[2456]: I0129 11:54:48.884256 2456 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:54:48.884733 kubelet[2456]: I0129 11:54:48.884434 2456 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:54:48.959914 kubelet[2456]: E0129 11:54:48.959869 2456 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:48.994636 kubelet[2456]: I0129 11:54:48.993687 2456 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:54:49.000839 kubelet[2456]: I0129 11:54:49.000799 2456 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 29 11:54:49.000998 kubelet[2456]: I0129 11:54:49.000931 2456 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:54:49.029557 kubelet[2456]: I0129 11:54:49.029498 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2941555fc0d3065a629d98da5724b4ca-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2941555fc0d3065a629d98da5724b4ca\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:49.029557 kubelet[2456]: I0129 11:54:49.029551 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:49.029557 kubelet[2456]: I0129 11:54:49.029572 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:49.029817 kubelet[2456]: I0129 11:54:49.029636 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:49.029817 kubelet[2456]: I0129 11:54:49.029711 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:54:49.029817 kubelet[2456]: I0129 11:54:49.029737 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2941555fc0d3065a629d98da5724b4ca-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2941555fc0d3065a629d98da5724b4ca\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:49.029817 kubelet[2456]: I0129 11:54:49.029759 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2941555fc0d3065a629d98da5724b4ca-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2941555fc0d3065a629d98da5724b4ca\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:54:49.029817 kubelet[2456]: I0129 11:54:49.029780 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 
29 11:54:49.029948 kubelet[2456]: I0129 11:54:49.029801 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:54:49.259142 kubelet[2456]: E0129 11:54:49.258883 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:49.259949 kubelet[2456]: E0129 11:54:49.259892 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:49.260946 kubelet[2456]: E0129 11:54:49.260869 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:49.815013 kubelet[2456]: I0129 11:54:49.814940 2456 apiserver.go:52] "Watching apiserver" Jan 29 11:54:49.829079 kubelet[2456]: I0129 11:54:49.828986 2456 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:54:49.864955 kubelet[2456]: E0129 11:54:49.862439 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:49.864955 kubelet[2456]: E0129 11:54:49.862643 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:49.864955 kubelet[2456]: E0129 11:54:49.862882 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:49.934725 kubelet[2456]: I0129 11:54:49.934618 2456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.9345981239999999 podStartE2EDuration="1.934598124s" podCreationTimestamp="2025-01-29 11:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:54:49.933836145 +0000 UTC m=+1.177239540" watchObservedRunningTime="2025-01-29 11:54:49.934598124 +0000 UTC m=+1.178001509" Jan 29 11:54:49.935254 kubelet[2456]: I0129 11:54:49.935045 2456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.935033288 podStartE2EDuration="1.935033288s" podCreationTimestamp="2025-01-29 11:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:54:49.922886378 +0000 UTC m=+1.166289773" watchObservedRunningTime="2025-01-29 11:54:49.935033288 +0000 UTC m=+1.178436703" Jan 29 11:54:49.954796 kubelet[2456]: I0129 11:54:49.954677 2456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.95462511 podStartE2EDuration="1.95462511s" podCreationTimestamp="2025-01-29 11:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:54:49.942778386 +0000 UTC m=+1.186181781" watchObservedRunningTime="2025-01-29 11:54:49.95462511 +0000 UTC m=+1.198028505" Jan 29 11:54:50.286997 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 29 11:54:50.289324 sshd[1604]: pam_unix(sshd:session): session closed for user core Jan 29 
11:54:50.294809 systemd[1]: sshd@4-10.0.0.99:22-10.0.0.1:54534.service: Deactivated successfully. Jan 29 11:54:50.297667 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:54:50.297937 systemd[1]: session-5.scope: Consumed 4.363s CPU time, 159.9M memory peak, 0B memory swap peak. Jan 29 11:54:50.298768 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:54:50.300155 systemd-logind[1449]: Removed session 5. Jan 29 11:54:50.863810 kubelet[2456]: E0129 11:54:50.863557 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:52.417450 kernel: hrtimer: interrupt took 45374230 ns Jan 29 11:54:53.159457 kubelet[2456]: E0129 11:54:53.159378 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:53.610272 kubelet[2456]: E0129 11:54:53.610072 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:53.859897 kubelet[2456]: I0129 11:54:53.859831 2456 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:54:53.860742 containerd[1459]: time="2025-01-29T11:54:53.860597704Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:54:53.861340 kubelet[2456]: I0129 11:54:53.860940 2456 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:54:54.586257 systemd[1]: Created slice kubepods-besteffort-pod55ff4372_3d93_4890_867c_124e65a81922.slice - libcontainer container kubepods-besteffort-pod55ff4372_3d93_4890_867c_124e65a81922.slice. 
Jan 29 11:54:54.601586 systemd[1]: Created slice kubepods-burstable-pod18cea6b5_49b7_4254_9390_70e292e24431.slice - libcontainer container kubepods-burstable-pod18cea6b5_49b7_4254_9390_70e292e24431.slice.
Jan 29 11:54:54.723229 kubelet[2456]: I0129 11:54:54.723156 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55ff4372-3d93-4890-867c-124e65a81922-lib-modules\") pod \"kube-proxy-8bbbt\" (UID: \"55ff4372-3d93-4890-867c-124e65a81922\") " pod="kube-system/kube-proxy-8bbbt"
Jan 29 11:54:54.723229 kubelet[2456]: I0129 11:54:54.723200 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf9sg\" (UniqueName: \"kubernetes.io/projected/55ff4372-3d93-4890-867c-124e65a81922-kube-api-access-wf9sg\") pod \"kube-proxy-8bbbt\" (UID: \"55ff4372-3d93-4890-867c-124e65a81922\") " pod="kube-system/kube-proxy-8bbbt"
Jan 29 11:54:54.723229 kubelet[2456]: I0129 11:54:54.723221 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/18cea6b5-49b7-4254-9390-70e292e24431-cni\") pod \"kube-flannel-ds-svnjh\" (UID: \"18cea6b5-49b7-4254-9390-70e292e24431\") " pod="kube-flannel/kube-flannel-ds-svnjh"
Jan 29 11:54:54.723229 kubelet[2456]: I0129 11:54:54.723258 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/55ff4372-3d93-4890-867c-124e65a81922-kube-proxy\") pod \"kube-proxy-8bbbt\" (UID: \"55ff4372-3d93-4890-867c-124e65a81922\") " pod="kube-system/kube-proxy-8bbbt"
Jan 29 11:54:54.723229 kubelet[2456]: I0129 11:54:54.723274 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/18cea6b5-49b7-4254-9390-70e292e24431-run\") pod \"kube-flannel-ds-svnjh\" (UID: \"18cea6b5-49b7-4254-9390-70e292e24431\") " pod="kube-flannel/kube-flannel-ds-svnjh"
Jan 29 11:54:54.724030 kubelet[2456]: I0129 11:54:54.723288 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/18cea6b5-49b7-4254-9390-70e292e24431-flannel-cfg\") pod \"kube-flannel-ds-svnjh\" (UID: \"18cea6b5-49b7-4254-9390-70e292e24431\") " pod="kube-flannel/kube-flannel-ds-svnjh"
Jan 29 11:54:54.724030 kubelet[2456]: I0129 11:54:54.723301 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbvzg\" (UniqueName: \"kubernetes.io/projected/18cea6b5-49b7-4254-9390-70e292e24431-kube-api-access-cbvzg\") pod \"kube-flannel-ds-svnjh\" (UID: \"18cea6b5-49b7-4254-9390-70e292e24431\") " pod="kube-flannel/kube-flannel-ds-svnjh"
Jan 29 11:54:54.724030 kubelet[2456]: I0129 11:54:54.723336 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/18cea6b5-49b7-4254-9390-70e292e24431-cni-plugin\") pod \"kube-flannel-ds-svnjh\" (UID: \"18cea6b5-49b7-4254-9390-70e292e24431\") " pod="kube-flannel/kube-flannel-ds-svnjh"
Jan 29 11:54:54.724030 kubelet[2456]: I0129 11:54:54.723352 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55ff4372-3d93-4890-867c-124e65a81922-xtables-lock\") pod \"kube-proxy-8bbbt\" (UID: \"55ff4372-3d93-4890-867c-124e65a81922\") " pod="kube-system/kube-proxy-8bbbt"
Jan 29 11:54:54.724030 kubelet[2456]: I0129 11:54:54.723365 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18cea6b5-49b7-4254-9390-70e292e24431-xtables-lock\") pod \"kube-flannel-ds-svnjh\" (UID: \"18cea6b5-49b7-4254-9390-70e292e24431\") " pod="kube-flannel/kube-flannel-ds-svnjh"
Jan 29 11:54:54.898191 kubelet[2456]: E0129 11:54:54.898001 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:54:54.899115 containerd[1459]: time="2025-01-29T11:54:54.899033907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8bbbt,Uid:55ff4372-3d93-4890-867c-124e65a81922,Namespace:kube-system,Attempt:0,}"
Jan 29 11:54:54.905407 kubelet[2456]: E0129 11:54:54.905354 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:54:54.906266 containerd[1459]: time="2025-01-29T11:54:54.906189244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-svnjh,Uid:18cea6b5-49b7-4254-9390-70e292e24431,Namespace:kube-flannel,Attempt:0,}"
Jan 29 11:54:55.059789 containerd[1459]: time="2025-01-29T11:54:55.059611267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:54:55.059789 containerd[1459]: time="2025-01-29T11:54:55.059726356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:54:55.059789 containerd[1459]: time="2025-01-29T11:54:55.059742346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:54:55.060065 containerd[1459]: time="2025-01-29T11:54:55.059951043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:54:55.068272 containerd[1459]: time="2025-01-29T11:54:55.067138608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:54:55.068272 containerd[1459]: time="2025-01-29T11:54:55.067211236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:54:55.068272 containerd[1459]: time="2025-01-29T11:54:55.067223630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:54:55.068272 containerd[1459]: time="2025-01-29T11:54:55.067359278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:54:55.093543 systemd[1]: Started cri-containerd-20bc9c804629bf81f1690aaef4faaf4feb3c3873f85e80b0fc04a436757c9488.scope - libcontainer container 20bc9c804629bf81f1690aaef4faaf4feb3c3873f85e80b0fc04a436757c9488.
Jan 29 11:54:55.130998 systemd[1]: Started cri-containerd-cd104ec7332de0210acc7dd02a5703e95c31aa7c2822c3c8e7104fca0a86ad5c.scope - libcontainer container cd104ec7332de0210acc7dd02a5703e95c31aa7c2822c3c8e7104fca0a86ad5c.
Jan 29 11:54:55.157676 containerd[1459]: time="2025-01-29T11:54:55.157438277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-svnjh,Uid:18cea6b5-49b7-4254-9390-70e292e24431,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"20bc9c804629bf81f1690aaef4faaf4feb3c3873f85e80b0fc04a436757c9488\""
Jan 29 11:54:55.159008 kubelet[2456]: E0129 11:54:55.158976 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:54:55.161028 containerd[1459]: time="2025-01-29T11:54:55.160972522Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 29 11:54:55.194567 containerd[1459]: time="2025-01-29T11:54:55.194514040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8bbbt,Uid:55ff4372-3d93-4890-867c-124e65a81922,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd104ec7332de0210acc7dd02a5703e95c31aa7c2822c3c8e7104fca0a86ad5c\""
Jan 29 11:54:55.195694 kubelet[2456]: E0129 11:54:55.195670 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:54:55.197827 containerd[1459]: time="2025-01-29T11:54:55.197789842Z" level=info msg="CreateContainer within sandbox \"cd104ec7332de0210acc7dd02a5703e95c31aa7c2822c3c8e7104fca0a86ad5c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 11:54:55.225038 containerd[1459]: time="2025-01-29T11:54:55.224948736Z" level=info msg="CreateContainer within sandbox \"cd104ec7332de0210acc7dd02a5703e95c31aa7c2822c3c8e7104fca0a86ad5c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a3b7458d3fdf94ab254eb1cbaa743c3bcc492da0fd1de90aa98d78aa17258f77\""
Jan 29 11:54:55.225890 containerd[1459]: time="2025-01-29T11:54:55.225859770Z" level=info msg="StartContainer for \"a3b7458d3fdf94ab254eb1cbaa743c3bcc492da0fd1de90aa98d78aa17258f77\""
Jan 29 11:54:55.259429 systemd[1]: Started cri-containerd-a3b7458d3fdf94ab254eb1cbaa743c3bcc492da0fd1de90aa98d78aa17258f77.scope - libcontainer container a3b7458d3fdf94ab254eb1cbaa743c3bcc492da0fd1de90aa98d78aa17258f77.
Jan 29 11:54:55.296417 containerd[1459]: time="2025-01-29T11:54:55.296356080Z" level=info msg="StartContainer for \"a3b7458d3fdf94ab254eb1cbaa743c3bcc492da0fd1de90aa98d78aa17258f77\" returns successfully"
Jan 29 11:54:55.874499 kubelet[2456]: E0129 11:54:55.874462 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:54:55.883184 kubelet[2456]: I0129 11:54:55.883099 2456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8bbbt" podStartSLOduration=1.883066036 podStartE2EDuration="1.883066036s" podCreationTimestamp="2025-01-29 11:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:54:55.882429495 +0000 UTC m=+7.125832890" watchObservedRunningTime="2025-01-29 11:54:55.883066036 +0000 UTC m=+7.126469431"
Jan 29 11:54:56.867500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971817904.mount: Deactivated successfully.
Jan 29 11:54:56.907326 containerd[1459]: time="2025-01-29T11:54:56.907274403Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:54:56.908092 containerd[1459]: time="2025-01-29T11:54:56.908045809Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937"
Jan 29 11:54:56.909306 containerd[1459]: time="2025-01-29T11:54:56.909275206Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:54:56.911593 containerd[1459]: time="2025-01-29T11:54:56.911556243Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:54:56.912285 containerd[1459]: time="2025-01-29T11:54:56.912258057Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.751218286s"
Jan 29 11:54:56.912348 containerd[1459]: time="2025-01-29T11:54:56.912289336Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Jan 29 11:54:56.914782 containerd[1459]: time="2025-01-29T11:54:56.914512412Z" level=info msg="CreateContainer within sandbox \"20bc9c804629bf81f1690aaef4faaf4feb3c3873f85e80b0fc04a436757c9488\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 29 11:54:56.928176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3361834297.mount: Deactivated successfully.
Jan 29 11:54:56.928649 containerd[1459]: time="2025-01-29T11:54:56.928614559Z" level=info msg="CreateContainer within sandbox \"20bc9c804629bf81f1690aaef4faaf4feb3c3873f85e80b0fc04a436757c9488\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"52dc20fa4b44f3faa5a8e1b95bfeeac02ab17154a8a90638d49262edd4421890\""
Jan 29 11:54:56.929182 containerd[1459]: time="2025-01-29T11:54:56.929144266Z" level=info msg="StartContainer for \"52dc20fa4b44f3faa5a8e1b95bfeeac02ab17154a8a90638d49262edd4421890\""
Jan 29 11:54:56.965415 systemd[1]: Started cri-containerd-52dc20fa4b44f3faa5a8e1b95bfeeac02ab17154a8a90638d49262edd4421890.scope - libcontainer container 52dc20fa4b44f3faa5a8e1b95bfeeac02ab17154a8a90638d49262edd4421890.
Jan 29 11:54:56.992708 systemd[1]: cri-containerd-52dc20fa4b44f3faa5a8e1b95bfeeac02ab17154a8a90638d49262edd4421890.scope: Deactivated successfully.
Jan 29 11:54:56.993112 containerd[1459]: time="2025-01-29T11:54:56.993069814Z" level=info msg="StartContainer for \"52dc20fa4b44f3faa5a8e1b95bfeeac02ab17154a8a90638d49262edd4421890\" returns successfully"
Jan 29 11:54:57.269320 containerd[1459]: time="2025-01-29T11:54:57.269092958Z" level=info msg="shim disconnected" id=52dc20fa4b44f3faa5a8e1b95bfeeac02ab17154a8a90638d49262edd4421890 namespace=k8s.io
Jan 29 11:54:57.269320 containerd[1459]: time="2025-01-29T11:54:57.269186435Z" level=warning msg="cleaning up after shim disconnected" id=52dc20fa4b44f3faa5a8e1b95bfeeac02ab17154a8a90638d49262edd4421890 namespace=k8s.io
Jan 29 11:54:57.269320 containerd[1459]: time="2025-01-29T11:54:57.269202937Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:54:57.392035 kubelet[2456]: E0129 11:54:57.391968 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:54:57.868057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52dc20fa4b44f3faa5a8e1b95bfeeac02ab17154a8a90638d49262edd4421890-rootfs.mount: Deactivated successfully.
Jan 29 11:54:57.880061 kubelet[2456]: E0129 11:54:57.879983 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:54:57.880227 kubelet[2456]: E0129 11:54:57.880172 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:54:57.880976 containerd[1459]: time="2025-01-29T11:54:57.880911438Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 29 11:54:59.667289 update_engine[1450]: I20250129 11:54:59.666991 1450 update_attempter.cc:509] Updating boot flags...
Jan 29 11:54:59.723304 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2846)
Jan 29 11:54:59.765290 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2845)
Jan 29 11:55:00.919125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3577951291.mount: Deactivated successfully.
Jan 29 11:55:02.830264 containerd[1459]: time="2025-01-29T11:55:02.830186866Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:55:02.831855 containerd[1459]: time="2025-01-29T11:55:02.831788336Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358"
Jan 29 11:55:02.833345 containerd[1459]: time="2025-01-29T11:55:02.833271994Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:55:02.836671 containerd[1459]: time="2025-01-29T11:55:02.836623258Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:55:02.837640 containerd[1459]: time="2025-01-29T11:55:02.837594676Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.956644143s"
Jan 29 11:55:02.837640 containerd[1459]: time="2025-01-29T11:55:02.837636134Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Jan 29 11:55:02.839834 containerd[1459]: time="2025-01-29T11:55:02.839796483Z" level=info msg="CreateContainer within sandbox \"20bc9c804629bf81f1690aaef4faaf4feb3c3873f85e80b0fc04a436757c9488\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 11:55:02.854709 containerd[1459]: time="2025-01-29T11:55:02.854664302Z" level=info msg="CreateContainer within sandbox \"20bc9c804629bf81f1690aaef4faaf4feb3c3873f85e80b0fc04a436757c9488\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f80f0352a583453cd6981a71f30c9382d9c25c8ddc5f8e558875609e4c8282f6\""
Jan 29 11:55:02.855087 containerd[1459]: time="2025-01-29T11:55:02.855050573Z" level=info msg="StartContainer for \"f80f0352a583453cd6981a71f30c9382d9c25c8ddc5f8e558875609e4c8282f6\""
Jan 29 11:55:02.891452 systemd[1]: Started cri-containerd-f80f0352a583453cd6981a71f30c9382d9c25c8ddc5f8e558875609e4c8282f6.scope - libcontainer container f80f0352a583453cd6981a71f30c9382d9c25c8ddc5f8e558875609e4c8282f6.
Jan 29 11:55:02.923384 systemd[1]: cri-containerd-f80f0352a583453cd6981a71f30c9382d9c25c8ddc5f8e558875609e4c8282f6.scope: Deactivated successfully.
Jan 29 11:55:02.976689 containerd[1459]: time="2025-01-29T11:55:02.976626678Z" level=info msg="StartContainer for \"f80f0352a583453cd6981a71f30c9382d9c25c8ddc5f8e558875609e4c8282f6\" returns successfully"
Jan 29 11:55:02.998167 kubelet[2456]: I0129 11:55:02.997304 2456 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 29 11:55:03.033314 systemd[1]: Created slice kubepods-burstable-podb8797ff2_fe98_4dcd_a843_658c634cc2a1.slice - libcontainer container kubepods-burstable-podb8797ff2_fe98_4dcd_a843_658c634cc2a1.slice.
Jan 29 11:55:03.039015 systemd[1]: Created slice kubepods-burstable-pod0e6a4241_bc4c_40d6_89b1_ccf943861f42.slice - libcontainer container kubepods-burstable-pod0e6a4241_bc4c_40d6_89b1_ccf943861f42.slice.
Jan 29 11:55:03.163826 kubelet[2456]: E0129 11:55:03.163754 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:03.167221 containerd[1459]: time="2025-01-29T11:55:03.167107991Z" level=info msg="shim disconnected" id=f80f0352a583453cd6981a71f30c9382d9c25c8ddc5f8e558875609e4c8282f6 namespace=k8s.io
Jan 29 11:55:03.167221 containerd[1459]: time="2025-01-29T11:55:03.167184726Z" level=warning msg="cleaning up after shim disconnected" id=f80f0352a583453cd6981a71f30c9382d9c25c8ddc5f8e558875609e4c8282f6 namespace=k8s.io
Jan 29 11:55:03.167221 containerd[1459]: time="2025-01-29T11:55:03.167195717Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:55:03.177725 kubelet[2456]: I0129 11:55:03.177667 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8797ff2-fe98-4dcd-a843-658c634cc2a1-config-volume\") pod \"coredns-6f6b679f8f-rv5f2\" (UID: \"b8797ff2-fe98-4dcd-a843-658c634cc2a1\") " pod="kube-system/coredns-6f6b679f8f-rv5f2"
Jan 29 11:55:03.177868 kubelet[2456]: I0129 11:55:03.177754 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqn66\" (UniqueName: \"kubernetes.io/projected/b8797ff2-fe98-4dcd-a843-658c634cc2a1-kube-api-access-mqn66\") pod \"coredns-6f6b679f8f-rv5f2\" (UID: \"b8797ff2-fe98-4dcd-a843-658c634cc2a1\") " pod="kube-system/coredns-6f6b679f8f-rv5f2"
Jan 29 11:55:03.177868 kubelet[2456]: I0129 11:55:03.177791 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e6a4241-bc4c-40d6-89b1-ccf943861f42-config-volume\") pod \"coredns-6f6b679f8f-sx4kb\" (UID: \"0e6a4241-bc4c-40d6-89b1-ccf943861f42\") " pod="kube-system/coredns-6f6b679f8f-sx4kb"
Jan 29 11:55:03.177868 kubelet[2456]: I0129 11:55:03.177824 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlnsk\" (UniqueName: \"kubernetes.io/projected/0e6a4241-bc4c-40d6-89b1-ccf943861f42-kube-api-access-dlnsk\") pod \"coredns-6f6b679f8f-sx4kb\" (UID: \"0e6a4241-bc4c-40d6-89b1-ccf943861f42\") " pod="kube-system/coredns-6f6b679f8f-sx4kb"
Jan 29 11:55:03.381413 kubelet[2456]: E0129 11:55:03.381356 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:03.381585 kubelet[2456]: E0129 11:55:03.381499 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:03.382168 containerd[1459]: time="2025-01-29T11:55:03.382108245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rv5f2,Uid:b8797ff2-fe98-4dcd-a843-658c634cc2a1,Namespace:kube-system,Attempt:0,}"
Jan 29 11:55:03.382168 containerd[1459]: time="2025-01-29T11:55:03.382169571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sx4kb,Uid:0e6a4241-bc4c-40d6-89b1-ccf943861f42,Namespace:kube-system,Attempt:0,}"
Jan 29 11:55:03.502726 containerd[1459]: time="2025-01-29T11:55:03.502534812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rv5f2,Uid:b8797ff2-fe98-4dcd-a843-658c634cc2a1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2c5a134d6638413cff13b1fa5361ca039ef2dade84abb13e876c323d02911b5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 11:55:03.502984 kubelet[2456]: E0129 11:55:03.502855 2456 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2c5a134d6638413cff13b1fa5361ca039ef2dade84abb13e876c323d02911b5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 11:55:03.502984 kubelet[2456]: E0129 11:55:03.502950 2456 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2c5a134d6638413cff13b1fa5361ca039ef2dade84abb13e876c323d02911b5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-rv5f2"
Jan 29 11:55:03.502984 kubelet[2456]: E0129 11:55:03.502975 2456 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2c5a134d6638413cff13b1fa5361ca039ef2dade84abb13e876c323d02911b5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-rv5f2"
Jan 29 11:55:03.503154 kubelet[2456]: E0129 11:55:03.503017 2456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-rv5f2_kube-system(b8797ff2-fe98-4dcd-a843-658c634cc2a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-rv5f2_kube-system(b8797ff2-fe98-4dcd-a843-658c634cc2a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2c5a134d6638413cff13b1fa5361ca039ef2dade84abb13e876c323d02911b5\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-rv5f2" podUID="b8797ff2-fe98-4dcd-a843-658c634cc2a1"
Jan 29 11:55:03.507500 containerd[1459]: time="2025-01-29T11:55:03.507376880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sx4kb,Uid:0e6a4241-bc4c-40d6-89b1-ccf943861f42,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"990ebca6e6101d91a0a12706bd7d2038edabc389817703155235caa440d8f2dd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 11:55:03.510491 kubelet[2456]: E0129 11:55:03.507676 2456 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"990ebca6e6101d91a0a12706bd7d2038edabc389817703155235caa440d8f2dd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 11:55:03.510491 kubelet[2456]: E0129 11:55:03.507750 2456 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"990ebca6e6101d91a0a12706bd7d2038edabc389817703155235caa440d8f2dd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-sx4kb"
Jan 29 11:55:03.510491 kubelet[2456]: E0129 11:55:03.507775 2456 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"990ebca6e6101d91a0a12706bd7d2038edabc389817703155235caa440d8f2dd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-sx4kb"
Jan 29 11:55:03.510491 kubelet[2456]: E0129 11:55:03.510293 2456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-sx4kb_kube-system(0e6a4241-bc4c-40d6-89b1-ccf943861f42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-sx4kb_kube-system(0e6a4241-bc4c-40d6-89b1-ccf943861f42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"990ebca6e6101d91a0a12706bd7d2038edabc389817703155235caa440d8f2dd\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-sx4kb" podUID="0e6a4241-bc4c-40d6-89b1-ccf943861f42"
Jan 29 11:55:03.623438 kubelet[2456]: E0129 11:55:03.623398 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:03.853386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f80f0352a583453cd6981a71f30c9382d9c25c8ddc5f8e558875609e4c8282f6-rootfs.mount: Deactivated successfully.
Jan 29 11:55:03.894060 kubelet[2456]: E0129 11:55:03.893964 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:03.894060 kubelet[2456]: E0129 11:55:03.894024 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:03.895864 containerd[1459]: time="2025-01-29T11:55:03.895818568Z" level=info msg="CreateContainer within sandbox \"20bc9c804629bf81f1690aaef4faaf4feb3c3873f85e80b0fc04a436757c9488\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 29 11:55:03.952767 containerd[1459]: time="2025-01-29T11:55:03.952693617Z" level=info msg="CreateContainer within sandbox \"20bc9c804629bf81f1690aaef4faaf4feb3c3873f85e80b0fc04a436757c9488\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"6c898c6cf046d732dda368507ed236024a9d232425ccf040d88cde0e2c2770d1\""
Jan 29 11:55:03.953397 containerd[1459]: time="2025-01-29T11:55:03.953364787Z" level=info msg="StartContainer for \"6c898c6cf046d732dda368507ed236024a9d232425ccf040d88cde0e2c2770d1\""
Jan 29 11:55:03.985479 systemd[1]: Started cri-containerd-6c898c6cf046d732dda368507ed236024a9d232425ccf040d88cde0e2c2770d1.scope - libcontainer container 6c898c6cf046d732dda368507ed236024a9d232425ccf040d88cde0e2c2770d1.
Jan 29 11:55:04.014990 containerd[1459]: time="2025-01-29T11:55:04.014927860Z" level=info msg="StartContainer for \"6c898c6cf046d732dda368507ed236024a9d232425ccf040d88cde0e2c2770d1\" returns successfully"
Jan 29 11:55:04.897381 kubelet[2456]: E0129 11:55:04.897341 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:04.906193 kubelet[2456]: I0129 11:55:04.906077 2456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-svnjh" podStartSLOduration=3.227797584 podStartE2EDuration="10.906055881s" podCreationTimestamp="2025-01-29 11:54:54 +0000 UTC" firstStartedPulling="2025-01-29 11:54:55.160285154 +0000 UTC m=+6.403688549" lastFinishedPulling="2025-01-29 11:55:02.838543451 +0000 UTC m=+14.081946846" observedRunningTime="2025-01-29 11:55:04.905629366 +0000 UTC m=+16.149032761" watchObservedRunningTime="2025-01-29 11:55:04.906055881 +0000 UTC m=+16.149459277"
Jan 29 11:55:05.062202 systemd-networkd[1393]: flannel.1: Link UP
Jan 29 11:55:05.062212 systemd-networkd[1393]: flannel.1: Gained carrier
Jan 29 11:55:05.898635 kubelet[2456]: E0129 11:55:05.898592 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:06.877420 systemd-networkd[1393]: flannel.1: Gained IPv6LL
Jan 29 11:55:14.645042 systemd[1]: Started sshd@5-10.0.0.99:22-10.0.0.1:59244.service - OpenSSH per-connection server daemon (10.0.0.1:59244).
Jan 29 11:55:14.741988 sshd[3137]: Accepted publickey for core from 10.0.0.1 port 59244 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk
Jan 29 11:55:14.744386 sshd[3137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:55:14.757921 systemd-logind[1449]: New session 6 of user core.
Jan 29 11:55:14.768439 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 11:55:14.905061 sshd[3137]: pam_unix(sshd:session): session closed for user core
Jan 29 11:55:14.909204 systemd[1]: sshd@5-10.0.0.99:22-10.0.0.1:59244.service: Deactivated successfully.
Jan 29 11:55:14.911855 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 11:55:14.912644 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit.
Jan 29 11:55:14.914284 systemd-logind[1449]: Removed session 6.
Jan 29 11:55:15.851898 kubelet[2456]: E0129 11:55:15.851836 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:15.852966 containerd[1459]: time="2025-01-29T11:55:15.852330074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rv5f2,Uid:b8797ff2-fe98-4dcd-a843-658c634cc2a1,Namespace:kube-system,Attempt:0,}"
Jan 29 11:55:16.370011 systemd-networkd[1393]: cni0: Link UP
Jan 29 11:55:16.370023 systemd-networkd[1393]: cni0: Gained carrier
Jan 29 11:55:16.375688 systemd-networkd[1393]: cni0: Lost carrier
Jan 29 11:55:16.379798 systemd-networkd[1393]: vethd54f673c: Link UP
Jan 29 11:55:16.381114 kernel: cni0: port 1(vethd54f673c) entered blocking state
Jan 29 11:55:16.381205 kernel: cni0: port 1(vethd54f673c) entered disabled state
Jan 29 11:55:16.381261 kernel: vethd54f673c: entered allmulticast mode
Jan 29 11:55:16.383058 kernel: vethd54f673c: entered promiscuous mode
Jan 29 11:55:16.385307 kernel: cni0: port 1(vethd54f673c) entered blocking state
Jan 29 11:55:16.385368 kernel: cni0: port 1(vethd54f673c) entered forwarding state
Jan 29 11:55:16.386359 kernel: cni0: port 1(vethd54f673c) entered disabled state
Jan 29 11:55:16.397341 systemd-networkd[1393]: vethd54f673c: Gained carrier
Jan 29 11:55:16.397914 systemd-networkd[1393]: cni0: Gained carrier
Jan 29 11:55:16.399465 kernel: cni0: port 1(vethd54f673c) entered blocking state
Jan 29 11:55:16.399698 kernel: cni0: port 1(vethd54f673c) entered forwarding state
Jan 29 11:55:16.449394 containerd[1459]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"}
Jan 29 11:55:16.449394 containerd[1459]: delegateAdd: netconf sent to delegate plugin:
Jan 29 11:55:16.469646 containerd[1459]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T11:55:16.469507214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:55:16.469646 containerd[1459]: time="2025-01-29T11:55:16.469589890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:55:16.469894 containerd[1459]: time="2025-01-29T11:55:16.469618864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:55:16.469894 containerd[1459]: time="2025-01-29T11:55:16.469740523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:55:16.503415 systemd[1]: Started cri-containerd-1d85f1931f7e58c7427eb613b3d6b60ea6484a94011b3527c95ef8ca0424106e.scope - libcontainer container 1d85f1931f7e58c7427eb613b3d6b60ea6484a94011b3527c95ef8ca0424106e.
Jan 29 11:55:16.518234 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 11:55:16.545834 containerd[1459]: time="2025-01-29T11:55:16.545756365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rv5f2,Uid:b8797ff2-fe98-4dcd-a843-658c634cc2a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d85f1931f7e58c7427eb613b3d6b60ea6484a94011b3527c95ef8ca0424106e\""
Jan 29 11:55:16.546579 kubelet[2456]: E0129 11:55:16.546553 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:16.548566 containerd[1459]: time="2025-01-29T11:55:16.548508004Z" level=info msg="CreateContainer within sandbox \"1d85f1931f7e58c7427eb613b3d6b60ea6484a94011b3527c95ef8ca0424106e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 11:55:16.570434 containerd[1459]: time="2025-01-29T11:55:16.570371194Z" level=info msg="CreateContainer within sandbox \"1d85f1931f7e58c7427eb613b3d6b60ea6484a94011b3527c95ef8ca0424106e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d74f12c4aa7e8e0d99483c591e678b9c6a23404aec7bd21f3b5829e79b4cb48f\""
Jan 29 11:55:16.571066 containerd[1459]: time="2025-01-29T11:55:16.571038059Z" level=info msg="StartContainer for \"d74f12c4aa7e8e0d99483c591e678b9c6a23404aec7bd21f3b5829e79b4cb48f\""
Jan 29 11:55:16.602457 systemd[1]: Started cri-containerd-d74f12c4aa7e8e0d99483c591e678b9c6a23404aec7bd21f3b5829e79b4cb48f.scope - libcontainer container d74f12c4aa7e8e0d99483c591e678b9c6a23404aec7bd21f3b5829e79b4cb48f.
Jan 29 11:55:16.718762 containerd[1459]: time="2025-01-29T11:55:16.718696661Z" level=info msg="StartContainer for \"d74f12c4aa7e8e0d99483c591e678b9c6a23404aec7bd21f3b5829e79b4cb48f\" returns successfully"
Jan 29 11:55:16.851756 kubelet[2456]: E0129 11:55:16.851688 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:55:16.852582 containerd[1459]: time="2025-01-29T11:55:16.852365994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sx4kb,Uid:0e6a4241-bc4c-40d6-89b1-ccf943861f42,Namespace:kube-system,Attempt:0,}"
Jan 29 11:55:16.878306 systemd-networkd[1393]: vethda3629c8: Link UP
Jan 29 11:55:16.880345 kernel: cni0: port 2(vethda3629c8) entered blocking state
Jan 29 11:55:16.880635 kernel: cni0: port 2(vethda3629c8) entered disabled state
Jan 29 11:55:16.880682 kernel: vethda3629c8: entered allmulticast mode
Jan 29 11:55:16.881661 kernel: vethda3629c8: entered promiscuous mode
Jan 29 11:55:16.882474 kernel: cni0: port 2(vethda3629c8) entered blocking state
Jan 29 11:55:16.882520 kernel: cni0: port 2(vethda3629c8) entered forwarding state
Jan 29 11:55:16.891464 systemd-networkd[1393]: vethda3629c8: Gained carrier
Jan 29 11:55:16.894124 containerd[1459]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000ae8e8), "name":"cbr0",
"type":"bridge"} Jan 29 11:55:16.894124 containerd[1459]: delegateAdd: netconf sent to delegate plugin: Jan 29 11:55:16.916545 containerd[1459]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T11:55:16.916389866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:55:16.916545 containerd[1459]: time="2025-01-29T11:55:16.916488602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:55:16.916545 containerd[1459]: time="2025-01-29T11:55:16.916506506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:55:16.916792 containerd[1459]: time="2025-01-29T11:55:16.916638624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:55:16.921802 kubelet[2456]: E0129 11:55:16.921735 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:16.936853 systemd[1]: Started cri-containerd-44e16b334d3d7640f7d876f5da17593b3e39d2e147a39c87ca92b1d7abef7a39.scope - libcontainer container 44e16b334d3d7640f7d876f5da17593b3e39d2e147a39c87ca92b1d7abef7a39. 
Jan 29 11:55:16.946021 kubelet[2456]: I0129 11:55:16.945948 2456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rv5f2" podStartSLOduration=22.945926848 podStartE2EDuration="22.945926848s" podCreationTimestamp="2025-01-29 11:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:55:16.935739995 +0000 UTC m=+28.179143390" watchObservedRunningTime="2025-01-29 11:55:16.945926848 +0000 UTC m=+28.189330243" Jan 29 11:55:16.961602 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:55:16.995463 containerd[1459]: time="2025-01-29T11:55:16.995276281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sx4kb,Uid:0e6a4241-bc4c-40d6-89b1-ccf943861f42,Namespace:kube-system,Attempt:0,} returns sandbox id \"44e16b334d3d7640f7d876f5da17593b3e39d2e147a39c87ca92b1d7abef7a39\"" Jan 29 11:55:16.996547 kubelet[2456]: E0129 11:55:16.996352 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:16.999534 containerd[1459]: time="2025-01-29T11:55:16.999479632Z" level=info msg="CreateContainer within sandbox \"44e16b334d3d7640f7d876f5da17593b3e39d2e147a39c87ca92b1d7abef7a39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:55:17.019006 containerd[1459]: time="2025-01-29T11:55:17.018884587Z" level=info msg="CreateContainer within sandbox \"44e16b334d3d7640f7d876f5da17593b3e39d2e147a39c87ca92b1d7abef7a39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c82f3ab14f6010b3769dfd9c5cfad61af12191137439a2d14607afb406eb5561\"" Jan 29 11:55:17.019631 containerd[1459]: time="2025-01-29T11:55:17.019588982Z" level=info msg="StartContainer for 
\"c82f3ab14f6010b3769dfd9c5cfad61af12191137439a2d14607afb406eb5561\"" Jan 29 11:55:17.052604 systemd[1]: Started cri-containerd-c82f3ab14f6010b3769dfd9c5cfad61af12191137439a2d14607afb406eb5561.scope - libcontainer container c82f3ab14f6010b3769dfd9c5cfad61af12191137439a2d14607afb406eb5561. Jan 29 11:55:17.089557 containerd[1459]: time="2025-01-29T11:55:17.089472781Z" level=info msg="StartContainer for \"c82f3ab14f6010b3769dfd9c5cfad61af12191137439a2d14607afb406eb5561\" returns successfully" Jan 29 11:55:17.693439 systemd-networkd[1393]: cni0: Gained IPv6LL Jan 29 11:55:17.925948 kubelet[2456]: E0129 11:55:17.925907 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:17.926465 kubelet[2456]: E0129 11:55:17.925973 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:17.961533 kubelet[2456]: I0129 11:55:17.960710 2456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-sx4kb" podStartSLOduration=23.960691812 podStartE2EDuration="23.960691812s" podCreationTimestamp="2025-01-29 11:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:55:17.960449697 +0000 UTC m=+29.203853092" watchObservedRunningTime="2025-01-29 11:55:17.960691812 +0000 UTC m=+29.204095217" Jan 29 11:55:18.141509 systemd-networkd[1393]: vethda3629c8: Gained IPv6LL Jan 29 11:55:18.205453 systemd-networkd[1393]: vethd54f673c: Gained IPv6LL Jan 29 11:55:18.928128 kubelet[2456]: E0129 11:55:18.928075 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 
11:55:19.918980 systemd[1]: Started sshd@6-10.0.0.99:22-10.0.0.1:59260.service - OpenSSH per-connection server daemon (10.0.0.1:59260). Jan 29 11:55:19.960360 sshd[3419]: Accepted publickey for core from 10.0.0.1 port 59260 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:19.963123 sshd[3419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:19.967663 systemd-logind[1449]: New session 7 of user core. Jan 29 11:55:19.976684 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:55:20.105729 sshd[3419]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:20.110600 systemd[1]: sshd@6-10.0.0.99:22-10.0.0.1:59260.service: Deactivated successfully. Jan 29 11:55:20.112617 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:55:20.113384 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:55:20.114534 systemd-logind[1449]: Removed session 7. Jan 29 11:55:23.383112 kubelet[2456]: E0129 11:55:23.383052 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:23.937865 kubelet[2456]: E0129 11:55:23.937826 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:55:25.119728 systemd[1]: Started sshd@7-10.0.0.99:22-10.0.0.1:36982.service - OpenSSH per-connection server daemon (10.0.0.1:36982). Jan 29 11:55:25.172724 sshd[3459]: Accepted publickey for core from 10.0.0.1 port 36982 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:25.174933 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:25.181146 systemd-logind[1449]: New session 8 of user core. 
Jan 29 11:55:25.186954 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:55:25.304752 sshd[3459]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:25.309721 systemd[1]: sshd@7-10.0.0.99:22-10.0.0.1:36982.service: Deactivated successfully. Jan 29 11:55:25.312190 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:55:25.312849 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:55:25.313890 systemd-logind[1449]: Removed session 8. Jan 29 11:55:30.316419 systemd[1]: Started sshd@8-10.0.0.99:22-10.0.0.1:36994.service - OpenSSH per-connection server daemon (10.0.0.1:36994). Jan 29 11:55:30.356220 sshd[3518]: Accepted publickey for core from 10.0.0.1 port 36994 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:30.358216 sshd[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:30.362583 systemd-logind[1449]: New session 9 of user core. Jan 29 11:55:30.371399 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:55:30.488977 sshd[3518]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:30.506789 systemd[1]: sshd@8-10.0.0.99:22-10.0.0.1:36994.service: Deactivated successfully. Jan 29 11:55:30.509012 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:55:30.510759 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:55:30.516531 systemd[1]: Started sshd@9-10.0.0.99:22-10.0.0.1:36998.service - OpenSSH per-connection server daemon (10.0.0.1:36998). Jan 29 11:55:30.517570 systemd-logind[1449]: Removed session 9. Jan 29 11:55:30.551305 sshd[3534]: Accepted publickey for core from 10.0.0.1 port 36998 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:30.553013 sshd[3534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:30.557149 systemd-logind[1449]: New session 10 of user core. 
Jan 29 11:55:30.567381 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:55:30.716652 sshd[3534]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:30.726201 systemd[1]: sshd@9-10.0.0.99:22-10.0.0.1:36998.service: Deactivated successfully. Jan 29 11:55:30.730230 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:55:30.735479 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:55:30.744735 systemd[1]: Started sshd@10-10.0.0.99:22-10.0.0.1:37000.service - OpenSSH per-connection server daemon (10.0.0.1:37000). Jan 29 11:55:30.751382 systemd-logind[1449]: Removed session 10. Jan 29 11:55:30.792079 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 37000 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:30.794032 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:30.798329 systemd-logind[1449]: New session 11 of user core. Jan 29 11:55:30.807476 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:55:30.916793 sshd[3547]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:30.920700 systemd[1]: sshd@10-10.0.0.99:22-10.0.0.1:37000.service: Deactivated successfully. Jan 29 11:55:30.922620 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:55:30.923291 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:55:30.924226 systemd-logind[1449]: Removed session 11. Jan 29 11:55:35.932744 systemd[1]: Started sshd@11-10.0.0.99:22-10.0.0.1:47260.service - OpenSSH per-connection server daemon (10.0.0.1:47260). 
Jan 29 11:55:35.973114 sshd[3583]: Accepted publickey for core from 10.0.0.1 port 47260 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:35.974848 sshd[3583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:35.978926 systemd-logind[1449]: New session 12 of user core. Jan 29 11:55:35.988387 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:55:36.105292 sshd[3583]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:36.108604 systemd[1]: sshd@11-10.0.0.99:22-10.0.0.1:47260.service: Deactivated successfully. Jan 29 11:55:36.110606 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:55:36.112137 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:55:36.113299 systemd-logind[1449]: Removed session 12. Jan 29 11:55:41.117859 systemd[1]: Started sshd@12-10.0.0.99:22-10.0.0.1:35556.service - OpenSSH per-connection server daemon (10.0.0.1:35556). Jan 29 11:55:41.157194 sshd[3619]: Accepted publickey for core from 10.0.0.1 port 35556 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:41.159105 sshd[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:41.163648 systemd-logind[1449]: New session 13 of user core. Jan 29 11:55:41.178543 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:55:41.289959 sshd[3619]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:41.295412 systemd[1]: sshd@12-10.0.0.99:22-10.0.0.1:35556.service: Deactivated successfully. Jan 29 11:55:41.297581 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:55:41.298425 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:55:41.299479 systemd-logind[1449]: Removed session 13. 
Jan 29 11:55:46.302801 systemd[1]: Started sshd@13-10.0.0.99:22-10.0.0.1:35558.service - OpenSSH per-connection server daemon (10.0.0.1:35558). Jan 29 11:55:46.348459 sshd[3654]: Accepted publickey for core from 10.0.0.1 port 35558 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:46.350725 sshd[3654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:46.355235 systemd-logind[1449]: New session 14 of user core. Jan 29 11:55:46.363568 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:55:46.475818 sshd[3654]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:46.481408 systemd[1]: sshd@13-10.0.0.99:22-10.0.0.1:35558.service: Deactivated successfully. Jan 29 11:55:46.483875 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:55:46.484641 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:55:46.485573 systemd-logind[1449]: Removed session 14. Jan 29 11:55:51.487629 systemd[1]: Started sshd@14-10.0.0.99:22-10.0.0.1:46286.service - OpenSSH per-connection server daemon (10.0.0.1:46286). Jan 29 11:55:51.528755 sshd[3692]: Accepted publickey for core from 10.0.0.1 port 46286 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:51.530511 sshd[3692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:51.535397 systemd-logind[1449]: New session 15 of user core. Jan 29 11:55:51.548414 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:55:51.653944 sshd[3692]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:51.664482 systemd[1]: sshd@14-10.0.0.99:22-10.0.0.1:46286.service: Deactivated successfully. Jan 29 11:55:51.666413 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:55:51.667869 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. 
Jan 29 11:55:51.673499 systemd[1]: Started sshd@15-10.0.0.99:22-10.0.0.1:46302.service - OpenSSH per-connection server daemon (10.0.0.1:46302). Jan 29 11:55:51.674466 systemd-logind[1449]: Removed session 15. Jan 29 11:55:51.708328 sshd[3706]: Accepted publickey for core from 10.0.0.1 port 46302 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:51.709901 sshd[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:51.714042 systemd-logind[1449]: New session 16 of user core. Jan 29 11:55:51.725409 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:55:51.912992 sshd[3706]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:51.923953 systemd[1]: sshd@15-10.0.0.99:22-10.0.0.1:46302.service: Deactivated successfully. Jan 29 11:55:51.926105 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:55:51.927844 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:55:51.934830 systemd[1]: Started sshd@16-10.0.0.99:22-10.0.0.1:46310.service - OpenSSH per-connection server daemon (10.0.0.1:46310). Jan 29 11:55:51.935864 systemd-logind[1449]: Removed session 16. Jan 29 11:55:51.974898 sshd[3718]: Accepted publickey for core from 10.0.0.1 port 46310 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:51.976865 sshd[3718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:51.985067 systemd-logind[1449]: New session 17 of user core. Jan 29 11:55:51.992403 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:55:53.396064 sshd[3718]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:53.408819 systemd[1]: sshd@16-10.0.0.99:22-10.0.0.1:46310.service: Deactivated successfully. Jan 29 11:55:53.415646 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:55:53.418533 systemd-logind[1449]: Session 17 logged out. 
Waiting for processes to exit. Jan 29 11:55:53.422633 systemd-logind[1449]: Removed session 17. Jan 29 11:55:53.432132 systemd[1]: Started sshd@17-10.0.0.99:22-10.0.0.1:46316.service - OpenSSH per-connection server daemon (10.0.0.1:46316). Jan 29 11:55:53.468217 sshd[3739]: Accepted publickey for core from 10.0.0.1 port 46316 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:53.470176 sshd[3739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:53.475296 systemd-logind[1449]: New session 18 of user core. Jan 29 11:55:53.484446 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:55:53.700902 sshd[3739]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:53.711025 systemd[1]: sshd@17-10.0.0.99:22-10.0.0.1:46316.service: Deactivated successfully. Jan 29 11:55:53.713479 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:55:53.715169 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:55:53.723743 systemd[1]: Started sshd@18-10.0.0.99:22-10.0.0.1:46320.service - OpenSSH per-connection server daemon (10.0.0.1:46320). Jan 29 11:55:53.724912 systemd-logind[1449]: Removed session 18. Jan 29 11:55:53.761803 sshd[3751]: Accepted publickey for core from 10.0.0.1 port 46320 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:53.763917 sshd[3751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:53.768785 systemd-logind[1449]: New session 19 of user core. Jan 29 11:55:53.780438 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:55:53.891428 sshd[3751]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:53.895466 systemd[1]: sshd@18-10.0.0.99:22-10.0.0.1:46320.service: Deactivated successfully. Jan 29 11:55:53.897544 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 29 11:55:53.898127 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:55:53.899029 systemd-logind[1449]: Removed session 19. Jan 29 11:55:58.909067 systemd[1]: Started sshd@19-10.0.0.99:22-10.0.0.1:46330.service - OpenSSH per-connection server daemon (10.0.0.1:46330). Jan 29 11:55:58.951176 sshd[3788]: Accepted publickey for core from 10.0.0.1 port 46330 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:55:58.953346 sshd[3788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:55:58.958357 systemd-logind[1449]: New session 20 of user core. Jan 29 11:55:58.973530 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:55:59.085620 sshd[3788]: pam_unix(sshd:session): session closed for user core Jan 29 11:55:59.089808 systemd[1]: sshd@19-10.0.0.99:22-10.0.0.1:46330.service: Deactivated successfully. Jan 29 11:55:59.091925 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:55:59.092723 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:55:59.093805 systemd-logind[1449]: Removed session 20. Jan 29 11:56:04.102227 systemd[1]: Started sshd@20-10.0.0.99:22-10.0.0.1:45580.service - OpenSSH per-connection server daemon (10.0.0.1:45580). Jan 29 11:56:04.144817 sshd[3827]: Accepted publickey for core from 10.0.0.1 port 45580 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:56:04.146706 sshd[3827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:56:04.150936 systemd-logind[1449]: New session 21 of user core. Jan 29 11:56:04.164519 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:56:04.290974 sshd[3827]: pam_unix(sshd:session): session closed for user core Jan 29 11:56:04.296035 systemd[1]: sshd@20-10.0.0.99:22-10.0.0.1:45580.service: Deactivated successfully. 
Jan 29 11:56:04.298569 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:56:04.299373 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:56:04.300477 systemd-logind[1449]: Removed session 21. Jan 29 11:56:09.305156 systemd[1]: Started sshd@21-10.0.0.99:22-10.0.0.1:45582.service - OpenSSH per-connection server daemon (10.0.0.1:45582). Jan 29 11:56:09.345735 sshd[3862]: Accepted publickey for core from 10.0.0.1 port 45582 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:56:09.348261 sshd[3862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:56:09.353377 systemd-logind[1449]: New session 22 of user core. Jan 29 11:56:09.360429 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:56:09.473163 sshd[3862]: pam_unix(sshd:session): session closed for user core Jan 29 11:56:09.477690 systemd[1]: sshd@21-10.0.0.99:22-10.0.0.1:45582.service: Deactivated successfully. Jan 29 11:56:09.480559 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:56:09.481489 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:56:09.482788 systemd-logind[1449]: Removed session 22. Jan 29 11:56:10.852608 kubelet[2456]: E0129 11:56:10.852535 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:56:14.484438 systemd[1]: Started sshd@22-10.0.0.99:22-10.0.0.1:51356.service - OpenSSH per-connection server daemon (10.0.0.1:51356). Jan 29 11:56:14.529474 sshd[3897]: Accepted publickey for core from 10.0.0.1 port 51356 ssh2: RSA SHA256:e5TXI4mefZTIlTcMmQXatNEXm0ZI8GsdQYXCeKdjFwk Jan 29 11:56:14.531188 sshd[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:56:14.535530 systemd-logind[1449]: New session 23 of user core. 
Jan 29 11:56:14.547408 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:56:14.659312 sshd[3897]: pam_unix(sshd:session): session closed for user core Jan 29 11:56:14.664035 systemd[1]: sshd@22-10.0.0.99:22-10.0.0.1:51356.service: Deactivated successfully. Jan 29 11:56:14.666219 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:56:14.666837 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:56:14.667757 systemd-logind[1449]: Removed session 23. Jan 29 11:56:15.852482 kubelet[2456]: E0129 11:56:15.852431 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"