Jan 17 12:19:07.923196 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:19:07.923218 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:19:07.923232 kernel: BIOS-provided physical RAM map:
Jan 17 12:19:07.923240 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 12:19:07.923247 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 17 12:19:07.923254 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 17 12:19:07.923261 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 17 12:19:07.923267 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 17 12:19:07.923274 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 17 12:19:07.923280 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 17 12:19:07.923289 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 17 12:19:07.923296 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 17 12:19:07.923306 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 17 12:19:07.923312 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 17 12:19:07.923323 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 17 12:19:07.923330 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 17 12:19:07.923339 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 17 12:19:07.923346 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 17 12:19:07.923353 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 17 12:19:07.923360 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 17 12:19:07.923366 kernel: NX (Execute Disable) protection: active
Jan 17 12:19:07.923373 kernel: APIC: Static calls initialized
Jan 17 12:19:07.923388 kernel: efi: EFI v2.7 by EDK II
Jan 17 12:19:07.923395 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jan 17 12:19:07.923402 kernel: SMBIOS 2.8 present.
Jan 17 12:19:07.923409 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 17 12:19:07.923415 kernel: Hypervisor detected: KVM
Jan 17 12:19:07.923425 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:19:07.923432 kernel: kvm-clock: using sched offset of 4887910400 cycles
Jan 17 12:19:07.923439 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:19:07.923446 kernel: tsc: Detected 2794.748 MHz processor
Jan 17 12:19:07.923454 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:19:07.923461 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:19:07.923468 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 17 12:19:07.923475 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 12:19:07.923482 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:19:07.923492 kernel: Using GB pages for direct mapping
Jan 17 12:19:07.923499 kernel: Secure boot disabled
Jan 17 12:19:07.923506 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:19:07.923513 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 17 12:19:07.923524 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 17 12:19:07.923531 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:19:07.923539 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:19:07.923549 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 17 12:19:07.923556 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:19:07.923566 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:19:07.923573 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:19:07.923581 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:19:07.923588 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 17 12:19:07.923595 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 17 12:19:07.923605 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 17 12:19:07.923612 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 17 12:19:07.923620 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 17 12:19:07.923627 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 17 12:19:07.923634 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 17 12:19:07.923642 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 17 12:19:07.923652 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 17 12:19:07.923661 kernel: No NUMA configuration found
Jan 17 12:19:07.923671 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 17 12:19:07.923681 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 17 12:19:07.923688 kernel: Zone ranges:
Jan 17 12:19:07.923696 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:19:07.923703 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 17 12:19:07.923710 kernel: Normal empty
Jan 17 12:19:07.923717 kernel: Movable zone start for each node
Jan 17 12:19:07.923725 kernel: Early memory node ranges
Jan 17 12:19:07.923732 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 12:19:07.923739 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 17 12:19:07.923746 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 17 12:19:07.923757 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 17 12:19:07.923764 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 17 12:19:07.923772 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 17 12:19:07.923784 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 17 12:19:07.923792 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:19:07.923799 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 12:19:07.923806 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 17 12:19:07.923814 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:19:07.923822 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 17 12:19:07.923834 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 17 12:19:07.923842 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 17 12:19:07.923879 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 12:19:07.923887 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:19:07.923906 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:19:07.923914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 12:19:07.923921 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:19:07.923929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:19:07.923936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:19:07.923947 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:19:07.923954 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:19:07.923962 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 12:19:07.923969 kernel: TSC deadline timer available
Jan 17 12:19:07.923976 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 17 12:19:07.923984 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 12:19:07.923991 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 17 12:19:07.923998 kernel: kvm-guest: setup PV sched yield
Jan 17 12:19:07.924005 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 17 12:19:07.924015 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:19:07.924023 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:19:07.924030 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 17 12:19:07.924037 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 17 12:19:07.924045 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 17 12:19:07.924052 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 17 12:19:07.924059 kernel: kvm-guest: PV spinlocks enabled
Jan 17 12:19:07.924067 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 12:19:07.924075 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:19:07.924088 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:19:07.924096 kernel: random: crng init done
Jan 17 12:19:07.924103 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:19:07.924111 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:19:07.924118 kernel: Fallback order for Node 0: 0
Jan 17 12:19:07.924125 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 17 12:19:07.924132 kernel: Policy zone: DMA32
Jan 17 12:19:07.924140 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:19:07.924147 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 171124K reserved, 0K cma-reserved)
Jan 17 12:19:07.924157 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 17 12:19:07.924165 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:19:07.924172 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:19:07.924179 kernel: Dynamic Preempt: voluntary
Jan 17 12:19:07.924195 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:19:07.924211 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:19:07.924219 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 17 12:19:07.924227 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:19:07.924235 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:19:07.924243 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:19:07.924250 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:19:07.924261 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 17 12:19:07.924270 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 17 12:19:07.924283 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:19:07.924290 kernel: Console: colour dummy device 80x25
Jan 17 12:19:07.924298 kernel: printk: console [ttyS0] enabled
Jan 17 12:19:07.924309 kernel: ACPI: Core revision 20230628
Jan 17 12:19:07.924317 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 12:19:07.924324 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:19:07.924332 kernel: x2apic enabled
Jan 17 12:19:07.924340 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:19:07.924348 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 17 12:19:07.924356 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 17 12:19:07.924365 kernel: kvm-guest: setup PV IPIs
Jan 17 12:19:07.924382 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 12:19:07.924393 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 12:19:07.924400 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 17 12:19:07.924408 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 12:19:07.924416 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 17 12:19:07.924423 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 17 12:19:07.924432 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:19:07.924439 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 12:19:07.924447 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:19:07.924457 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:19:07.924469 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 17 12:19:07.924477 kernel: RETBleed: Mitigation: untrained return thunk
Jan 17 12:19:07.924485 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:19:07.924493 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:19:07.924503 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 17 12:19:07.924511 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 17 12:19:07.924519 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 17 12:19:07.924527 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:19:07.924538 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:19:07.924549 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:19:07.924560 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:19:07.924570 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 17 12:19:07.924581 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:19:07.924588 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:19:07.924596 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:19:07.924604 kernel: landlock: Up and running.
Jan 17 12:19:07.924611 kernel: SELinux: Initializing.
Jan 17 12:19:07.924619 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:19:07.924630 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:19:07.924638 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 17 12:19:07.924646 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 12:19:07.924654 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 12:19:07.924662 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 12:19:07.924669 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 17 12:19:07.924677 kernel: ... version: 0
Jan 17 12:19:07.924685 kernel: ... bit width: 48
Jan 17 12:19:07.924695 kernel: ... generic registers: 6
Jan 17 12:19:07.924703 kernel: ... value mask: 0000ffffffffffff
Jan 17 12:19:07.924710 kernel: ... max period: 00007fffffffffff
Jan 17 12:19:07.924718 kernel: ... fixed-purpose events: 0
Jan 17 12:19:07.924725 kernel: ... event mask: 000000000000003f
Jan 17 12:19:07.924733 kernel: signal: max sigframe size: 1776
Jan 17 12:19:07.924741 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:19:07.924750 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:19:07.924761 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:19:07.924771 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:19:07.924779 kernel: .... node #0, CPUs: #1 #2 #3
Jan 17 12:19:07.924787 kernel: smp: Brought up 1 node, 4 CPUs
Jan 17 12:19:07.924794 kernel: smpboot: Max logical packages: 1
Jan 17 12:19:07.924802 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 17 12:19:07.924810 kernel: devtmpfs: initialized
Jan 17 12:19:07.924818 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:19:07.924828 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 17 12:19:07.924837 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 17 12:19:07.924845 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 17 12:19:07.924878 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 17 12:19:07.924886 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 17 12:19:07.924894 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:19:07.924901 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 17 12:19:07.924909 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:19:07.924918 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:19:07.924928 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:19:07.924936 kernel: audit: type=2000 audit(1737116347.510:1): state=initialized audit_enabled=0 res=1
Jan 17 12:19:07.924947 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:19:07.924955 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:19:07.924963 kernel: cpuidle: using governor menu
Jan 17 12:19:07.924970 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:19:07.924978 kernel: dca service started, version 1.12.1
Jan 17 12:19:07.924986 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 17 12:19:07.924994 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 17 12:19:07.925001 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:19:07.925009 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:19:07.925019 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:19:07.925027 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:19:07.925035 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:19:07.925043 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:19:07.925050 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:19:07.925058 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:19:07.925065 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:19:07.925073 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:19:07.925081 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:19:07.925091 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:19:07.925098 kernel: ACPI: Interpreter enabled
Jan 17 12:19:07.925106 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 12:19:07.925114 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:19:07.925122 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:19:07.925129 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:19:07.925137 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 12:19:07.925145 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:19:07.925499 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:19:07.925651 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 17 12:19:07.925780 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 17 12:19:07.925790 kernel: PCI host bridge to bus 0000:00
Jan 17 12:19:07.925952 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:19:07.926073 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:19:07.926187 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:19:07.926307 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 17 12:19:07.926434 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 12:19:07.926547 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 17 12:19:07.926675 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:19:07.926920 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 12:19:07.927111 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 17 12:19:07.927302 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 17 12:19:07.927467 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 17 12:19:07.927620 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 12:19:07.927748 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 17 12:19:07.927894 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 12:19:07.928052 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 12:19:07.928180 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 17 12:19:07.928310 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 17 12:19:07.928521 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 17 12:19:07.928693 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:19:07.928823 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 17 12:19:07.928974 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 17 12:19:07.929101 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 17 12:19:07.929243 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:19:07.929426 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 17 12:19:07.929557 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 17 12:19:07.929683 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 17 12:19:07.929817 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 17 12:19:07.929980 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 12:19:07.930108 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 12:19:07.930247 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 12:19:07.930388 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 17 12:19:07.930515 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 17 12:19:07.930657 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 12:19:07.930787 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 17 12:19:07.930798 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:19:07.930806 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:19:07.930813 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:19:07.930821 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:19:07.930833 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 12:19:07.930841 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 12:19:07.930861 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 12:19:07.930869 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 12:19:07.930877 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 12:19:07.930885 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 12:19:07.930893 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 12:19:07.930901 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 12:19:07.930909 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 12:19:07.930920 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 12:19:07.930928 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 12:19:07.930936 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 12:19:07.930944 kernel: iommu: Default domain type: Translated
Jan 17 12:19:07.930952 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:19:07.930960 kernel: efivars: Registered efivars operations
Jan 17 12:19:07.930968 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:19:07.930976 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:19:07.930984 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 17 12:19:07.930995 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 17 12:19:07.931002 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 17 12:19:07.931010 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 17 12:19:07.931157 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 12:19:07.931301 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 12:19:07.931440 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:19:07.931451 kernel: vgaarb: loaded
Jan 17 12:19:07.931459 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 12:19:07.931471 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 12:19:07.931479 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:19:07.931487 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:19:07.931495 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:19:07.931503 kernel: pnp: PnP ACPI init
Jan 17 12:19:07.931656 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 17 12:19:07.931668 kernel: pnp: PnP ACPI: found 6 devices
Jan 17 12:19:07.931676 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:19:07.931688 kernel: NET: Registered PF_INET protocol family
Jan 17 12:19:07.931696 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:19:07.931704 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 12:19:07.931712 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:19:07.931720 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:19:07.931728 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 12:19:07.931736 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 12:19:07.931743 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:19:07.931751 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:19:07.931762 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:19:07.931769 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:19:07.931924 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 17 12:19:07.932080 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 17 12:19:07.932200 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:19:07.932315 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:19:07.932440 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:19:07.932556 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 17 12:19:07.932677 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 17 12:19:07.932791 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 17 12:19:07.932801 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:19:07.932809 kernel: Initialise system trusted keyrings
Jan 17 12:19:07.932817 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 12:19:07.932825 kernel: Key type asymmetric registered
Jan 17 12:19:07.932833 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:19:07.932841 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:19:07.932862 kernel: io scheduler mq-deadline registered
Jan 17 12:19:07.932874 kernel: io scheduler kyber registered
Jan 17 12:19:07.932882 kernel: io scheduler bfq registered
Jan 17 12:19:07.932890 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:19:07.932898 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 17 12:19:07.932906 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 17 12:19:07.932914 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 17 12:19:07.932921 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:19:07.932929 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:19:07.932937 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:19:07.932948 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:19:07.932956 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:19:07.933098 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 17 12:19:07.933109 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:19:07.933243 kernel: rtc_cmos 00:04: registered as rtc0
Jan 17 12:19:07.933382 kernel: rtc_cmos 00:04: setting system clock to 2025-01-17T12:19:07 UTC (1737116347)
Jan 17 12:19:07.933504 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 17 12:19:07.933515 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 12:19:07.933528 kernel: efifb: probing for efifb
Jan 17 12:19:07.933536 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 17 12:19:07.933544 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 17 12:19:07.933551 kernel: efifb: scrolling: redraw
Jan 17 12:19:07.933559 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 17 12:19:07.933567 kernel: Console: switching to colour frame buffer device 100x37
Jan 17 12:19:07.933592 kernel: fb0: EFI VGA frame buffer device
Jan 17 12:19:07.933602 kernel: pstore: Using crash dump compression: deflate
Jan 17 12:19:07.933610 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 12:19:07.933621 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:19:07.933629 kernel: Segment Routing with IPv6
Jan 17 12:19:07.933637 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:19:07.933645 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:19:07.933653 kernel: Key type dns_resolver registered
Jan 17 12:19:07.933661 kernel: IPI shorthand broadcast: enabled
Jan 17 12:19:07.933669 kernel: sched_clock: Marking stable (978003324, 116152399)->(1168414614, -74258891)
Jan 17 12:19:07.933677 kernel: registered taskstats version 1
Jan 17 12:19:07.933685 kernel: Loading compiled-in X.509 certificates
Jan 17 12:19:07.933696 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:19:07.933704 kernel: Key type .fscrypt registered
Jan 17 12:19:07.933711 kernel: Key type fscrypt-provisioning registered
Jan 17 12:19:07.933719 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:19:07.933727 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:19:07.933735 kernel: ima: No architecture policies found
Jan 17 12:19:07.933743 kernel: clk: Disabling unused clocks
Jan 17 12:19:07.933752 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:19:07.933760 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:19:07.933771 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:19:07.933779 kernel: Run /init as init process
Jan 17 12:19:07.933787 kernel: with arguments:
Jan 17 12:19:07.933795 kernel: /init
Jan 17 12:19:07.933802 kernel: with environment:
Jan 17 12:19:07.933810 kernel: HOME=/
Jan 17 12:19:07.933818 kernel: TERM=linux
Jan 17 12:19:07.933827 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:19:07.933864 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:19:07.933880 systemd[1]: Detected virtualization kvm.
Jan 17 12:19:07.933892 systemd[1]: Detected architecture x86-64.
Jan 17 12:19:07.933901 systemd[1]: Running in initrd.
Jan 17 12:19:07.933916 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:19:07.933924 systemd[1]: Hostname set to .
Jan 17 12:19:07.933933 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:19:07.933941 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:19:07.933951 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:19:07.933959 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:19:07.933968 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:19:07.933977 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:19:07.933989 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:19:07.933998 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:19:07.934008 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:19:07.934017 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:19:07.934026 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:19:07.934034 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:19:07.934043 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:19:07.934054 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:19:07.934062 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:19:07.934071 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:19:07.934079 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:19:07.934088 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:19:07.934096 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:19:07.934105 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:19:07.934114 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:19:07.934122 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:19:07.934167 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:19:07.934184 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:19:07.934196 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:19:07.934208 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:19:07.934219 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:19:07.934229 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:19:07.934241 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:19:07.934252 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:19:07.934267 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:19:07.934275 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:19:07.934284 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:19:07.934293 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:19:07.934326 systemd-journald[193]: Collecting audit messages is disabled.
Jan 17 12:19:07.934349 systemd-journald[193]: Journal started
Jan 17 12:19:07.934367 systemd-journald[193]: Runtime Journal (/run/log/journal/d268b23c67904bec9270accaeb144078) is 6.0M, max 48.3M, 42.2M free.
Jan 17 12:19:07.938367 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:19:07.927227 systemd-modules-load[194]: Inserted module 'overlay'
Jan 17 12:19:07.944517 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:19:07.945164 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:19:07.948579 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:19:07.956878 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:19:07.959053 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 17 12:19:07.960088 kernel: Bridge firewalling registered
Jan 17 12:19:07.966036 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:19:07.969280 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:19:07.982129 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:19:07.984967 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:19:07.988591 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:19:07.991133 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:19:07.999057 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:19:08.005084 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:19:08.005463 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:19:08.007254 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:19:08.009966 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:19:08.024182 dracut-cmdline[226]: dracut-dracut-053
Jan 17 12:19:08.027141 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:19:08.047807 systemd-resolved[229]: Positive Trust Anchors:
Jan 17 12:19:08.047826 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:19:08.048291 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:19:08.051742 systemd-resolved[229]: Defaulting to hostname 'linux'.
Jan 17 12:19:08.053225 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:19:08.061158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:19:08.138902 kernel: SCSI subsystem initialized
Jan 17 12:19:08.149875 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:19:08.162879 kernel: iscsi: registered transport (tcp)
Jan 17 12:19:08.189908 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:19:08.190001 kernel: QLogic iSCSI HBA Driver
Jan 17 12:19:08.273087 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:19:08.286058 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:19:08.317517 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:19:08.317605 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:19:08.318821 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:19:08.362903 kernel: raid6: avx2x4 gen() 29532 MB/s
Jan 17 12:19:08.379884 kernel: raid6: avx2x2 gen() 30751 MB/s
Jan 17 12:19:08.397030 kernel: raid6: avx2x1 gen() 25782 MB/s
Jan 17 12:19:08.397100 kernel: raid6: using algorithm avx2x2 gen() 30751 MB/s
Jan 17 12:19:08.415038 kernel: raid6: .... xor() 19541 MB/s, rmw enabled
Jan 17 12:19:08.415152 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 12:19:08.435899 kernel: xor: automatically using best checksumming function avx
Jan 17 12:19:08.604897 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:19:08.621615 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:19:08.633246 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:19:08.652280 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jan 17 12:19:08.658776 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:19:08.674187 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:19:08.690895 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Jan 17 12:19:08.730729 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:19:08.740274 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:19:08.815500 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:19:08.823033 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:19:08.844149 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:19:08.847687 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:19:08.849671 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:19:08.851042 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:19:08.858106 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 17 12:19:08.884730 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 17 12:19:08.889453 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:19:08.889489 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:19:08.889504 kernel: GPT:9289727 != 19775487
Jan 17 12:19:08.889518 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:19:08.889532 kernel: GPT:9289727 != 19775487
Jan 17 12:19:08.889546 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:19:08.889560 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:19:08.866432 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:19:08.884406 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:19:08.888424 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:19:08.888553 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:19:08.901882 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 12:19:08.901916 kernel: AES CTR mode by8 optimization enabled
Jan 17 12:19:08.892671 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:19:08.894022 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:19:08.894171 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:19:08.898969 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:19:08.909563 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:19:08.924896 kernel: libata version 3.00 loaded.
Jan 17 12:19:08.929013 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:19:08.929182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:19:08.936380 kernel: ahci 0000:00:1f.2: version 3.0
Jan 17 12:19:08.974339 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 17 12:19:08.974381 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 17 12:19:08.974627 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 17 12:19:08.974824 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (479)
Jan 17 12:19:08.974841 kernel: scsi host0: ahci
Jan 17 12:19:08.975117 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (468)
Jan 17 12:19:08.975135 kernel: scsi host1: ahci
Jan 17 12:19:08.975339 kernel: scsi host2: ahci
Jan 17 12:19:08.975557 kernel: scsi host3: ahci
Jan 17 12:19:08.975755 kernel: scsi host4: ahci
Jan 17 12:19:08.976041 kernel: scsi host5: ahci
Jan 17 12:19:08.976241 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 17 12:19:08.976257 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 17 12:19:08.976271 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 17 12:19:08.976285 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 17 12:19:08.976299 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 17 12:19:08.976319 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 17 12:19:08.957221 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 12:19:08.966465 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 12:19:08.980904 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:19:08.988173 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 12:19:08.988309 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 12:19:09.013221 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:19:09.015771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:19:09.035943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:19:09.049831 disk-uuid[567]: Primary Header is updated.
Jan 17 12:19:09.049831 disk-uuid[567]: Secondary Entries is updated.
Jan 17 12:19:09.049831 disk-uuid[567]: Secondary Header is updated.
Jan 17 12:19:09.051126 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:19:09.057884 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:19:09.063902 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:19:09.073738 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:19:09.286900 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 17 12:19:09.287031 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 17 12:19:09.287891 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 17 12:19:09.288887 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 17 12:19:09.289884 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 17 12:19:09.290884 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 17 12:19:09.291891 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 17 12:19:09.291906 kernel: ata3.00: applying bridge limits
Jan 17 12:19:09.293009 kernel: ata3.00: configured for UDMA/100
Jan 17 12:19:09.293894 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 17 12:19:09.339000 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 17 12:19:09.359012 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 12:19:09.359036 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 17 12:19:10.065595 disk-uuid[572]: The operation has completed successfully.
Jan 17 12:19:10.067088 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:19:10.097368 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:19:10.097499 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:19:10.118994 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:19:10.122501 sh[597]: Success
Jan 17 12:19:10.136873 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 17 12:19:10.175233 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:19:10.186053 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:19:10.189792 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:19:10.200545 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:19:10.200582 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:19:10.200593 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:19:10.202744 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:19:10.202775 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:19:10.207917 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:19:10.209761 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:19:10.220044 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:19:10.221966 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:19:10.231385 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:19:10.231412 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:19:10.231423 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:19:10.234942 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:19:10.244524 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:19:10.246496 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:19:10.256009 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:19:10.263999 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:19:10.333011 ignition[689]: Ignition 2.19.0
Jan 17 12:19:10.333775 ignition[689]: Stage: fetch-offline
Jan 17 12:19:10.333833 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:19:10.333864 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:19:10.333990 ignition[689]: parsed url from cmdline: ""
Jan 17 12:19:10.333995 ignition[689]: no config URL provided
Jan 17 12:19:10.334001 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:19:10.334013 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:19:10.334049 ignition[689]: op(1): [started] loading QEMU firmware config module
Jan 17 12:19:10.334056 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 17 12:19:10.343770 ignition[689]: op(1): [finished] loading QEMU firmware config module
Jan 17 12:19:10.354353 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:19:10.362477 ignition[689]: parsing config with SHA512: 2c7695a72a18f7d112b802f5272f52b2b34846d119e18ef36804280eb0805ab4df577f82d00c2062d6f2225747146c17fd8d5a35d2e8707749bb4ae8286addf8
Jan 17 12:19:10.366936 unknown[689]: fetched base config from "system"
Jan 17 12:19:10.366957 unknown[689]: fetched user config from "qemu"
Jan 17 12:19:10.367442 ignition[689]: fetch-offline: fetch-offline passed
Jan 17 12:19:10.367524 ignition[689]: Ignition finished successfully
Jan 17 12:19:10.369377 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:19:10.371821 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:19:10.399495 systemd-networkd[785]: lo: Link UP
Jan 17 12:19:10.399509 systemd-networkd[785]: lo: Gained carrier
Jan 17 12:19:10.401681 systemd-networkd[785]: Enumeration completed
Jan 17 12:19:10.401802 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:19:10.402279 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:19:10.402284 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:19:10.404103 systemd[1]: Reached target network.target - Network.
Jan 17 12:19:10.404662 systemd-networkd[785]: eth0: Link UP
Jan 17 12:19:10.404667 systemd-networkd[785]: eth0: Gained carrier
Jan 17 12:19:10.404677 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:19:10.406355 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 17 12:19:10.419047 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:19:10.422914 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 12:19:10.433467 ignition[788]: Ignition 2.19.0
Jan 17 12:19:10.433480 ignition[788]: Stage: kargs
Jan 17 12:19:10.433690 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:19:10.433702 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:19:10.434571 ignition[788]: kargs: kargs passed
Jan 17 12:19:10.434628 ignition[788]: Ignition finished successfully
Jan 17 12:19:10.439232 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:19:10.451286 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:19:10.469624 ignition[797]: Ignition 2.19.0
Jan 17 12:19:10.469638 ignition[797]: Stage: disks
Jan 17 12:19:10.469896 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:19:10.469913 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:19:10.471052 ignition[797]: disks: disks passed
Jan 17 12:19:10.471112 ignition[797]: Ignition finished successfully
Jan 17 12:19:10.477863 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:19:10.479246 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:19:10.481324 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:19:10.482711 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:19:10.482774 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:19:10.483138 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:19:10.496070 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:19:10.521086 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:19:10.673371 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:19:10.688066 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:19:10.789920 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:19:10.791343 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:19:10.794101 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:19:10.813034 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:19:10.815928 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:19:10.819340 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 12:19:10.819409 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:19:10.819439 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:19:10.827878 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815)
Jan 17 12:19:10.827929 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:19:10.829569 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:19:10.829592 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:19:10.831189 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:19:10.834317 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:19:10.836479 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:19:10.850085 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:19:10.887515 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:19:10.892304 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:19:10.897947 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:19:10.911085 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:19:11.006384 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:19:11.015023 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:19:11.018663 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:19:11.025876 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:19:11.070211 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:19:11.146233 ignition[930]: INFO : Ignition 2.19.0 Jan 17 12:19:11.146233 ignition[930]: INFO : Stage: mount Jan 17 12:19:11.149056 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:19:11.149056 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:19:11.149056 ignition[930]: INFO : mount: mount passed Jan 17 12:19:11.149056 ignition[930]: INFO : Ignition finished successfully Jan 17 12:19:11.154911 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:19:11.169224 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:19:11.200261 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:19:11.213181 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:19:11.225592 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942) Jan 17 12:19:11.225666 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:19:11.225684 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:19:11.227377 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:19:11.230881 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:19:11.232647 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:19:11.318895 ignition[959]: INFO : Ignition 2.19.0 Jan 17 12:19:11.318895 ignition[959]: INFO : Stage: files Jan 17 12:19:11.320936 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:19:11.320936 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:19:11.324160 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:19:11.326628 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:19:11.326628 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:19:11.331099 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:19:11.332953 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:19:11.334844 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:19:11.333556 unknown[959]: wrote ssh authorized keys file for user: core Jan 17 12:19:11.338571 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:19:11.338571 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:19:11.377844 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:19:11.512776 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:19:11.512776 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:19:11.517560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:19:11.517560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:19:11.517560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:19:11.517560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:19:11.517560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:19:11.517560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:19:11.517560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:19:11.517560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:19:11.517560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:19:11.517560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:19:11.517560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:19:11.517560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:19:11.517560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:19:11.728104 systemd-networkd[785]: eth0: Gained IPv6LL Jan 17 12:19:11.860733 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 12:19:12.565922 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:19:12.565922 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 12:19:12.570057 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:19:12.570057 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:19:12.570057 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 12:19:12.570057 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 17 12:19:12.570057 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:19:12.570057 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:19:12.570057 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 17 12:19:12.570057 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 12:19:12.596080 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:19:12.601918 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:19:12.603561 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 12:19:12.603561 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:19:12.603561 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:19:12.603561 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:19:12.603561 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:19:12.603561 ignition[959]: INFO : files: files passed Jan 17 12:19:12.603561 ignition[959]: INFO : Ignition finished successfully Jan 17 12:19:12.605938 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:19:12.617058 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:19:12.619881 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 17 12:19:12.622463 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:19:12.622580 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:19:12.631497 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 12:19:12.634444 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:19:12.634444 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:19:12.638356 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:19:12.641216 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:19:12.644938 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:19:12.657113 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:19:12.684303 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:19:12.684439 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:19:12.686901 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:19:12.687989 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:19:12.688366 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:19:12.689367 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:19:12.718353 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:19:12.727136 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:19:12.744880 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:19:12.745173 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:19:12.751260 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:19:12.751468 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:19:12.751647 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:19:12.757600 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:19:12.757794 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:19:12.761086 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:19:12.763258 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:19:12.765531 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:19:12.766805 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:19:12.770424 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:19:12.773029 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:19:12.775533 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:19:12.777919 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:19:12.779069 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:19:12.779273 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:19:12.781436 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 17 12:19:12.781823 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:19:12.782176 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:19:12.782385 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:19:12.792829 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:19:12.793069 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:19:12.795955 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:19:12.796124 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:19:12.799443 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:19:12.799580 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:19:12.799740 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:19:12.801320 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:19:12.801645 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:19:12.802155 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:19:12.802300 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:19:12.808433 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:19:12.808564 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:19:12.811466 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:19:12.811629 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:19:12.812628 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:19:12.812802 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:19:12.826128 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:19:12.828780 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:19:12.828902 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:19:12.829043 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:19:12.831234 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:19:12.831392 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:19:12.838872 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:19:12.839055 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:19:12.861292 ignition[1013]: INFO : Ignition 2.19.0 Jan 17 12:19:12.861292 ignition[1013]: INFO : Stage: umount Jan 17 12:19:12.870703 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:19:12.870703 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:19:12.870703 ignition[1013]: INFO : umount: umount passed Jan 17 12:19:12.870703 ignition[1013]: INFO : Ignition finished successfully Jan 17 12:19:12.868165 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:19:12.868329 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:19:12.870227 systemd[1]: Stopped target network.target - Network. Jan 17 12:19:12.871627 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:19:12.871697 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 17 12:19:12.874782 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:19:12.874835 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:19:12.876618 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:19:12.876679 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:19:12.877569 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:19:12.877621 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:19:12.878075 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:19:12.883066 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:19:12.886964 systemd-networkd[785]: eth0: DHCPv6 lease lost Jan 17 12:19:12.889121 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:19:12.889279 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:19:12.891108 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:19:12.891196 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:19:12.899964 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:19:12.901202 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:19:12.901339 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:19:12.905091 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:19:12.909379 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:19:12.909534 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:19:12.914999 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:19:12.915089 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:19:12.916501 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:19:12.916553 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:19:12.917471 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:19:12.917536 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:19:12.923950 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:19:12.924094 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:19:12.926096 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:19:12.926279 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:19:12.929193 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:19:12.929344 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:19:12.930188 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:19:12.930250 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:19:12.930488 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:19:12.930538 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:19:12.931351 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:19:12.931406 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:19:12.939579 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 17 12:19:12.939718 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:19:12.953086 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:19:12.953213 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:19:12.953305 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:19:12.956730 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:19:12.956786 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:19:12.959208 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:19:12.959274 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:19:12.961928 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:19:12.961984 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:19:12.966315 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:19:12.979956 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:19:12.980137 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:19:13.141327 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:19:13.141487 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:19:13.144447 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:19:13.146524 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:19:13.147492 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:19:13.164022 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:19:13.171481 systemd[1]: Switching root. Jan 17 12:19:13.205430 systemd-journald[193]: Journal stopped Jan 17 12:19:14.647924 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jan 17 12:19:14.648013 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:19:14.652996 kernel: SELinux: policy capability open_perms=1 Jan 17 12:19:14.653026 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:19:14.653042 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:19:14.653063 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:19:14.653079 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:19:14.653094 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:19:14.653110 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:19:14.653125 kernel: audit: type=1403 audit(1737116353.863:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:19:14.653153 systemd[1]: Successfully loaded SELinux policy in 42.909ms. Jan 17 12:19:14.653181 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.940ms. Jan 17 12:19:14.653207 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:19:14.653225 systemd[1]: Detected virtualization kvm. Jan 17 12:19:14.653246 systemd[1]: Detected architecture x86-64. Jan 17 12:19:14.653262 systemd[1]: Detected first boot. 
Jan 17 12:19:14.653279 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:19:14.653296 zram_generator::config[1057]: No configuration found. Jan 17 12:19:14.653314 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:19:14.653330 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:19:14.653347 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:19:14.653363 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:19:14.653391 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:19:14.653409 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:19:14.653425 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:19:14.653442 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:19:14.653458 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:19:14.653475 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:19:14.653492 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:19:14.653515 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:19:14.653536 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:19:14.653555 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:19:14.653571 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:19:14.653588 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:19:14.653604 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:19:14.653632 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:19:14.653649 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:19:14.653665 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:19:14.653682 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:19:14.653702 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:19:14.653718 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:19:14.653735 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:19:14.653752 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:19:14.653768 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:19:14.653784 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:19:14.653801 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:19:14.653817 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:19:14.653837 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:19:14.653871 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:19:14.653888 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:19:14.653906 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 17 12:19:14.653930 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:19:14.653950 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:19:14.653966 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:19:14.653983 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:19:14.654000 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:19:14.654021 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:19:14.654037 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:19:14.654054 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:19:14.654071 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:19:14.654088 systemd[1]: Reached target machines.target - Containers. Jan 17 12:19:14.654105 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:19:14.654121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:19:14.654138 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:19:14.654155 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:19:14.654174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:19:14.654199 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:19:14.654217 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:19:14.654234 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:19:14.654250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:19:14.654267 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:19:14.654284 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:19:14.654302 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:19:14.654326 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:19:14.654343 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:19:14.654360 kernel: fuse: init (API version 7.39) Jan 17 12:19:14.654376 kernel: loop: module loaded Jan 17 12:19:14.654391 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:19:14.654407 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:19:14.654424 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:19:14.654471 systemd-journald[1124]: Collecting audit messages is disabled. Jan 17 12:19:14.654506 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:19:14.654523 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:19:14.654540 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:19:14.654557 systemd[1]: Stopped verity-setup.service. 
Jan 17 12:19:14.654576 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:19:14.654593 systemd-journald[1124]: Journal started Jan 17 12:19:14.654622 systemd-journald[1124]: Runtime Journal (/run/log/journal/d268b23c67904bec9270accaeb144078) is 6.0M, max 48.3M, 42.2M free. Jan 17 12:19:14.419500 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:19:14.439106 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:19:14.439612 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:19:14.657673 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:19:14.660602 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:19:14.661874 kernel: ACPI: bus type drm_connector registered Jan 17 12:19:14.662759 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:19:14.664575 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:19:14.666025 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:19:14.667633 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:19:14.669258 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:19:14.670902 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:19:14.672798 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:19:14.674947 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:19:14.675184 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:19:14.677144 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:19:14.677385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:19:14.679163 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:19:14.679401 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:19:14.681144 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:19:14.681388 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:19:14.683429 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:19:14.683659 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:19:14.685411 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:19:14.685638 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:19:14.687600 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:19:14.689359 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:19:14.691487 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:19:14.708380 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:19:14.715088 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:19:14.718454 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:19:14.719898 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jan 17 12:19:14.719944 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:19:14.722798 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:19:14.725651 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:19:14.730346 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:19:14.732090 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:19:14.734571 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:19:14.739707 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:19:14.741317 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:19:14.746578 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:19:14.748007 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:19:14.751273 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:19:14.757790 systemd-journald[1124]: Time spent on flushing to /var/log/journal/d268b23c67904bec9270accaeb144078 is 19.714ms for 993 entries. Jan 17 12:19:14.757790 systemd-journald[1124]: System Journal (/var/log/journal/d268b23c67904bec9270accaeb144078) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:19:14.790529 systemd-journald[1124]: Received client request to flush runtime journal. Jan 17 12:19:14.757740 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:19:14.767021 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:19:14.772724 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:19:14.778325 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:19:14.779951 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:19:14.787233 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:19:14.789222 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:19:14.794084 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 12:19:14.793964 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:19:14.801310 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:19:14.815269 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:19:14.818523 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:19:14.821596 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:19:14.827351 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:19:14.836450 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Jan 17 12:19:14.836477 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Jan 17 12:19:14.836497 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Jan 17 12:19:14.846734 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:19:14.847814 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:19:14.849775 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:19:14.857466 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:19:14.860876 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 12:19:14.898357 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:19:14.906334 kernel: loop2: detected capacity change from 0 to 211296 Jan 17 12:19:14.914152 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:19:14.972152 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 17 12:19:14.972176 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 17 12:19:14.972876 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 12:19:14.980039 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:19:14.996887 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 12:19:15.031894 kernel: loop5: detected capacity change from 0 to 211296 Jan 17 12:19:15.041270 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 12:19:15.041932 (sd-merge)[1198]: Merged extensions into '/usr'. Jan 17 12:19:15.047126 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:19:15.047145 systemd[1]: Reloading... Jan 17 12:19:15.156884 zram_generator::config[1228]: No configuration found. Jan 17 12:19:15.298297 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:19:15.301988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:19:15.360517 systemd[1]: Reloading finished in 312 ms. Jan 17 12:19:15.392924 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:19:15.394732 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:19:15.424253 systemd[1]: Starting ensure-sysext.service... Jan 17 12:19:15.426774 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:19:15.433575 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:19:15.433598 systemd[1]: Reloading... Jan 17 12:19:15.466620 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:19:15.467102 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:19:15.469341 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:19:15.469992 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Jan 17 12:19:15.470158 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Jan 17 12:19:15.518055 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 17 12:19:15.518073 systemd-tmpfiles[1263]: Skipping /boot Jan 17 12:19:15.536664 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:19:15.536682 systemd-tmpfiles[1263]: Skipping /boot Jan 17 12:19:15.561882 zram_generator::config[1288]: No configuration found. Jan 17 12:19:15.693546 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:19:15.743279 systemd[1]: Reloading finished in 309 ms. Jan 17 12:19:15.762647 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:19:15.774631 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:19:15.785271 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:19:15.787920 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:19:15.790416 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:19:15.794952 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:19:15.800505 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:19:15.805763 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:19:15.810218 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:19:15.810419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:19:15.813128 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:19:15.818336 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:19:15.822243 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:19:15.825277 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:19:15.831304 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:19:15.832412 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:19:15.833652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:19:15.835090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:19:15.837176 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:19:15.837446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:19:15.839447 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:19:15.839744 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:19:15.845295 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:19:15.845781 augenrules[1353]: No rules Jan 17 12:19:15.847327 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:19:15.848769 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Jan 17 12:19:15.853844 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 17 12:19:15.858761 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:19:15.858964 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:19:15.866209 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:19:15.869308 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:19:15.872615 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:19:15.873821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:19:15.877212 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:19:15.878396 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:19:15.879350 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:19:15.881511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:19:15.881708 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:19:15.883409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:19:15.883597 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:19:15.892274 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:19:15.892483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:19:15.895241 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:19:15.896971 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:19:15.908281 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:19:15.916527 systemd[1]: Finished ensure-sysext.service. Jan 17 12:19:15.920993 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:19:15.921163 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:19:15.929194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:19:15.936105 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:19:15.942114 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:19:15.946064 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:19:15.947238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:19:15.950043 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:19:15.954028 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:19:15.955176 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:19:15.955206 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 12:19:15.955813 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:19:15.956090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:19:15.957705 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:19:15.957912 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:19:15.958650 systemd-resolved[1332]: Positive Trust Anchors: Jan 17 12:19:15.958659 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:19:15.958690 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:19:15.959440 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:19:15.959611 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:19:15.962907 systemd-resolved[1332]: Defaulting to hostname 'linux'. Jan 17 12:19:15.969183 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:19:15.973410 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:19:15.973637 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:19:15.979178 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:19:15.979969 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:19:15.981737 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:19:15.981880 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:19:15.996966 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1380) Jan 17 12:19:16.041221 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:19:16.055139 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 17 12:19:16.060149 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:19:16.062877 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:19:16.074591 systemd-networkd[1402]: lo: Link UP Jan 17 12:19:16.074615 systemd-networkd[1402]: lo: Gained carrier Jan 17 12:19:16.076431 systemd-networkd[1402]: Enumeration completed Jan 17 12:19:16.077025 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:19:16.077193 systemd[1]: Reached target network.target - Network. Jan 17 12:19:16.077572 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:19:16.077582 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 17 12:19:16.078909 systemd-networkd[1402]: eth0: Link UP Jan 17 12:19:16.078913 systemd-networkd[1402]: eth0: Gained carrier Jan 17 12:19:16.078925 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:19:16.081159 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 17 12:19:16.097025 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 12:19:16.097513 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 12:19:16.097719 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 12:19:16.099124 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:19:16.099485 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:19:16.100629 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Jan 17 12:19:16.101569 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:19:17.073223 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:19:17.073417 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 12:19:17.073489 systemd-timesyncd[1403]: Initial clock synchronization to Fri 2025-01-17 12:19:17.072898 UTC. Jan 17 12:19:17.073675 systemd-resolved[1332]: Clock change detected. Flushing caches. Jan 17 12:19:17.076318 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:19:17.082836 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 17 12:19:17.105823 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:19:17.110105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:19:17.114001 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:19:17.114258 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:19:17.120697 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:19:17.199505 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:19:17.216221 kernel: kvm_amd: TSC scaling supported Jan 17 12:19:17.216328 kernel: kvm_amd: Nested Virtualization enabled Jan 17 12:19:17.216343 kernel: kvm_amd: Nested Paging enabled Jan 17 12:19:17.217382 kernel: kvm_amd: LBR virtualization supported Jan 17 12:19:17.217403 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 12:19:17.218801 kernel: kvm_amd: Virtual GIF supported Jan 17 12:19:17.238813 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:19:17.278523 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:19:17.287997 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:19:17.298250 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:19:17.338292 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:19:17.339982 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:19:17.341123 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:19:17.342442 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 17 12:19:17.343794 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:19:17.345259 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:19:17.346477 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:19:17.347794 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:19:17.349090 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:19:17.349119 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:19:17.350002 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:19:17.351807 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:19:17.354631 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:19:17.362514 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:19:17.365574 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:19:17.367293 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:19:17.368624 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:19:17.369666 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:19:17.370717 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:19:17.370751 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:19:17.372136 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:19:17.374994 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:19:17.377966 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:19:17.381010 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:19:17.383530 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:19:17.384939 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:19:17.387594 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:19:17.389894 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:19:17.391709 jq[1443]: false Jan 17 12:19:17.393768 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:19:17.402061 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:19:17.409442 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:19:17.411204 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:19:17.411919 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:19:17.413862 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:19:17.417004 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 17 12:19:17.419636 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:19:17.424641 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:19:17.424928 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:19:17.428488 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:19:17.428980 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:19:17.429503 extend-filesystems[1444]: Found loop3 Jan 17 12:19:17.432126 extend-filesystems[1444]: Found loop4 Jan 17 12:19:17.432126 extend-filesystems[1444]: Found loop5 Jan 17 12:19:17.432126 extend-filesystems[1444]: Found sr0 Jan 17 12:19:17.432126 extend-filesystems[1444]: Found vda Jan 17 12:19:17.432126 extend-filesystems[1444]: Found vda1 Jan 17 12:19:17.432126 extend-filesystems[1444]: Found vda2 Jan 17 12:19:17.432126 extend-filesystems[1444]: Found vda3 Jan 17 12:19:17.432126 extend-filesystems[1444]: Found usr Jan 17 12:19:17.432126 extend-filesystems[1444]: Found vda4 Jan 17 12:19:17.432126 extend-filesystems[1444]: Found vda6 Jan 17 12:19:17.432126 extend-filesystems[1444]: Found vda7 Jan 17 12:19:17.432126 extend-filesystems[1444]: Found vda9 Jan 17 12:19:17.432126 extend-filesystems[1444]: Checking size of /dev/vda9 Jan 17 12:19:17.449827 jq[1455]: true Jan 17 12:19:17.452896 dbus-daemon[1442]: [system] SELinux support is enabled Jan 17 12:19:17.456113 update_engine[1454]: I20250117 12:19:17.450841 1454 main.cc:92] Flatcar Update Engine starting Jan 17 12:19:17.457222 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:19:17.466035 update_engine[1454]: I20250117 12:19:17.459203 1454 update_check_scheduler.cc:74] Next update check in 9m54s Jan 17 12:19:17.463482 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:19:17.463719 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:19:17.466403 extend-filesystems[1444]: Resized partition /dev/vda9 Jan 17 12:19:17.474901 extend-filesystems[1477]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:19:17.497236 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1369) Jan 17 12:19:17.497291 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 12:19:17.502142 tar[1458]: linux-amd64/helm Jan 17 12:19:17.477156 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:19:17.478182 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:19:17.502763 jq[1470]: true Jan 17 12:19:17.478207 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:19:17.482886 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:19:17.482905 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:19:17.486630 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:19:17.499085 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
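The extend-filesystems run above walks the block devices, picks /dev/vda9, and hands it to resize2fs 1.47.1; the lines that follow confirm an online grow of the mounted root ext4 to 1,864,699 4 KiB blocks (1864699 x 4096 bytes, roughly 7.1 GiB). A rough manual equivalent, assuming the underlying partition has already been enlarged:

  # grow a mounted ext4 filesystem to fill its (already enlarged) partition
  resize2fs /dev/vda9
  # verify the new size from the mount point
  df -h /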
Jan 17 12:19:17.527078 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 12:19:17.597872 extend-filesystems[1477]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:19:17.597872 extend-filesystems[1477]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:19:17.597872 extend-filesystems[1477]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 12:19:17.623050 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Jan 17 12:19:17.600038 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:19:17.600077 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:19:17.601585 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:19:17.601873 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:19:17.606927 systemd-logind[1451]: New seat seat0. Jan 17 12:19:17.614309 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:19:17.635562 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:19:17.653096 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:19:17.654611 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:19:17.658996 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 12:19:17.736742 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:19:17.764980 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:19:17.809302 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:19:17.820280 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:19:17.820672 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:19:17.830264 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:19:17.854307 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:19:17.865992 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:19:17.868756 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:19:17.870293 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:19:17.935389 containerd[1474]: time="2025-01-17T12:19:17.935228760Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:19:17.961369 containerd[1474]: time="2025-01-17T12:19:17.961316684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:19:17.963759 containerd[1474]: time="2025-01-17T12:19:17.963716845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:19:17.963759 containerd[1474]: time="2025-01-17T12:19:17.963746771Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:19:17.963759 containerd[1474]: time="2025-01-17T12:19:17.963763112Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 17 12:19:17.964026 containerd[1474]: time="2025-01-17T12:19:17.964002711Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:19:17.964026 containerd[1474]: time="2025-01-17T12:19:17.964023500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:19:17.964133 containerd[1474]: time="2025-01-17T12:19:17.964111365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:19:17.964133 containerd[1474]: time="2025-01-17T12:19:17.964128257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:19:17.964374 containerd[1474]: time="2025-01-17T12:19:17.964350043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:19:17.964374 containerd[1474]: time="2025-01-17T12:19:17.964369099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:19:17.964496 containerd[1474]: time="2025-01-17T12:19:17.964382554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:19:17.964496 containerd[1474]: time="2025-01-17T12:19:17.964392933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:19:17.964556 containerd[1474]: time="2025-01-17T12:19:17.964500705Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:19:17.964799 containerd[1474]: time="2025-01-17T12:19:17.964764741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:19:17.964937 containerd[1474]: time="2025-01-17T12:19:17.964913319Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:19:17.964937 containerd[1474]: time="2025-01-17T12:19:17.964933547Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:19:17.965091 containerd[1474]: time="2025-01-17T12:19:17.965070975Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:19:17.965161 containerd[1474]: time="2025-01-17T12:19:17.965142609Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:19:17.972087 containerd[1474]: time="2025-01-17T12:19:17.972045655Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:19:17.972136 containerd[1474]: time="2025-01-17T12:19:17.972125084Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:19:17.972136 containerd[1474]: time="2025-01-17T12:19:17.972143158Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 17 12:19:17.972216 containerd[1474]: time="2025-01-17T12:19:17.972159298Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:19:17.972216 containerd[1474]: time="2025-01-17T12:19:17.972173024Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:19:17.972370 containerd[1474]: time="2025-01-17T12:19:17.972315431Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:19:17.972574 containerd[1474]: time="2025-01-17T12:19:17.972546083Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:19:17.972713 containerd[1474]: time="2025-01-17T12:19:17.972669435Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:19:17.972713 containerd[1474]: time="2025-01-17T12:19:17.972691256Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:19:17.972713 containerd[1474]: time="2025-01-17T12:19:17.972704621Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:19:17.972824 containerd[1474]: time="2025-01-17T12:19:17.972717675Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:19:17.972824 containerd[1474]: time="2025-01-17T12:19:17.972732763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:19:17.972824 containerd[1474]: time="2025-01-17T12:19:17.972745056Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:19:17.972824 containerd[1474]: time="2025-01-17T12:19:17.972758331Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:19:17.972824 containerd[1474]: time="2025-01-17T12:19:17.972773269Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:19:17.972824 containerd[1474]: time="2025-01-17T12:19:17.972801613Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:19:17.972824 containerd[1474]: time="2025-01-17T12:19:17.972817282Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:19:17.972824 containerd[1474]: time="2025-01-17T12:19:17.972831499Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:19:17.972977 containerd[1474]: time="2025-01-17T12:19:17.972856395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.972977 containerd[1474]: time="2025-01-17T12:19:17.972881132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.972977 containerd[1474]: time="2025-01-17T12:19:17.972895459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.972977 containerd[1474]: time="2025-01-17T12:19:17.972908122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 17 12:19:17.972977 containerd[1474]: time="2025-01-17T12:19:17.972920375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.972977 containerd[1474]: time="2025-01-17T12:19:17.972934001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.972977 containerd[1474]: time="2025-01-17T12:19:17.972945863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.972977 containerd[1474]: time="2025-01-17T12:19:17.972960170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.972977 containerd[1474]: time="2025-01-17T12:19:17.972973555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.973529 containerd[1474]: time="2025-01-17T12:19:17.972988693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.973529 containerd[1474]: time="2025-01-17T12:19:17.973001087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.973529 containerd[1474]: time="2025-01-17T12:19:17.973012408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.973529 containerd[1474]: time="2025-01-17T12:19:17.973024220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.973529 containerd[1474]: time="2025-01-17T12:19:17.973040551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:19:17.973529 containerd[1474]: time="2025-01-17T12:19:17.973068363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.973529 containerd[1474]: time="2025-01-17T12:19:17.973081047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.973529 containerd[1474]: time="2025-01-17T12:19:17.973091887Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:19:17.973529 containerd[1474]: time="2025-01-17T12:19:17.973148733Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:19:17.973529 containerd[1474]: time="2025-01-17T12:19:17.973164102Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:19:17.973529 containerd[1474]: time="2025-01-17T12:19:17.973175754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:19:17.973529 containerd[1474]: time="2025-01-17T12:19:17.973187286Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:19:17.973529 containerd[1474]: time="2025-01-17T12:19:17.973196734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.973870 containerd[1474]: time="2025-01-17T12:19:17.973211611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 17 12:19:17.973870 containerd[1474]: time="2025-01-17T12:19:17.973223764Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:19:17.973870 containerd[1474]: time="2025-01-17T12:19:17.973233723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:19:17.973953 containerd[1474]: time="2025-01-17T12:19:17.973534206Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:19:17.973953 containerd[1474]: time="2025-01-17T12:19:17.973594419Z" level=info msg="Connect containerd service" Jan 17 12:19:17.973953 containerd[1474]: time="2025-01-17T12:19:17.973629345Z" level=info msg="using legacy CRI server" Jan 17 12:19:17.973953 containerd[1474]: time="2025-01-17T12:19:17.973636308Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:19:17.976047 containerd[1474]: time="2025-01-17T12:19:17.974572594Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:19:17.976047 
containerd[1474]: time="2025-01-17T12:19:17.975824822Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:19:17.976477 containerd[1474]: time="2025-01-17T12:19:17.976341631Z" level=info msg="Start subscribing containerd event" Jan 17 12:19:17.976652 containerd[1474]: time="2025-01-17T12:19:17.976579998Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:19:17.976652 containerd[1474]: time="2025-01-17T12:19:17.976607500Z" level=info msg="Start recovering state" Jan 17 12:19:17.976849 containerd[1474]: time="2025-01-17T12:19:17.976673534Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:19:17.976849 containerd[1474]: time="2025-01-17T12:19:17.976802536Z" level=info msg="Start event monitor" Jan 17 12:19:17.976910 containerd[1474]: time="2025-01-17T12:19:17.976875983Z" level=info msg="Start snapshots syncer" Jan 17 12:19:17.976910 containerd[1474]: time="2025-01-17T12:19:17.976893877Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:19:17.976985 containerd[1474]: time="2025-01-17T12:19:17.976909857Z" level=info msg="Start streaming server" Jan 17 12:19:17.977143 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:19:17.978523 containerd[1474]: time="2025-01-17T12:19:17.978492755Z" level=info msg="containerd successfully booted in 0.045558s" Jan 17 12:19:18.123737 tar[1458]: linux-amd64/LICENSE Jan 17 12:19:18.123737 tar[1458]: linux-amd64/README.md Jan 17 12:19:18.143566 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:19:18.778084 systemd-networkd[1402]: eth0: Gained IPv6LL Jan 17 12:19:18.781871 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:19:18.783730 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:19:18.793056 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:19:18.795670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:18.798215 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:19:18.819255 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:19:18.819585 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 12:19:18.821594 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:19:18.827627 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:19:20.109497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:20.111717 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:19:20.113298 systemd[1]: Startup finished in 1.116s (kernel) + 6.149s (initrd) + 5.321s (userspace) = 12.586s. 
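containerd starts with the CRI plugin enabled but logs "no network config found in /etc/cni/net.d"; on a fresh node this is expected, since the pod network add-on that normally drops a config there has not run yet. For illustration only, a minimal bridge/host-local conflist of the shape the "cni network conf syncer" watches for (file name and subnet are placeholders, not values from this host):

  mkdir -p /etc/cni/net.d
  cat <<'EOF' >/etc/cni/net.d/10-example.conflist
  {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF

On a real cluster the network plugin (flannel, Calico, Cilium, and so on) writes this file itself, at which point the error above clears on its own.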
Jan 17 12:19:20.115920 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:19:20.978485 kubelet[1556]: E0117 12:19:20.978367 1556 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:19:20.983733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:19:20.983993 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:19:20.984385 systemd[1]: kubelet.service: Consumed 2.118s CPU time. Jan 17 12:19:22.432647 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:19:22.444072 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:60748.service - OpenSSH per-connection server daemon (10.0.0.1:60748). Jan 17 12:19:22.488343 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 60748 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:19:22.490899 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:22.499541 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:19:22.514104 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:19:22.516104 systemd-logind[1451]: New session 1 of user core. Jan 17 12:19:22.528219 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:19:22.541113 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:19:22.544491 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:19:22.673584 systemd[1574]: Queued start job for default target default.target. Jan 17 12:19:22.683338 systemd[1574]: Created slice app.slice - User Application Slice. Jan 17 12:19:22.683371 systemd[1574]: Reached target paths.target - Paths. Jan 17 12:19:22.683389 systemd[1574]: Reached target timers.target - Timers. Jan 17 12:19:22.685317 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:19:22.698615 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:19:22.698819 systemd[1574]: Reached target sockets.target - Sockets. Jan 17 12:19:22.698844 systemd[1574]: Reached target basic.target - Basic System. Jan 17 12:19:22.698908 systemd[1574]: Reached target default.target - Main User Target. Jan 17 12:19:22.698967 systemd[1574]: Startup finished in 145ms. Jan 17 12:19:22.699419 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:19:22.701434 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:19:22.762553 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:60762.service - OpenSSH per-connection server daemon (10.0.0.1:60762). Jan 17 12:19:22.805267 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 60762 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:19:22.807029 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:22.811886 systemd-logind[1451]: New session 2 of user core. Jan 17 12:19:22.823104 systemd[1]: Started session-2.scope - Session 2 of User core. 
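The kubelet exit at the top of this stretch is the missing-config case: run.go:74 aborts because /var/lib/kubelet/config.yaml does not exist, and systemd keeps restarting the unit until it does. On a kubeadm-managed node that file is produced by kubeadm init/join; purely as a sketch of its shape (values illustrative, not read from this host):

  cat <<'EOF' >/var/lib/kubelet/config.yaml
  # sketch of a KubeletConfiguration; kubeadm normally generates this file
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd   # matches SystemdCgroup:true in the containerd CRI config dumped above
  staticPodPath: /etc/kubernetes/manifests
  EOF
  systemctl restart kubelet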
Jan 17 12:19:22.879060 sshd[1585]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:22.887729 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:60762.service: Deactivated successfully. Jan 17 12:19:22.890762 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:19:22.892605 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:19:22.910495 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:60770.service - OpenSSH per-connection server daemon (10.0.0.1:60770). Jan 17 12:19:22.911928 systemd-logind[1451]: Removed session 2. Jan 17 12:19:22.943013 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 60770 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:19:22.944564 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:22.949363 systemd-logind[1451]: New session 3 of user core. Jan 17 12:19:22.958939 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:19:23.012207 sshd[1592]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:23.025012 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:60770.service: Deactivated successfully. Jan 17 12:19:23.027104 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:19:23.028533 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:19:23.029935 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:60778.service - OpenSSH per-connection server daemon (10.0.0.1:60778). Jan 17 12:19:23.030804 systemd-logind[1451]: Removed session 3. Jan 17 12:19:23.070230 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 60778 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:19:23.072061 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:23.076703 systemd-logind[1451]: New session 4 of user core. Jan 17 12:19:23.088950 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:19:23.147457 sshd[1599]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:23.159071 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:60778.service: Deactivated successfully. Jan 17 12:19:23.161488 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:19:23.163402 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:19:23.173059 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:60790.service - OpenSSH per-connection server daemon (10.0.0.1:60790). Jan 17 12:19:23.174501 systemd-logind[1451]: Removed session 4. Jan 17 12:19:23.209272 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 60790 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:19:23.211383 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:23.215616 systemd-logind[1451]: New session 5 of user core. Jan 17 12:19:23.229947 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:19:23.291139 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:19:23.291494 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:19:23.722308 (dockerd)[1627]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:19:23.722315 systemd[1]: Starting docker.service - Docker Application Container Engine... 
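docker.service starts with DOCKER_OPTS and friends referenced but unset, which is harmless when no drop-in supplies them. Once the daemon in the next lines reports "API listen on /run/docker.sock", a quick, host-agnostic liveness check of that socket looks like:

  # ping the Docker Engine API over its unix socket (expects "OK")
  curl --unix-socket /run/docker.sock http://localhost/_ping; echo
  docker info --format '{{.ServerVersion}}'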
Jan 17 12:19:24.036935 dockerd[1627]: time="2025-01-17T12:19:24.036753537Z" level=info msg="Starting up" Jan 17 12:19:24.584461 dockerd[1627]: time="2025-01-17T12:19:24.584388057Z" level=info msg="Loading containers: start." Jan 17 12:19:24.742939 kernel: Initializing XFRM netlink socket Jan 17 12:19:24.836674 systemd-networkd[1402]: docker0: Link UP Jan 17 12:19:24.861402 dockerd[1627]: time="2025-01-17T12:19:24.861358075Z" level=info msg="Loading containers: done." Jan 17 12:19:24.877105 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1185066334-merged.mount: Deactivated successfully. Jan 17 12:19:24.880352 dockerd[1627]: time="2025-01-17T12:19:24.880281693Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:19:24.880475 dockerd[1627]: time="2025-01-17T12:19:24.880434410Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:19:24.880628 dockerd[1627]: time="2025-01-17T12:19:24.880595592Z" level=info msg="Daemon has completed initialization" Jan 17 12:19:24.925085 dockerd[1627]: time="2025-01-17T12:19:24.924993524Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:19:24.925278 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:19:25.911117 containerd[1474]: time="2025-01-17T12:19:25.911042077Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:19:26.752335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1699065861.mount: Deactivated successfully. Jan 17 12:19:28.653368 containerd[1474]: time="2025-01-17T12:19:28.653287436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:28.666344 containerd[1474]: time="2025-01-17T12:19:28.666275453Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140730" Jan 17 12:19:28.700489 containerd[1474]: time="2025-01-17T12:19:28.700437709Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:28.728018 containerd[1474]: time="2025-01-17T12:19:28.727883500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:28.729679 containerd[1474]: time="2025-01-17T12:19:28.729611160Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 2.818486608s" Jan 17 12:19:28.729679 containerd[1474]: time="2025-01-17T12:19:28.729683476Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 17 12:19:28.758520 containerd[1474]: time="2025-01-17T12:19:28.758475922Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 
12:19:31.234562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:19:31.294116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:31.483312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:31.513008 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:19:32.270405 kubelet[1852]: E0117 12:19:32.270194 1852 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:19:32.279090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:19:32.279323 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:19:32.363728 containerd[1474]: time="2025-01-17T12:19:32.363620194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:32.364769 containerd[1474]: time="2025-01-17T12:19:32.364672347Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216641" Jan 17 12:19:32.366360 containerd[1474]: time="2025-01-17T12:19:32.366313776Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:32.370370 containerd[1474]: time="2025-01-17T12:19:32.370319939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:32.372214 containerd[1474]: time="2025-01-17T12:19:32.372136024Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 3.613610961s" Jan 17 12:19:32.372214 containerd[1474]: time="2025-01-17T12:19:32.372214351Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 17 12:19:32.405282 containerd[1474]: time="2025-01-17T12:19:32.405230458Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:19:33.991806 containerd[1474]: time="2025-01-17T12:19:33.991710495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:34.027286 containerd[1474]: time="2025-01-17T12:19:34.027163762Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332841" Jan 17 12:19:34.030536 containerd[1474]: time="2025-01-17T12:19:34.030477877Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 17 12:19:34.034370 containerd[1474]: time="2025-01-17T12:19:34.034316206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:34.035550 containerd[1474]: time="2025-01-17T12:19:34.035507329Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.630226747s" Jan 17 12:19:34.035550 containerd[1474]: time="2025-01-17T12:19:34.035545080Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 17 12:19:34.063174 containerd[1474]: time="2025-01-17T12:19:34.063110826Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:19:34.997787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2722127450.mount: Deactivated successfully. Jan 17 12:19:35.683120 containerd[1474]: time="2025-01-17T12:19:35.683035292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:35.684110 containerd[1474]: time="2025-01-17T12:19:35.684006063Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 17 12:19:35.685637 containerd[1474]: time="2025-01-17T12:19:35.685588721Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:35.687796 containerd[1474]: time="2025-01-17T12:19:35.687736910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:35.688399 containerd[1474]: time="2025-01-17T12:19:35.688335141Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 1.62516799s" Jan 17 12:19:35.688399 containerd[1474]: time="2025-01-17T12:19:35.688388391Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:19:35.713814 containerd[1474]: time="2025-01-17T12:19:35.713670985Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:19:36.260096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328990021.mount: Deactivated successfully. 
Jan 17 12:19:38.256373 containerd[1474]: time="2025-01-17T12:19:38.256292508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:38.257981 containerd[1474]: time="2025-01-17T12:19:38.257894452Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:19:38.259808 containerd[1474]: time="2025-01-17T12:19:38.259733231Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:38.264138 containerd[1474]: time="2025-01-17T12:19:38.264054505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:38.265627 containerd[1474]: time="2025-01-17T12:19:38.265537446Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.551812038s" Jan 17 12:19:38.265627 containerd[1474]: time="2025-01-17T12:19:38.265614340Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:19:38.291653 containerd[1474]: time="2025-01-17T12:19:38.291591727Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:19:38.778710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3218683433.mount: Deactivated successfully. 
Jan 17 12:19:38.785399 containerd[1474]: time="2025-01-17T12:19:38.785312379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:38.786209 containerd[1474]: time="2025-01-17T12:19:38.786153457Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 17 12:19:38.787508 containerd[1474]: time="2025-01-17T12:19:38.787461630Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:38.790675 containerd[1474]: time="2025-01-17T12:19:38.790628809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:38.791795 containerd[1474]: time="2025-01-17T12:19:38.791726388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 500.081752ms" Jan 17 12:19:38.791795 containerd[1474]: time="2025-01-17T12:19:38.791771232Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:19:38.816694 containerd[1474]: time="2025-01-17T12:19:38.816637575Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:19:41.425519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1741653371.mount: Deactivated successfully. Jan 17 12:19:42.529530 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:19:42.538972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:42.896803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:42.902757 (kubelet)[2003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:19:43.106077 kubelet[2003]: E0117 12:19:43.105985 2003 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:19:43.111540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:19:43.111770 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
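At this point systemd has restarted kubelet.service again and it fails the same way while config.yaml is still absent. The unit's restart bookkeeping can be read back directly, e.g.:

  systemctl show kubelet -p Result -p NRestarts -p ExecMainStatus
  journalctl -u kubelet -n 20 --no-pager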
Jan 17 12:19:44.640296 containerd[1474]: time="2025-01-17T12:19:44.640215292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:44.641687 containerd[1474]: time="2025-01-17T12:19:44.641622060Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 17 12:19:44.643206 containerd[1474]: time="2025-01-17T12:19:44.643163211Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:44.646570 containerd[1474]: time="2025-01-17T12:19:44.646519435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:44.647766 containerd[1474]: time="2025-01-17T12:19:44.647702964Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.830808848s" Jan 17 12:19:44.647766 containerd[1474]: time="2025-01-17T12:19:44.647746897Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 17 12:19:46.912481 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:46.921995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:46.942473 systemd[1]: Reloading requested from client PID 2097 ('systemctl') (unit session-5.scope)... Jan 17 12:19:46.942494 systemd[1]: Reloading... Jan 17 12:19:47.028811 zram_generator::config[2137]: No configuration found. Jan 17 12:19:47.527760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:19:47.622337 systemd[1]: Reloading finished in 679 ms. Jan 17 12:19:47.681982 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:47.687659 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:19:47.687976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:47.707146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:47.864053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:47.870198 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:19:47.915726 kubelet[2187]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:19:47.915726 kubelet[2187]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
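The daemon-reload above also flags docker.socket line 6 for still using the legacy /var/run/docker.sock path. Because the vendor unit lives under /usr, the usual fix is a drop-in rather than an edit; a sketch using standard systemd override mechanics:

  mkdir -p /etc/systemd/system/docker.socket.d
  cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-runpath.conf
  [Socket]
  # an empty assignment clears the inherited list, then the modern path is re-added
  ListenStream=
  ListenStream=/run/docker.sock
  EOF
  systemctl daemon-reload
  systemctl restart docker.socket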
Jan 17 12:19:47.915726 kubelet[2187]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:19:47.916202 kubelet[2187]: I0117 12:19:47.915819 2187 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:19:48.245976 kubelet[2187]: I0117 12:19:48.245634 2187 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:19:48.245976 kubelet[2187]: I0117 12:19:48.245679 2187 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:19:48.247724 kubelet[2187]: I0117 12:19:48.246623 2187 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:19:48.270364 kubelet[2187]: E0117 12:19:48.270290 2187 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:48.271265 kubelet[2187]: I0117 12:19:48.271192 2187 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:19:48.285773 kubelet[2187]: I0117 12:19:48.285681 2187 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:19:48.287014 kubelet[2187]: I0117 12:19:48.286969 2187 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:19:48.287200 kubelet[2187]: I0117 12:19:48.287168 2187 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:19:48.287683 kubelet[2187]: I0117 12:19:48.287642 2187 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:19:48.287683 kubelet[2187]: I0117 12:19:48.287668 2187 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 
12:19:48.287903 kubelet[2187]: I0117 12:19:48.287871 2187 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:48.288050 kubelet[2187]: I0117 12:19:48.288010 2187 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:19:48.288133 kubelet[2187]: I0117 12:19:48.288069 2187 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:19:48.288133 kubelet[2187]: I0117 12:19:48.288114 2187 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:19:48.288244 kubelet[2187]: I0117 12:19:48.288144 2187 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:19:48.289863 kubelet[2187]: I0117 12:19:48.289834 2187 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:19:48.290115 kubelet[2187]: W0117 12:19:48.290068 2187 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:48.290156 kubelet[2187]: E0117 12:19:48.290119 2187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:48.290248 kubelet[2187]: W0117 12:19:48.290192 2187 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:48.290248 kubelet[2187]: E0117 12:19:48.290245 2187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:48.293692 kubelet[2187]: I0117 12:19:48.293429 2187 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:19:48.295952 kubelet[2187]: W0117 12:19:48.294754 2187 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 17 12:19:48.295952 kubelet[2187]: I0117 12:19:48.295558 2187 server.go:1256] "Started kubelet" Jan 17 12:19:48.296028 kubelet[2187]: I0117 12:19:48.295942 2187 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:19:48.297703 kubelet[2187]: I0117 12:19:48.297675 2187 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:19:48.298237 kubelet[2187]: I0117 12:19:48.298204 2187 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:19:48.299611 kubelet[2187]: I0117 12:19:48.298734 2187 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:19:48.299611 kubelet[2187]: I0117 12:19:48.298863 2187 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:19:48.299611 kubelet[2187]: I0117 12:19:48.299067 2187 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:19:48.300722 kubelet[2187]: W0117 12:19:48.300645 2187 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:48.300722 kubelet[2187]: E0117 12:19:48.300720 2187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:48.300941 kubelet[2187]: E0117 12:19:48.300916 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="200ms" Jan 17 12:19:48.304869 kubelet[2187]: I0117 12:19:48.304842 2187 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:19:48.305433 kubelet[2187]: I0117 12:19:48.304996 2187 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:19:48.305433 kubelet[2187]: I0117 12:19:48.305158 2187 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:19:48.307009 kubelet[2187]: E0117 12:19:48.306985 2187 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b7a26c9dd4a68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:19:48.295527016 +0000 UTC m=+0.420423164,LastTimestamp:2025-01-17 12:19:48.295527016 +0000 UTC m=+0.420423164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 12:19:48.307298 kubelet[2187]: I0117 12:19:48.307271 2187 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:19:48.307298 kubelet[2187]: I0117 12:19:48.307293 2187 factory.go:221] Registration 
of the systemd container factory successfully Jan 17 12:19:48.309633 kubelet[2187]: E0117 12:19:48.309604 2187 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:19:48.324720 kubelet[2187]: I0117 12:19:48.324679 2187 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:19:48.324720 kubelet[2187]: I0117 12:19:48.324711 2187 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:19:48.324902 kubelet[2187]: I0117 12:19:48.324742 2187 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:48.325686 kubelet[2187]: I0117 12:19:48.325656 2187 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:19:48.327109 kubelet[2187]: I0117 12:19:48.327073 2187 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:19:48.327162 kubelet[2187]: I0117 12:19:48.327133 2187 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:19:48.327162 kubelet[2187]: I0117 12:19:48.327157 2187 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:19:48.327255 kubelet[2187]: E0117 12:19:48.327230 2187 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:19:48.328504 kubelet[2187]: W0117 12:19:48.327864 2187 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:48.328504 kubelet[2187]: E0117 12:19:48.327932 2187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:48.400455 kubelet[2187]: I0117 12:19:48.400386 2187 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:19:48.401095 kubelet[2187]: E0117 12:19:48.401036 2187 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jan 17 12:19:48.428182 kubelet[2187]: E0117 12:19:48.428144 2187 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:19:48.502399 kubelet[2187]: E0117 12:19:48.502262 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="400ms" Jan 17 12:19:48.578699 kubelet[2187]: I0117 12:19:48.578614 2187 policy_none.go:49] "None policy: Start" Jan 17 12:19:48.579844 kubelet[2187]: I0117 12:19:48.579820 2187 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:19:48.579844 kubelet[2187]: I0117 12:19:48.579852 2187 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:19:48.588540 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
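
The "Failed to ensure lease exists, will retry" entries double their retry interval on each failure (200ms and 400ms here, 800ms and 1.6s further down). A rough sketch of that doubling backoff, assuming a cap; the failing ensureLease below is a stand-in, not the kubelet's own lease controller.

```go
// lease_backoff.go: rough sketch of the doubling retry interval visible in
// the lease-controller errors (200ms -> 400ms -> 800ms -> 1.6s). The cap and
// the failing operation are assumptions for illustration only.
package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the real API call, which fails while the
// apiserver is unreachable.
func ensureLease() error {
	return errors.New("dial tcp 10.0.0.133:6443: connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed upper bound for this sketch

	for attempt := 1; attempt <= 5; attempt++ {
		err := ensureLease()
		if err == nil {
			fmt.Println("lease ensured")
			return
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", attempt, err, interval)
		time.Sleep(interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```
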
Jan 17 12:19:48.602535 kubelet[2187]: I0117 12:19:48.602491 2187 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:19:48.603043 kubelet[2187]: E0117 12:19:48.603000 2187 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jan 17 12:19:48.604939 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:19:48.608169 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 12:19:48.619046 kubelet[2187]: I0117 12:19:48.618991 2187 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:19:48.619502 kubelet[2187]: I0117 12:19:48.619410 2187 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:19:48.620841 kubelet[2187]: E0117 12:19:48.620816 2187 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 12:19:48.629094 kubelet[2187]: I0117 12:19:48.629040 2187 topology_manager.go:215] "Topology Admit Handler" podUID="ef144d1b5a7ba9ebaa862ab9f6e1d0b4" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:19:48.630951 kubelet[2187]: I0117 12:19:48.630918 2187 topology_manager.go:215] "Topology Admit Handler" podUID="dd466de870bdf0e573d7965dbd759acf" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:19:48.632570 kubelet[2187]: I0117 12:19:48.632529 2187 topology_manager.go:215] "Topology Admit Handler" podUID="605dd245551545e29d4e79fb03fd341e" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:19:48.640090 systemd[1]: Created slice kubepods-burstable-podef144d1b5a7ba9ebaa862ab9f6e1d0b4.slice - libcontainer container kubepods-burstable-podef144d1b5a7ba9ebaa862ab9f6e1d0b4.slice. Jan 17 12:19:48.663095 systemd[1]: Created slice kubepods-burstable-poddd466de870bdf0e573d7965dbd759acf.slice - libcontainer container kubepods-burstable-poddd466de870bdf0e573d7965dbd759acf.slice. Jan 17 12:19:48.678126 systemd[1]: Created slice kubepods-burstable-pod605dd245551545e29d4e79fb03fd341e.slice - libcontainer container kubepods-burstable-pod605dd245551545e29d4e79fb03fd341e.slice. 
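
Each static pod admitted above gets a per-pod cgroup slice, and the systemd unit names follow directly from the pod's QoS class and UID, with dashes in the UID replaced by underscores. The helper below only mirrors the names visible in this log; it is not the kubelet's own code.

```go
// pod_slice_name.go: derive the kubepods-<qos>-pod<uid>.slice names seen in
// the "Created slice" lines above from a pod's QoS class and UID.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UIDs taken from the Topology Admit Handler entries above.
	fmt.Println(podSliceName("burstable", "ef144d1b5a7ba9ebaa862ab9f6e1d0b4"))  // kube-apiserver-localhost
	fmt.Println(podSliceName("burstable", "dd466de870bdf0e573d7965dbd759acf")) // kube-controller-manager-localhost
	fmt.Println(podSliceName("burstable", "605dd245551545e29d4e79fb03fd341e")) // kube-scheduler-localhost
}
```
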
Jan 17 12:19:48.800493 kubelet[2187]: I0117 12:19:48.800439 2187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef144d1b5a7ba9ebaa862ab9f6e1d0b4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef144d1b5a7ba9ebaa862ab9f6e1d0b4\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:19:48.800493 kubelet[2187]: I0117 12:19:48.800511 2187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:48.800683 kubelet[2187]: I0117 12:19:48.800552 2187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:48.800751 kubelet[2187]: I0117 12:19:48.800671 2187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:48.800811 kubelet[2187]: I0117 12:19:48.800762 2187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef144d1b5a7ba9ebaa862ab9f6e1d0b4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef144d1b5a7ba9ebaa862ab9f6e1d0b4\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:19:48.800843 kubelet[2187]: I0117 12:19:48.800827 2187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef144d1b5a7ba9ebaa862ab9f6e1d0b4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ef144d1b5a7ba9ebaa862ab9f6e1d0b4\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:19:48.800881 kubelet[2187]: I0117 12:19:48.800869 2187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:48.800915 kubelet[2187]: I0117 12:19:48.800904 2187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:48.800951 kubelet[2187]: I0117 12:19:48.800935 2187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605dd245551545e29d4e79fb03fd341e-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"605dd245551545e29d4e79fb03fd341e\") " 
pod="kube-system/kube-scheduler-localhost" Jan 17 12:19:48.902961 kubelet[2187]: E0117 12:19:48.902898 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="800ms" Jan 17 12:19:48.961414 kubelet[2187]: E0117 12:19:48.961331 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:48.962284 containerd[1474]: time="2025-01-17T12:19:48.962228790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ef144d1b5a7ba9ebaa862ab9f6e1d0b4,Namespace:kube-system,Attempt:0,}" Jan 17 12:19:48.976618 kubelet[2187]: E0117 12:19:48.976551 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:48.977264 containerd[1474]: time="2025-01-17T12:19:48.977210095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd466de870bdf0e573d7965dbd759acf,Namespace:kube-system,Attempt:0,}" Jan 17 12:19:48.981488 kubelet[2187]: E0117 12:19:48.981449 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:48.982061 containerd[1474]: time="2025-01-17T12:19:48.981994458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:605dd245551545e29d4e79fb03fd341e,Namespace:kube-system,Attempt:0,}" Jan 17 12:19:49.005113 kubelet[2187]: I0117 12:19:49.005071 2187 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:19:49.005626 kubelet[2187]: E0117 12:19:49.005575 2187 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jan 17 12:19:49.117895 kubelet[2187]: W0117 12:19:49.117758 2187 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:49.117895 kubelet[2187]: E0117 12:19:49.117826 2187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:49.196439 kubelet[2187]: W0117 12:19:49.196380 2187 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:49.196439 kubelet[2187]: E0117 12:19:49.196442 2187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:49.491017 kubelet[2187]: W0117 
12:19:49.490853 2187 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:49.491017 kubelet[2187]: E0117 12:19:49.490932 2187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:49.604117 kubelet[2187]: W0117 12:19:49.604048 2187 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:49.604117 kubelet[2187]: E0117 12:19:49.604107 2187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jan 17 12:19:49.686147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527070565.mount: Deactivated successfully. Jan 17 12:19:49.694969 containerd[1474]: time="2025-01-17T12:19:49.694891114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:49.696885 containerd[1474]: time="2025-01-17T12:19:49.696840930Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:49.699513 containerd[1474]: time="2025-01-17T12:19:49.699450734Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:19:49.701284 containerd[1474]: time="2025-01-17T12:19:49.701230793Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:49.703646 kubelet[2187]: E0117 12:19:49.703602 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="1.6s" Jan 17 12:19:49.703745 containerd[1474]: time="2025-01-17T12:19:49.703570440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:19:49.706148 containerd[1474]: time="2025-01-17T12:19:49.706097700Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:19:49.709050 containerd[1474]: time="2025-01-17T12:19:49.708954097Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:49.712108 containerd[1474]: time="2025-01-17T12:19:49.712075230Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 734.773523ms" Jan 17 12:19:49.713549 containerd[1474]: time="2025-01-17T12:19:49.713475376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:49.716706 containerd[1474]: time="2025-01-17T12:19:49.716630423Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 734.531098ms" Jan 17 12:19:49.722742 containerd[1474]: time="2025-01-17T12:19:49.722570172Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 760.245913ms" Jan 17 12:19:49.807389 kubelet[2187]: I0117 12:19:49.807348 2187 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:19:49.807774 kubelet[2187]: E0117 12:19:49.807692 2187 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jan 17 12:19:49.829657 containerd[1474]: time="2025-01-17T12:19:49.829316990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:49.829657 containerd[1474]: time="2025-01-17T12:19:49.829376682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:49.829657 containerd[1474]: time="2025-01-17T12:19:49.829387322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:49.829657 containerd[1474]: time="2025-01-17T12:19:49.829532134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:49.832804 containerd[1474]: time="2025-01-17T12:19:49.832483518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:49.832804 containerd[1474]: time="2025-01-17T12:19:49.832601800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:49.832804 containerd[1474]: time="2025-01-17T12:19:49.832613923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:49.832804 containerd[1474]: time="2025-01-17T12:19:49.832700265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:49.834759 containerd[1474]: time="2025-01-17T12:19:49.834557798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:49.835616 containerd[1474]: time="2025-01-17T12:19:49.835406620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:49.835616 containerd[1474]: time="2025-01-17T12:19:49.835431908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:49.835616 containerd[1474]: time="2025-01-17T12:19:49.835527707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:49.858053 systemd[1]: Started cri-containerd-58e4a8ca31bd19a25c45b1ff6c1d140e7fa8738f452dbe71def033727def13bd.scope - libcontainer container 58e4a8ca31bd19a25c45b1ff6c1d140e7fa8738f452dbe71def033727def13bd. Jan 17 12:19:49.862753 systemd[1]: Started cri-containerd-34681d897f605d409ad5c4aeb416814710a9980c37f1ef18c486e5b143c1e1fb.scope - libcontainer container 34681d897f605d409ad5c4aeb416814710a9980c37f1ef18c486e5b143c1e1fb. Jan 17 12:19:49.864984 systemd[1]: Started cri-containerd-c6207582c0f61b49c59d410e55fa8ad475169da6e06678e2c8fdd2b279d51093.scope - libcontainer container c6207582c0f61b49c59d410e55fa8ad475169da6e06678e2c8fdd2b279d51093. Jan 17 12:19:49.899089 containerd[1474]: time="2025-01-17T12:19:49.899026380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd466de870bdf0e573d7965dbd759acf,Namespace:kube-system,Attempt:0,} returns sandbox id \"58e4a8ca31bd19a25c45b1ff6c1d140e7fa8738f452dbe71def033727def13bd\"" Jan 17 12:19:49.900708 kubelet[2187]: E0117 12:19:49.900675 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:49.905578 containerd[1474]: time="2025-01-17T12:19:49.905536860Z" level=info msg="CreateContainer within sandbox \"58e4a8ca31bd19a25c45b1ff6c1d140e7fa8738f452dbe71def033727def13bd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:19:49.912820 containerd[1474]: time="2025-01-17T12:19:49.911821786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:605dd245551545e29d4e79fb03fd341e,Namespace:kube-system,Attempt:0,} returns sandbox id \"34681d897f605d409ad5c4aeb416814710a9980c37f1ef18c486e5b143c1e1fb\"" Jan 17 12:19:49.912939 kubelet[2187]: E0117 12:19:49.912856 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:49.914884 containerd[1474]: time="2025-01-17T12:19:49.914752693Z" level=info msg="CreateContainer within sandbox \"34681d897f605d409ad5c4aeb416814710a9980c37f1ef18c486e5b143c1e1fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:19:49.916712 containerd[1474]: time="2025-01-17T12:19:49.916069893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ef144d1b5a7ba9ebaa862ab9f6e1d0b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6207582c0f61b49c59d410e55fa8ad475169da6e06678e2c8fdd2b279d51093\"" Jan 17 
12:19:49.918640 kubelet[2187]: E0117 12:19:49.918609 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:49.921479 containerd[1474]: time="2025-01-17T12:19:49.921426378Z" level=info msg="CreateContainer within sandbox \"c6207582c0f61b49c59d410e55fa8ad475169da6e06678e2c8fdd2b279d51093\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:19:49.931463 containerd[1474]: time="2025-01-17T12:19:49.931404821Z" level=info msg="CreateContainer within sandbox \"58e4a8ca31bd19a25c45b1ff6c1d140e7fa8738f452dbe71def033727def13bd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"170586e92ac7d607a8b2c37487c06c96d8fb14f4868a988f2f81ca73206191ed\"" Jan 17 12:19:49.932068 containerd[1474]: time="2025-01-17T12:19:49.932036165Z" level=info msg="StartContainer for \"170586e92ac7d607a8b2c37487c06c96d8fb14f4868a988f2f81ca73206191ed\"" Jan 17 12:19:49.946158 containerd[1474]: time="2025-01-17T12:19:49.946103306Z" level=info msg="CreateContainer within sandbox \"34681d897f605d409ad5c4aeb416814710a9980c37f1ef18c486e5b143c1e1fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d43779277779452a15bc6f5748274daa1d84fc604f979dad8f9becdda0c0a93e\"" Jan 17 12:19:49.946643 containerd[1474]: time="2025-01-17T12:19:49.946610307Z" level=info msg="StartContainer for \"d43779277779452a15bc6f5748274daa1d84fc604f979dad8f9becdda0c0a93e\"" Jan 17 12:19:49.950083 containerd[1474]: time="2025-01-17T12:19:49.949937166Z" level=info msg="CreateContainer within sandbox \"c6207582c0f61b49c59d410e55fa8ad475169da6e06678e2c8fdd2b279d51093\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ca464607b7793f3cba5049a89771de6a8d9a6c8fa5aebc9e4fa638e3ef481dc6\"" Jan 17 12:19:49.950662 containerd[1474]: time="2025-01-17T12:19:49.950606331Z" level=info msg="StartContainer for \"ca464607b7793f3cba5049a89771de6a8d9a6c8fa5aebc9e4fa638e3ef481dc6\"" Jan 17 12:19:49.967267 systemd[1]: Started cri-containerd-170586e92ac7d607a8b2c37487c06c96d8fb14f4868a988f2f81ca73206191ed.scope - libcontainer container 170586e92ac7d607a8b2c37487c06c96d8fb14f4868a988f2f81ca73206191ed. Jan 17 12:19:49.984029 systemd[1]: Started cri-containerd-d43779277779452a15bc6f5748274daa1d84fc604f979dad8f9becdda0c0a93e.scope - libcontainer container d43779277779452a15bc6f5748274daa1d84fc604f979dad8f9becdda0c0a93e. Jan 17 12:19:49.987612 systemd[1]: Started cri-containerd-ca464607b7793f3cba5049a89771de6a8d9a6c8fa5aebc9e4fa638e3ef481dc6.scope - libcontainer container ca464607b7793f3cba5049a89771de6a8d9a6c8fa5aebc9e4fa638e3ef481dc6. 
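
The containerd entries above and the StartContainer results just below follow the usual CRI ordering: RunPodSandbox, then CreateContainer inside the returned sandbox, then StartContainer. The RuntimeService interface in this sketch is a hypothetical simplification of that flow, not the real k8s.io/cri-api client.

```go
// cri_sequence.go: schematic of the call order recorded in the log,
// RunPodSandbox -> CreateContainer -> StartContainer. RuntimeService is a
// hypothetical stand-in, not the actual CRI gRPC interface.
package main

import "fmt"

type RuntimeService interface {
	RunPodSandbox(podName string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

func startStaticPod(rt RuntimeService, podName string) error {
	sandboxID, err := rt.RunPodSandbox(podName)
	if err != nil {
		return fmt.Errorf("RunPodSandbox %s: %w", podName, err)
	}
	containerID, err := rt.CreateContainer(sandboxID, podName)
	if err != nil {
		return fmt.Errorf("CreateContainer in %s: %w", sandboxID, err)
	}
	return rt.StartContainer(containerID)
}

// fakeRuntime lets the sketch run standalone.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}
func (f *fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
	return sandboxID + "/" + name, nil
}
func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Println("started", id)
	return nil
}

func main() {
	rt := &fakeRuntime{}
	for _, pod := range []string{
		"kube-apiserver-localhost",
		"kube-controller-manager-localhost",
		"kube-scheduler-localhost",
	} {
		if err := startStaticPod(rt, pod); err != nil {
			fmt.Println("error:", err)
		}
	}
}
```
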
Jan 17 12:19:50.026370 containerd[1474]: time="2025-01-17T12:19:50.025518482Z" level=info msg="StartContainer for \"170586e92ac7d607a8b2c37487c06c96d8fb14f4868a988f2f81ca73206191ed\" returns successfully" Jan 17 12:19:50.036081 containerd[1474]: time="2025-01-17T12:19:50.035628543Z" level=info msg="StartContainer for \"d43779277779452a15bc6f5748274daa1d84fc604f979dad8f9becdda0c0a93e\" returns successfully" Jan 17 12:19:50.042361 containerd[1474]: time="2025-01-17T12:19:50.042330246Z" level=info msg="StartContainer for \"ca464607b7793f3cba5049a89771de6a8d9a6c8fa5aebc9e4fa638e3ef481dc6\" returns successfully" Jan 17 12:19:50.338802 kubelet[2187]: E0117 12:19:50.338736 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:50.344410 kubelet[2187]: E0117 12:19:50.344372 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:50.345221 kubelet[2187]: E0117 12:19:50.345191 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:51.307936 kubelet[2187]: E0117 12:19:51.307864 2187 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 12:19:51.345958 kubelet[2187]: E0117 12:19:51.345925 2187 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 12:19:51.346435 kubelet[2187]: E0117 12:19:51.346422 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:51.346979 kubelet[2187]: E0117 12:19:51.346962 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:51.409343 kubelet[2187]: I0117 12:19:51.409290 2187 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:19:51.416853 kubelet[2187]: I0117 12:19:51.416822 2187 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:19:51.423493 kubelet[2187]: E0117 12:19:51.423453 2187 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:19:51.524035 kubelet[2187]: E0117 12:19:51.523974 2187 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:19:51.624950 kubelet[2187]: E0117 12:19:51.624745 2187 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:19:51.725462 kubelet[2187]: E0117 12:19:51.725370 2187 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:19:51.825859 kubelet[2187]: E0117 12:19:51.825750 2187 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:19:51.926378 kubelet[2187]: E0117 12:19:51.926191 2187 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 
12:19:52.026864 kubelet[2187]: E0117 12:19:52.026807 2187 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:19:52.089050 kubelet[2187]: E0117 12:19:52.088994 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:52.127071 kubelet[2187]: E0117 12:19:52.126990 2187 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:19:52.227963 kubelet[2187]: E0117 12:19:52.227766 2187 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:19:52.328755 kubelet[2187]: E0117 12:19:52.328687 2187 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:19:52.429058 kubelet[2187]: E0117 12:19:52.428993 2187 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:19:53.293674 kubelet[2187]: I0117 12:19:53.293587 2187 apiserver.go:52] "Watching apiserver" Jan 17 12:19:53.299389 kubelet[2187]: I0117 12:19:53.299346 2187 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:19:54.097403 systemd[1]: Reloading requested from client PID 2463 ('systemctl') (unit session-5.scope)... Jan 17 12:19:54.097426 systemd[1]: Reloading... Jan 17 12:19:54.197946 zram_generator::config[2508]: No configuration found. Jan 17 12:19:54.293368 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:19:54.394887 systemd[1]: Reloading finished in 296 ms. Jan 17 12:19:54.446240 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:54.458733 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:19:54.459131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:54.473104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:54.638227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:54.645003 (kubelet)[2547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:19:54.702516 kubelet[2547]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:19:54.702516 kubelet[2547]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:19:54.702516 kubelet[2547]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
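
The deprecation warnings above point at the file passed to --config. Below is a hedged sketch of such a KubeletConfiguration, printed as JSON (the kubelet accepts JSON as well as YAML for this file). The staticPodPath, cgroupDriver and volumePluginDir values are taken from entries elsewhere in this log; the containerRuntimeEndpoint key and its socket path are assumptions to verify against the v1.29 KubeletConfiguration reference.

```go
// kubelet_config_sketch.go: print a sketch of the config file the deprecated
// flags should move into. Field names marked as assumptions below are not
// confirmed by this log.
package main

import (
	"encoding/json"
	"fmt"
)

type kubeletConfigSketch struct {
	APIVersion               string `json:"apiVersion"`
	Kind                     string `json:"kind"`
	StaticPodPath            string `json:"staticPodPath"`
	CgroupDriver             string `json:"cgroupDriver"`
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"` // assumed key
	VolumePluginDir          string `json:"volumePluginDir"`
}

func main() {
	cfg := kubeletConfigSketch{
		APIVersion:               "kubelet.config.k8s.io/v1beta1",
		Kind:                     "KubeletConfiguration",
		StaticPodPath:            "/etc/kubernetes/manifests", // "Adding static pod path" above
		CgroupDriver:             "systemd",                   // CgroupDriver in the nodeConfig dump below
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",               // assumed socket path
		VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/", // Flexvolume dir above
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```
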
Jan 17 12:19:54.703059 kubelet[2547]: I0117 12:19:54.702450 2547 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:19:54.708041 kubelet[2547]: I0117 12:19:54.707990 2547 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:19:54.708041 kubelet[2547]: I0117 12:19:54.708030 2547 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:19:54.708398 kubelet[2547]: I0117 12:19:54.708370 2547 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:19:54.710190 kubelet[2547]: I0117 12:19:54.710123 2547 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:19:54.712971 kubelet[2547]: I0117 12:19:54.712578 2547 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:19:54.722958 kubelet[2547]: I0117 12:19:54.722912 2547 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:19:54.723234 kubelet[2547]: I0117 12:19:54.723206 2547 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:19:54.723415 kubelet[2547]: I0117 12:19:54.723386 2547 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:19:54.723488 kubelet[2547]: I0117 12:19:54.723418 2547 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:19:54.723488 kubelet[2547]: I0117 12:19:54.723428 2547 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:19:54.723488 kubelet[2547]: I0117 12:19:54.723462 2547 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:54.723586 kubelet[2547]: I0117 12:19:54.723567 2547 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:19:54.723619 kubelet[2547]: I0117 12:19:54.723588 2547 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:19:54.723640 kubelet[2547]: I0117 12:19:54.723623 2547 kubelet.go:312] "Adding apiserver pod source" Jan 17 
12:19:54.723673 kubelet[2547]: I0117 12:19:54.723642 2547 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:19:54.725440 kubelet[2547]: I0117 12:19:54.725422 2547 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:19:54.726382 kubelet[2547]: I0117 12:19:54.725976 2547 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:19:54.728065 kubelet[2547]: I0117 12:19:54.726613 2547 server.go:1256] "Started kubelet" Jan 17 12:19:54.728530 kubelet[2547]: I0117 12:19:54.728500 2547 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:19:54.729307 kubelet[2547]: I0117 12:19:54.729279 2547 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:19:54.729426 kubelet[2547]: I0117 12:19:54.729403 2547 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:19:54.729506 kubelet[2547]: I0117 12:19:54.729482 2547 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:19:54.731397 kubelet[2547]: I0117 12:19:54.731363 2547 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:19:54.737545 kubelet[2547]: E0117 12:19:54.737463 2547 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:19:54.737713 kubelet[2547]: I0117 12:19:54.737660 2547 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:19:54.738608 kubelet[2547]: I0117 12:19:54.737795 2547 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:19:54.738608 kubelet[2547]: I0117 12:19:54.737939 2547 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:19:54.738608 kubelet[2547]: I0117 12:19:54.738116 2547 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:19:54.738608 kubelet[2547]: I0117 12:19:54.738206 2547 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:19:54.745490 kubelet[2547]: I0117 12:19:54.745442 2547 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:19:54.753340 kubelet[2547]: I0117 12:19:54.753103 2547 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:19:54.758471 kubelet[2547]: I0117 12:19:54.757466 2547 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:19:54.758471 kubelet[2547]: I0117 12:19:54.757511 2547 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:19:54.758471 kubelet[2547]: I0117 12:19:54.757542 2547 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:19:54.758471 kubelet[2547]: E0117 12:19:54.757654 2547 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:19:54.789249 kubelet[2547]: I0117 12:19:54.789197 2547 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:19:54.789249 kubelet[2547]: I0117 12:19:54.789224 2547 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:19:54.789249 kubelet[2547]: I0117 12:19:54.789241 2547 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:54.789467 kubelet[2547]: I0117 12:19:54.789411 2547 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:19:54.789467 kubelet[2547]: I0117 12:19:54.789432 2547 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:19:54.789467 kubelet[2547]: I0117 12:19:54.789439 2547 policy_none.go:49] "None policy: Start" Jan 17 12:19:54.789975 kubelet[2547]: I0117 12:19:54.789955 2547 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:19:54.790023 kubelet[2547]: I0117 12:19:54.789980 2547 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:19:54.790131 kubelet[2547]: I0117 12:19:54.790110 2547 state_mem.go:75] "Updated machine memory state" Jan 17 12:19:54.794824 kubelet[2547]: I0117 12:19:54.794799 2547 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:19:54.795439 kubelet[2547]: I0117 12:19:54.795321 2547 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:19:54.858185 kubelet[2547]: I0117 12:19:54.858133 2547 topology_manager.go:215] "Topology Admit Handler" podUID="ef144d1b5a7ba9ebaa862ab9f6e1d0b4" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:19:54.858363 kubelet[2547]: I0117 12:19:54.858250 2547 topology_manager.go:215] "Topology Admit Handler" podUID="dd466de870bdf0e573d7965dbd759acf" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:19:54.858363 kubelet[2547]: I0117 12:19:54.858295 2547 topology_manager.go:215] "Topology Admit Handler" podUID="605dd245551545e29d4e79fb03fd341e" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:19:54.904149 kubelet[2547]: I0117 12:19:54.904106 2547 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:19:54.932716 kubelet[2547]: I0117 12:19:54.932656 2547 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 17 12:19:54.932920 kubelet[2547]: I0117 12:19:54.932851 2547 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:19:54.939400 kubelet[2547]: I0117 12:19:54.939351 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:54.939400 kubelet[2547]: I0117 12:19:54.939410 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:54.939624 kubelet[2547]: I0117 12:19:54.939443 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef144d1b5a7ba9ebaa862ab9f6e1d0b4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef144d1b5a7ba9ebaa862ab9f6e1d0b4\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:19:54.939624 kubelet[2547]: I0117 12:19:54.939472 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef144d1b5a7ba9ebaa862ab9f6e1d0b4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef144d1b5a7ba9ebaa862ab9f6e1d0b4\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:19:54.939624 kubelet[2547]: I0117 12:19:54.939501 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:54.939624 kubelet[2547]: I0117 12:19:54.939526 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:54.939624 kubelet[2547]: I0117 12:19:54.939553 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:19:54.939741 kubelet[2547]: I0117 12:19:54.939582 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605dd245551545e29d4e79fb03fd341e-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"605dd245551545e29d4e79fb03fd341e\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:19:54.939741 kubelet[2547]: I0117 12:19:54.939613 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef144d1b5a7ba9ebaa862ab9f6e1d0b4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ef144d1b5a7ba9ebaa862ab9f6e1d0b4\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:19:55.194143 kubelet[2547]: E0117 12:19:55.194101 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:55.201067 kubelet[2547]: E0117 12:19:55.201013 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:55.201489 kubelet[2547]: E0117 
12:19:55.201464 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:55.574066 sudo[1609]: pam_unix(sudo:session): session closed for user root Jan 17 12:19:55.576643 sshd[1606]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:55.582914 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:60790.service: Deactivated successfully. Jan 17 12:19:55.585195 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:19:55.585408 systemd[1]: session-5.scope: Consumed 4.177s CPU time, 192.8M memory peak, 0B memory swap peak. Jan 17 12:19:55.586005 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:19:55.587143 systemd-logind[1451]: Removed session 5. Jan 17 12:19:55.725502 kubelet[2547]: I0117 12:19:55.725428 2547 apiserver.go:52] "Watching apiserver" Jan 17 12:19:55.738897 kubelet[2547]: I0117 12:19:55.738861 2547 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:19:55.772536 kubelet[2547]: E0117 12:19:55.772492 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:55.772733 kubelet[2547]: E0117 12:19:55.772608 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:55.772840 kubelet[2547]: E0117 12:19:55.772815 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:55.795520 kubelet[2547]: I0117 12:19:55.795467 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.792508689 podStartE2EDuration="1.792508689s" podCreationTimestamp="2025-01-17 12:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:55.792479884 +0000 UTC m=+1.142415541" watchObservedRunningTime="2025-01-17 12:19:55.792508689 +0000 UTC m=+1.142444346" Jan 17 12:19:55.800344 kubelet[2547]: I0117 12:19:55.800288 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.800233374 podStartE2EDuration="1.800233374s" podCreationTimestamp="2025-01-17 12:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:55.800201854 +0000 UTC m=+1.150137511" watchObservedRunningTime="2025-01-17 12:19:55.800233374 +0000 UTC m=+1.150169031" Jan 17 12:19:55.808003 kubelet[2547]: I0117 12:19:55.807963 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.807911351 podStartE2EDuration="1.807911351s" podCreationTimestamp="2025-01-17 12:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:55.807739291 +0000 UTC m=+1.157674968" watchObservedRunningTime="2025-01-17 12:19:55.807911351 +0000 UTC m=+1.157847008" Jan 17 12:19:56.774352 kubelet[2547]: E0117 12:19:56.774301 
2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:19:59.758400 kubelet[2547]: E0117 12:19:59.758337 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:01.769926 kubelet[2547]: E0117 12:20:01.769745 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:01.781546 kubelet[2547]: E0117 12:20:01.781466 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:02.530104 update_engine[1454]: I20250117 12:20:02.529987 1454 update_attempter.cc:509] Updating boot flags... Jan 17 12:20:02.563927 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2623) Jan 17 12:20:02.607846 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2626) Jan 17 12:20:02.642833 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2626) Jan 17 12:20:03.395215 kubelet[2547]: E0117 12:20:03.395164 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:03.785124 kubelet[2547]: E0117 12:20:03.784980 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:06.176743 kubelet[2547]: I0117 12:20:06.176700 2547 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:20:06.177282 containerd[1474]: time="2025-01-17T12:20:06.177169674Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:20:06.177589 kubelet[2547]: I0117 12:20:06.177379 2547 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:20:07.171410 kubelet[2547]: I0117 12:20:07.171348 2547 topology_manager.go:215] "Topology Admit Handler" podUID="9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1" podNamespace="kube-flannel" podName="kube-flannel-ds-67bnz" Jan 17 12:20:07.178223 systemd[1]: Created slice kubepods-burstable-pod9fa2cd8c_2d95_4fa7_bed8_ef9f318358e1.slice - libcontainer container kubepods-burstable-pod9fa2cd8c_2d95_4fa7_bed8_ef9f318358e1.slice. Jan 17 12:20:07.190539 kubelet[2547]: I0117 12:20:07.190236 2547 topology_manager.go:215] "Topology Admit Handler" podUID="d62c5911-eae6-4eee-9954-3fcb302466b9" podNamespace="kube-system" podName="kube-proxy-jj5b4" Jan 17 12:20:07.199817 systemd[1]: Created slice kubepods-besteffort-podd62c5911_eae6_4eee_9954_3fcb302466b9.slice - libcontainer container kubepods-besteffort-podd62c5911_eae6_4eee_9954_3fcb302466b9.slice. 
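
The runtime-config update above hands the node's PodCIDR (192.168.0.0/24) to the container runtime, and the CNI config itself is left for flannel to drop in later. A small stdlib sketch that parses that CIDR and checks whether candidate pod IPs fall inside it; the IPs are illustrative, flannel performs the real allocation.

```go
// podcidr.go: parse the PodCIDR the kubelet reports and test membership of a
// couple of example addresses.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, podCIDR, err := net.ParseCIDR("192.168.0.0/24") // value from the log above
	if err != nil {
		panic(err)
	}
	for _, ip := range []string{"192.168.0.17", "10.244.1.5"} {
		fmt.Printf("%-12s inside %s: %v\n", ip, podCIDR, podCIDR.Contains(net.ParseIP(ip)))
	}
}
```
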
Jan 17 12:20:07.212136 kubelet[2547]: I0117 12:20:07.212105 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1-cni-plugin\") pod \"kube-flannel-ds-67bnz\" (UID: \"9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1\") " pod="kube-flannel/kube-flannel-ds-67bnz" Jan 17 12:20:07.212282 kubelet[2547]: I0117 12:20:07.212145 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1-xtables-lock\") pod \"kube-flannel-ds-67bnz\" (UID: \"9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1\") " pod="kube-flannel/kube-flannel-ds-67bnz" Jan 17 12:20:07.212282 kubelet[2547]: I0117 12:20:07.212172 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqdjj\" (UniqueName: \"kubernetes.io/projected/d62c5911-eae6-4eee-9954-3fcb302466b9-kube-api-access-bqdjj\") pod \"kube-proxy-jj5b4\" (UID: \"d62c5911-eae6-4eee-9954-3fcb302466b9\") " pod="kube-system/kube-proxy-jj5b4" Jan 17 12:20:07.212282 kubelet[2547]: I0117 12:20:07.212198 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d62c5911-eae6-4eee-9954-3fcb302466b9-xtables-lock\") pod \"kube-proxy-jj5b4\" (UID: \"d62c5911-eae6-4eee-9954-3fcb302466b9\") " pod="kube-system/kube-proxy-jj5b4" Jan 17 12:20:07.212282 kubelet[2547]: I0117 12:20:07.212229 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d62c5911-eae6-4eee-9954-3fcb302466b9-lib-modules\") pod \"kube-proxy-jj5b4\" (UID: \"d62c5911-eae6-4eee-9954-3fcb302466b9\") " pod="kube-system/kube-proxy-jj5b4" Jan 17 12:20:07.212282 kubelet[2547]: I0117 12:20:07.212268 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1-run\") pod \"kube-flannel-ds-67bnz\" (UID: \"9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1\") " pod="kube-flannel/kube-flannel-ds-67bnz" Jan 17 12:20:07.212439 kubelet[2547]: I0117 12:20:07.212302 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1-cni\") pod \"kube-flannel-ds-67bnz\" (UID: \"9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1\") " pod="kube-flannel/kube-flannel-ds-67bnz" Jan 17 12:20:07.212439 kubelet[2547]: I0117 12:20:07.212326 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4qn9\" (UniqueName: \"kubernetes.io/projected/9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1-kube-api-access-j4qn9\") pod \"kube-flannel-ds-67bnz\" (UID: \"9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1\") " pod="kube-flannel/kube-flannel-ds-67bnz" Jan 17 12:20:07.212439 kubelet[2547]: I0117 12:20:07.212348 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d62c5911-eae6-4eee-9954-3fcb302466b9-kube-proxy\") pod \"kube-proxy-jj5b4\" (UID: \"d62c5911-eae6-4eee-9954-3fcb302466b9\") " pod="kube-system/kube-proxy-jj5b4" Jan 17 12:20:07.212565 kubelet[2547]: I0117 12:20:07.212521 2547 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1-flannel-cfg\") pod \"kube-flannel-ds-67bnz\" (UID: \"9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1\") " pod="kube-flannel/kube-flannel-ds-67bnz" Jan 17 12:20:07.481381 kubelet[2547]: E0117 12:20:07.481187 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:07.482158 containerd[1474]: time="2025-01-17T12:20:07.482105081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-67bnz,Uid:9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1,Namespace:kube-flannel,Attempt:0,}" Jan 17 12:20:07.511324 kubelet[2547]: E0117 12:20:07.511277 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:07.511883 containerd[1474]: time="2025-01-17T12:20:07.511823946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jj5b4,Uid:d62c5911-eae6-4eee-9954-3fcb302466b9,Namespace:kube-system,Attempt:0,}" Jan 17 12:20:07.517414 containerd[1474]: time="2025-01-17T12:20:07.516938566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:07.517414 containerd[1474]: time="2025-01-17T12:20:07.517147691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:07.517414 containerd[1474]: time="2025-01-17T12:20:07.517168150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:07.517414 containerd[1474]: time="2025-01-17T12:20:07.517322201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:07.547065 systemd[1]: Started cri-containerd-93b43195cf629cb8b13543f7223e1e78157075d9d61f3d2a0d8ee7ea6b62ab84.scope - libcontainer container 93b43195cf629cb8b13543f7223e1e78157075d9d61f3d2a0d8ee7ea6b62ab84. Jan 17 12:20:07.552925 containerd[1474]: time="2025-01-17T12:20:07.550922423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:07.552925 containerd[1474]: time="2025-01-17T12:20:07.550977918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:07.552925 containerd[1474]: time="2025-01-17T12:20:07.551003085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:07.552925 containerd[1474]: time="2025-01-17T12:20:07.551082766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:07.577017 systemd[1]: Started cri-containerd-68adc320b827e0d803443a5a856381461fe204e18bae0fa842db39b84e1a1989.scope - libcontainer container 68adc320b827e0d803443a5a856381461fe204e18bae0fa842db39b84e1a1989. 
Jan 17 12:20:07.594197 containerd[1474]: time="2025-01-17T12:20:07.594153009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-67bnz,Uid:9fa2cd8c-2d95-4fa7-bed8-ef9f318358e1,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"93b43195cf629cb8b13543f7223e1e78157075d9d61f3d2a0d8ee7ea6b62ab84\"" Jan 17 12:20:07.595222 kubelet[2547]: E0117 12:20:07.595147 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:07.597013 containerd[1474]: time="2025-01-17T12:20:07.596951366Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 17 12:20:07.608858 containerd[1474]: time="2025-01-17T12:20:07.608808075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jj5b4,Uid:d62c5911-eae6-4eee-9954-3fcb302466b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"68adc320b827e0d803443a5a856381461fe204e18bae0fa842db39b84e1a1989\"" Jan 17 12:20:07.609554 kubelet[2547]: E0117 12:20:07.609509 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:07.612112 containerd[1474]: time="2025-01-17T12:20:07.612080669Z" level=info msg="CreateContainer within sandbox \"68adc320b827e0d803443a5a856381461fe204e18bae0fa842db39b84e1a1989\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:20:08.036496 containerd[1474]: time="2025-01-17T12:20:08.036407910Z" level=info msg="CreateContainer within sandbox \"68adc320b827e0d803443a5a856381461fe204e18bae0fa842db39b84e1a1989\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a507b67f12aca0e9a925addfd0e4da424a9a0c1dff5431567624b8c55c7a054a\"" Jan 17 12:20:08.037376 containerd[1474]: time="2025-01-17T12:20:08.037318312Z" level=info msg="StartContainer for \"a507b67f12aca0e9a925addfd0e4da424a9a0c1dff5431567624b8c55c7a054a\"" Jan 17 12:20:08.070135 systemd[1]: Started cri-containerd-a507b67f12aca0e9a925addfd0e4da424a9a0c1dff5431567624b8c55c7a054a.scope - libcontainer container a507b67f12aca0e9a925addfd0e4da424a9a0c1dff5431567624b8c55c7a054a. Jan 17 12:20:08.108175 containerd[1474]: time="2025-01-17T12:20:08.108118400Z" level=info msg="StartContainer for \"a507b67f12aca0e9a925addfd0e4da424a9a0c1dff5431567624b8c55c7a054a\" returns successfully" Jan 17 12:20:08.798389 kubelet[2547]: E0117 12:20:08.798340 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:08.806871 kubelet[2547]: I0117 12:20:08.806820 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jj5b4" podStartSLOduration=1.8067429339999999 podStartE2EDuration="1.806742934s" podCreationTimestamp="2025-01-17 12:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:20:08.806603109 +0000 UTC m=+14.156538796" watchObservedRunningTime="2025-01-17 12:20:08.806742934 +0000 UTC m=+14.156678591" Jan 17 12:20:09.286881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3587616754.mount: Deactivated successfully. 
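Note on the kubelet dns.go warning that repeats throughout this log ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"): it means the node's resolv.conf lists more nameservers than the resolver limit of three, so kubelet keeps only the first three when building pod DNS config. A minimal Go sketch of that truncation, written as a standalone check rather than kubelet's own code (the maxNameservers constant and the /etc/resolv.conf path are assumptions):

// nameserver_check.go: a minimal sketch, not kubelet's implementation, of why the
// "Nameserver limits exceeded" warnings above appear. resolv.conf supports at most
// three nameservers, so the extras are dropped and the applied line is logged.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // assumed resolver limit

func main() {
	f, err := os.Open("/etc/resolv.conf") // assumed path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}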
Jan 17 12:20:09.435616 containerd[1474]: time="2025-01-17T12:20:09.435525705Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:09.436393 containerd[1474]: time="2025-01-17T12:20:09.436302033Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 17 12:20:09.437634 containerd[1474]: time="2025-01-17T12:20:09.437597563Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:09.439614 containerd[1474]: time="2025-01-17T12:20:09.439586092Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:09.440288 containerd[1474]: time="2025-01-17T12:20:09.440264324Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.843265849s" Jan 17 12:20:09.440331 containerd[1474]: time="2025-01-17T12:20:09.440294010Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 17 12:20:09.442157 containerd[1474]: time="2025-01-17T12:20:09.442066922Z" level=info msg="CreateContainer within sandbox \"93b43195cf629cb8b13543f7223e1e78157075d9d61f3d2a0d8ee7ea6b62ab84\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 17 12:20:09.455998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955482254.mount: Deactivated successfully. Jan 17 12:20:09.457644 containerd[1474]: time="2025-01-17T12:20:09.457576393Z" level=info msg="CreateContainer within sandbox \"93b43195cf629cb8b13543f7223e1e78157075d9d61f3d2a0d8ee7ea6b62ab84\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"2492edd08ef0aca926a60ae93fb428af354fab602a424c5654cdd7d878282033\"" Jan 17 12:20:09.458258 containerd[1474]: time="2025-01-17T12:20:09.458189561Z" level=info msg="StartContainer for \"2492edd08ef0aca926a60ae93fb428af354fab602a424c5654cdd7d878282033\"" Jan 17 12:20:09.494980 systemd[1]: Started cri-containerd-2492edd08ef0aca926a60ae93fb428af354fab602a424c5654cdd7d878282033.scope - libcontainer container 2492edd08ef0aca926a60ae93fb428af354fab602a424c5654cdd7d878282033. Jan 17 12:20:09.523364 systemd[1]: cri-containerd-2492edd08ef0aca926a60ae93fb428af354fab602a424c5654cdd7d878282033.scope: Deactivated successfully. Jan 17 12:20:09.524838 containerd[1474]: time="2025-01-17T12:20:09.524737159Z" level=info msg="StartContainer for \"2492edd08ef0aca926a60ae93fb428af354fab602a424c5654cdd7d878282033\" returns successfully" Jan 17 12:20:09.548381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2492edd08ef0aca926a60ae93fb428af354fab602a424c5654cdd7d878282033-rootfs.mount: Deactivated successfully. 
Jan 17 12:20:09.597693 containerd[1474]: time="2025-01-17T12:20:09.597615875Z" level=info msg="shim disconnected" id=2492edd08ef0aca926a60ae93fb428af354fab602a424c5654cdd7d878282033 namespace=k8s.io Jan 17 12:20:09.597951 containerd[1474]: time="2025-01-17T12:20:09.597703932Z" level=warning msg="cleaning up after shim disconnected" id=2492edd08ef0aca926a60ae93fb428af354fab602a424c5654cdd7d878282033 namespace=k8s.io Jan 17 12:20:09.597951 containerd[1474]: time="2025-01-17T12:20:09.597717407Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:20:09.764216 kubelet[2547]: E0117 12:20:09.764128 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:09.802658 kubelet[2547]: E0117 12:20:09.802494 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:09.802658 kubelet[2547]: E0117 12:20:09.802596 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:09.803273 kubelet[2547]: E0117 12:20:09.803251 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:09.803462 containerd[1474]: time="2025-01-17T12:20:09.803425175Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 17 12:20:11.706317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount496703841.mount: Deactivated successfully. Jan 17 12:20:12.291826 containerd[1474]: time="2025-01-17T12:20:12.291717544Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:12.292807 containerd[1474]: time="2025-01-17T12:20:12.292681313Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jan 17 12:20:12.294195 containerd[1474]: time="2025-01-17T12:20:12.294147802Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:12.298347 containerd[1474]: time="2025-01-17T12:20:12.298297345Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:12.301680 containerd[1474]: time="2025-01-17T12:20:12.301606111Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.49814021s" Jan 17 12:20:12.301680 containerd[1474]: time="2025-01-17T12:20:12.301645776Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 17 12:20:12.303791 containerd[1474]: time="2025-01-17T12:20:12.303733458Z" level=info msg="CreateContainer within sandbox 
\"93b43195cf629cb8b13543f7223e1e78157075d9d61f3d2a0d8ee7ea6b62ab84\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:20:12.471275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1379775021.mount: Deactivated successfully. Jan 17 12:20:12.475504 containerd[1474]: time="2025-01-17T12:20:12.475441752Z" level=info msg="CreateContainer within sandbox \"93b43195cf629cb8b13543f7223e1e78157075d9d61f3d2a0d8ee7ea6b62ab84\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7c8ef2b51914c9f6055727b75393ea2f686524aa4172d9ab23916a9f545ad05e\"" Jan 17 12:20:12.476273 containerd[1474]: time="2025-01-17T12:20:12.476236643Z" level=info msg="StartContainer for \"7c8ef2b51914c9f6055727b75393ea2f686524aa4172d9ab23916a9f545ad05e\"" Jan 17 12:20:12.530118 systemd[1]: Started cri-containerd-7c8ef2b51914c9f6055727b75393ea2f686524aa4172d9ab23916a9f545ad05e.scope - libcontainer container 7c8ef2b51914c9f6055727b75393ea2f686524aa4172d9ab23916a9f545ad05e. Jan 17 12:20:12.596564 systemd[1]: cri-containerd-7c8ef2b51914c9f6055727b75393ea2f686524aa4172d9ab23916a9f545ad05e.scope: Deactivated successfully. Jan 17 12:20:12.661918 kubelet[2547]: I0117 12:20:12.661868 2547 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:20:12.697102 containerd[1474]: time="2025-01-17T12:20:12.696710087Z" level=info msg="StartContainer for \"7c8ef2b51914c9f6055727b75393ea2f686524aa4172d9ab23916a9f545ad05e\" returns successfully" Jan 17 12:20:12.713335 kubelet[2547]: I0117 12:20:12.713080 2547 topology_manager.go:215] "Topology Admit Handler" podUID="ce5111d6-8c34-466f-8976-092ed14516a0" podNamespace="kube-system" podName="coredns-76f75df574-7xj6p" Jan 17 12:20:12.716885 kubelet[2547]: I0117 12:20:12.715363 2547 topology_manager.go:215] "Topology Admit Handler" podUID="d7783a4b-8366-4f69-be9d-8af15b9718d1" podNamespace="kube-system" podName="coredns-76f75df574-fsdgt" Jan 17 12:20:12.733000 systemd[1]: Created slice kubepods-burstable-podce5111d6_8c34_466f_8976_092ed14516a0.slice - libcontainer container kubepods-burstable-podce5111d6_8c34_466f_8976_092ed14516a0.slice. Jan 17 12:20:12.751697 systemd[1]: Created slice kubepods-burstable-podd7783a4b_8366_4f69_be9d_8af15b9718d1.slice - libcontainer container kubepods-burstable-podd7783a4b_8366_4f69_be9d_8af15b9718d1.slice. Jan 17 12:20:12.758285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c8ef2b51914c9f6055727b75393ea2f686524aa4172d9ab23916a9f545ad05e-rootfs.mount: Deactivated successfully. 
Jan 17 12:20:12.820207 kubelet[2547]: E0117 12:20:12.819449 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:12.848851 kubelet[2547]: I0117 12:20:12.846584 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddj8d\" (UniqueName: \"kubernetes.io/projected/ce5111d6-8c34-466f-8976-092ed14516a0-kube-api-access-ddj8d\") pod \"coredns-76f75df574-7xj6p\" (UID: \"ce5111d6-8c34-466f-8976-092ed14516a0\") " pod="kube-system/coredns-76f75df574-7xj6p" Jan 17 12:20:12.848851 kubelet[2547]: I0117 12:20:12.848320 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fg8s\" (UniqueName: \"kubernetes.io/projected/d7783a4b-8366-4f69-be9d-8af15b9718d1-kube-api-access-2fg8s\") pod \"coredns-76f75df574-fsdgt\" (UID: \"d7783a4b-8366-4f69-be9d-8af15b9718d1\") " pod="kube-system/coredns-76f75df574-fsdgt" Jan 17 12:20:12.848851 kubelet[2547]: I0117 12:20:12.848379 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce5111d6-8c34-466f-8976-092ed14516a0-config-volume\") pod \"coredns-76f75df574-7xj6p\" (UID: \"ce5111d6-8c34-466f-8976-092ed14516a0\") " pod="kube-system/coredns-76f75df574-7xj6p" Jan 17 12:20:12.848851 kubelet[2547]: I0117 12:20:12.848411 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7783a4b-8366-4f69-be9d-8af15b9718d1-config-volume\") pod \"coredns-76f75df574-fsdgt\" (UID: \"d7783a4b-8366-4f69-be9d-8af15b9718d1\") " pod="kube-system/coredns-76f75df574-fsdgt" Jan 17 12:20:12.914482 containerd[1474]: time="2025-01-17T12:20:12.914264903Z" level=info msg="shim disconnected" id=7c8ef2b51914c9f6055727b75393ea2f686524aa4172d9ab23916a9f545ad05e namespace=k8s.io Jan 17 12:20:12.914482 containerd[1474]: time="2025-01-17T12:20:12.914359131Z" level=warning msg="cleaning up after shim disconnected" id=7c8ef2b51914c9f6055727b75393ea2f686524aa4172d9ab23916a9f545ad05e namespace=k8s.io Jan 17 12:20:12.914482 containerd[1474]: time="2025-01-17T12:20:12.914375391Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:20:13.045989 kubelet[2547]: E0117 12:20:13.045908 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:13.046868 containerd[1474]: time="2025-01-17T12:20:13.046767939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7xj6p,Uid:ce5111d6-8c34-466f-8976-092ed14516a0,Namespace:kube-system,Attempt:0,}" Jan 17 12:20:13.071761 kubelet[2547]: E0117 12:20:13.071678 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:13.072581 containerd[1474]: time="2025-01-17T12:20:13.072510807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fsdgt,Uid:d7783a4b-8366-4f69-be9d-8af15b9718d1,Namespace:kube-system,Attempt:0,}" Jan 17 12:20:13.127740 containerd[1474]: time="2025-01-17T12:20:13.127545379Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-7xj6p,Uid:ce5111d6-8c34-466f-8976-092ed14516a0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e8185430b7151906415d2cd437cd47ab51db366baf0462feaa27fbd81fedd5e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 12:20:13.128301 kubelet[2547]: E0117 12:20:13.128066 2547 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e8185430b7151906415d2cd437cd47ab51db366baf0462feaa27fbd81fedd5e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 12:20:13.128301 kubelet[2547]: E0117 12:20:13.128150 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e8185430b7151906415d2cd437cd47ab51db366baf0462feaa27fbd81fedd5e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-7xj6p" Jan 17 12:20:13.128301 kubelet[2547]: E0117 12:20:13.128195 2547 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e8185430b7151906415d2cd437cd47ab51db366baf0462feaa27fbd81fedd5e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-7xj6p" Jan 17 12:20:13.128301 kubelet[2547]: E0117 12:20:13.128289 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-7xj6p_kube-system(ce5111d6-8c34-466f-8976-092ed14516a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-7xj6p_kube-system(ce5111d6-8c34-466f-8976-092ed14516a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e8185430b7151906415d2cd437cd47ab51db366baf0462feaa27fbd81fedd5e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-7xj6p" podUID="ce5111d6-8c34-466f-8976-092ed14516a0" Jan 17 12:20:13.133354 containerd[1474]: time="2025-01-17T12:20:13.133297594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fsdgt,Uid:d7783a4b-8366-4f69-be9d-8af15b9718d1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c9a9de1efa0ff9023a357c15828e736a1273e632b4b75f98a2cb29609d666c3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 12:20:13.133571 kubelet[2547]: E0117 12:20:13.133535 2547 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c9a9de1efa0ff9023a357c15828e736a1273e632b4b75f98a2cb29609d666c3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 12:20:13.133639 kubelet[2547]: E0117 12:20:13.133594 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9c9a9de1efa0ff9023a357c15828e736a1273e632b4b75f98a2cb29609d666c3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-fsdgt" Jan 17 12:20:13.133639 kubelet[2547]: E0117 12:20:13.133632 2547 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c9a9de1efa0ff9023a357c15828e736a1273e632b4b75f98a2cb29609d666c3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-fsdgt" Jan 17 12:20:13.133724 kubelet[2547]: E0117 12:20:13.133706 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-fsdgt_kube-system(d7783a4b-8366-4f69-be9d-8af15b9718d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-fsdgt_kube-system(d7783a4b-8366-4f69-be9d-8af15b9718d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c9a9de1efa0ff9023a357c15828e736a1273e632b4b75f98a2cb29609d666c3\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-fsdgt" podUID="d7783a4b-8366-4f69-be9d-8af15b9718d1" Jan 17 12:20:13.606350 systemd[1]: run-netns-cni\x2d648b1e7f\x2d84b6\x2d66b2\x2d2da2\x2d929c20724434.mount: Deactivated successfully. Jan 17 12:20:13.606523 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e8185430b7151906415d2cd437cd47ab51db366baf0462feaa27fbd81fedd5e-shm.mount: Deactivated successfully. Jan 17 12:20:13.820983 kubelet[2547]: E0117 12:20:13.820937 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:13.824571 containerd[1474]: time="2025-01-17T12:20:13.824462552Z" level=info msg="CreateContainer within sandbox \"93b43195cf629cb8b13543f7223e1e78157075d9d61f3d2a0d8ee7ea6b62ab84\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 17 12:20:13.839536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1035712290.mount: Deactivated successfully. Jan 17 12:20:13.843683 containerd[1474]: time="2025-01-17T12:20:13.843621506Z" level=info msg="CreateContainer within sandbox \"93b43195cf629cb8b13543f7223e1e78157075d9d61f3d2a0d8ee7ea6b62ab84\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"7f5355e0577361fe35959f25d06b0b7979adafd9bf750b03b92bb4f881e9a035\"" Jan 17 12:20:13.844388 containerd[1474]: time="2025-01-17T12:20:13.844347897Z" level=info msg="StartContainer for \"7f5355e0577361fe35959f25d06b0b7979adafd9bf750b03b92bb4f881e9a035\"" Jan 17 12:20:13.887125 systemd[1]: Started cri-containerd-7f5355e0577361fe35959f25d06b0b7979adafd9bf750b03b92bb4f881e9a035.scope - libcontainer container 7f5355e0577361fe35959f25d06b0b7979adafd9bf750b03b92bb4f881e9a035. 
Jan 17 12:20:13.930337 containerd[1474]: time="2025-01-17T12:20:13.930256290Z" level=info msg="StartContainer for \"7f5355e0577361fe35959f25d06b0b7979adafd9bf750b03b92bb4f881e9a035\" returns successfully" Jan 17 12:20:14.825699 kubelet[2547]: E0117 12:20:14.825648 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:14.990558 systemd-networkd[1402]: flannel.1: Link UP Jan 17 12:20:14.990827 systemd-networkd[1402]: flannel.1: Gained carrier Jan 17 12:20:15.827674 kubelet[2547]: E0117 12:20:15.827609 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:16.889960 systemd-networkd[1402]: flannel.1: Gained IPv6LL Jan 17 12:20:23.524265 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:39426.service - OpenSSH per-connection server daemon (10.0.0.1:39426). Jan 17 12:20:23.582653 sshd[3221]: Accepted publickey for core from 10.0.0.1 port 39426 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:20:23.584567 sshd[3221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:23.588756 systemd-logind[1451]: New session 6 of user core. Jan 17 12:20:23.598906 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:20:23.776033 sshd[3221]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:23.780059 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:39426.service: Deactivated successfully. Jan 17 12:20:23.782365 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:20:23.783055 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:20:23.784060 systemd-logind[1451]: Removed session 6. 
Jan 17 12:20:24.758798 kubelet[2547]: E0117 12:20:24.758753 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:24.759599 containerd[1474]: time="2025-01-17T12:20:24.759552836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7xj6p,Uid:ce5111d6-8c34-466f-8976-092ed14516a0,Namespace:kube-system,Attempt:0,}" Jan 17 12:20:25.480729 systemd-networkd[1402]: cni0: Link UP Jan 17 12:20:25.480744 systemd-networkd[1402]: cni0: Gained carrier Jan 17 12:20:25.484471 systemd-networkd[1402]: cni0: Lost carrier Jan 17 12:20:25.490008 systemd-networkd[1402]: vethac061cb2: Link UP Jan 17 12:20:25.491949 kernel: cni0: port 1(vethac061cb2) entered blocking state Jan 17 12:20:25.492029 kernel: cni0: port 1(vethac061cb2) entered disabled state Jan 17 12:20:25.492061 kernel: vethac061cb2: entered allmulticast mode Jan 17 12:20:25.493257 kernel: vethac061cb2: entered promiscuous mode Jan 17 12:20:25.494266 kernel: cni0: port 1(vethac061cb2) entered blocking state Jan 17 12:20:25.494314 kernel: cni0: port 1(vethac061cb2) entered forwarding state Jan 17 12:20:25.495995 kernel: cni0: port 1(vethac061cb2) entered disabled state Jan 17 12:20:25.505212 kernel: cni0: port 1(vethac061cb2) entered blocking state Jan 17 12:20:25.505350 kernel: cni0: port 1(vethac061cb2) entered forwarding state Jan 17 12:20:25.505515 systemd-networkd[1402]: vethac061cb2: Gained carrier Jan 17 12:20:25.506418 systemd-networkd[1402]: cni0: Gained carrier Jan 17 12:20:25.510252 containerd[1474]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00011c8e8), "name":"cbr0", "type":"bridge"} Jan 17 12:20:25.510252 containerd[1474]: delegateAdd: netconf sent to delegate plugin: Jan 17 12:20:25.570139 containerd[1474]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-17T12:20:25.569963859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:25.570139 containerd[1474]: time="2025-01-17T12:20:25.570048579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:25.570139 containerd[1474]: time="2025-01-17T12:20:25.570062314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:25.570139 containerd[1474]: time="2025-01-17T12:20:25.570136624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:25.594921 systemd[1]: Started cri-containerd-d62911da358f6b9e909bd4b35157976c6052eda4f673e86ffcf9131a670dc1c4.scope - libcontainer container d62911da358f6b9e909bd4b35157976c6052eda4f673e86ffcf9131a670dc1c4. 
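This retry of the coredns-7xj6p sandbox succeeds: flannel.1 (the overlay device flannel creates for its default VXLAN backend) came up at 12:20:14, and with /run/flannel/subnet.env presumably in place the flannel plugin now delegates to the bridge plugin, which creates cni0 and the vethac061cb2 pair seen in the kernel messages. A minimal Go sketch that rebuilds the delegate netconf printed above from the subnet.env values, to show where each field comes from (an illustration of the structure, not flannel's code):

// delegate_netconf_sketch.go: a minimal sketch, not flannel's implementation, of how the
// "delegateAdd: netconf sent to delegate plugin" JSON above is assembled from subnet.env values.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conf := map[string]any{
		"cniVersion":       "0.3.1",
		"name":             "cbr0",
		"type":             "bridge",
		"mtu":              1450, // FLANNEL_MTU
		"isGateway":        true,
		"isDefaultGateway": true,
		"hairpinMode":      true,
		"ipMasq":           false, // flannel handles masquerading itself
		"ipam": map[string]any{
			"type":   "host-local",
			"ranges": [][]map[string]string{{{"subnet": "192.168.0.0/24"}}}, // this node's FLANNEL_SUBNET range
			"routes": []map[string]string{{"dst": "192.168.0.0/17"}},        // cluster-wide FLANNEL_NETWORK
		},
	}
	out, _ := json.Marshal(conf)
	fmt.Println(string(out)) // should match the netconf JSON in the log above
}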
Jan 17 12:20:25.607588 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:20:25.634735 containerd[1474]: time="2025-01-17T12:20:25.634681898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7xj6p,Uid:ce5111d6-8c34-466f-8976-092ed14516a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d62911da358f6b9e909bd4b35157976c6052eda4f673e86ffcf9131a670dc1c4\"" Jan 17 12:20:25.635690 kubelet[2547]: E0117 12:20:25.635663 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:25.637857 containerd[1474]: time="2025-01-17T12:20:25.637793390Z" level=info msg="CreateContainer within sandbox \"d62911da358f6b9e909bd4b35157976c6052eda4f673e86ffcf9131a670dc1c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:20:25.958538 containerd[1474]: time="2025-01-17T12:20:25.958460142Z" level=info msg="CreateContainer within sandbox \"d62911da358f6b9e909bd4b35157976c6052eda4f673e86ffcf9131a670dc1c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ced4bf2cfabbfe0f4d0db792debf56ca4e958edfce6b8901fc20b77a5374a892\"" Jan 17 12:20:25.959351 containerd[1474]: time="2025-01-17T12:20:25.959271047Z" level=info msg="StartContainer for \"ced4bf2cfabbfe0f4d0db792debf56ca4e958edfce6b8901fc20b77a5374a892\"" Jan 17 12:20:25.994952 systemd[1]: Started cri-containerd-ced4bf2cfabbfe0f4d0db792debf56ca4e958edfce6b8901fc20b77a5374a892.scope - libcontainer container ced4bf2cfabbfe0f4d0db792debf56ca4e958edfce6b8901fc20b77a5374a892. Jan 17 12:20:26.109172 containerd[1474]: time="2025-01-17T12:20:26.109099639Z" level=info msg="StartContainer for \"ced4bf2cfabbfe0f4d0db792debf56ca4e958edfce6b8901fc20b77a5374a892\" returns successfully" Jan 17 12:20:26.253959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount637911721.mount: Deactivated successfully. 
Jan 17 12:20:26.852332 kubelet[2547]: E0117 12:20:26.852268 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:26.874050 systemd-networkd[1402]: cni0: Gained IPv6LL Jan 17 12:20:27.001275 kubelet[2547]: I0117 12:20:27.001214 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-67bnz" podStartSLOduration=15.295298108 podStartE2EDuration="20.001115111s" podCreationTimestamp="2025-01-17 12:20:07 +0000 UTC" firstStartedPulling="2025-01-17 12:20:07.596136113 +0000 UTC m=+12.946071770" lastFinishedPulling="2025-01-17 12:20:12.301953116 +0000 UTC m=+17.651888773" observedRunningTime="2025-01-17 12:20:14.844188724 +0000 UTC m=+20.194124381" watchObservedRunningTime="2025-01-17 12:20:27.001115111 +0000 UTC m=+32.351050768" Jan 17 12:20:27.001989 systemd-networkd[1402]: vethac061cb2: Gained IPv6LL Jan 17 12:20:27.011760 kubelet[2547]: I0117 12:20:27.011679 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7xj6p" podStartSLOduration=20.011631022 podStartE2EDuration="20.011631022s" podCreationTimestamp="2025-01-17 12:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:20:27.001707014 +0000 UTC m=+32.351642671" watchObservedRunningTime="2025-01-17 12:20:27.011631022 +0000 UTC m=+32.361566679" Jan 17 12:20:27.759111 kubelet[2547]: E0117 12:20:27.759054 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:27.759607 containerd[1474]: time="2025-01-17T12:20:27.759566338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fsdgt,Uid:d7783a4b-8366-4f69-be9d-8af15b9718d1,Namespace:kube-system,Attempt:0,}" Jan 17 12:20:27.782140 systemd-networkd[1402]: veth25b2246a: Link UP Jan 17 12:20:27.784548 kernel: cni0: port 2(veth25b2246a) entered blocking state Jan 17 12:20:27.784637 kernel: cni0: port 2(veth25b2246a) entered disabled state Jan 17 12:20:27.784663 kernel: veth25b2246a: entered allmulticast mode Jan 17 12:20:27.786077 kernel: veth25b2246a: entered promiscuous mode Jan 17 12:20:27.791987 kernel: cni0: port 2(veth25b2246a) entered blocking state Jan 17 12:20:27.792080 kernel: cni0: port 2(veth25b2246a) entered forwarding state Jan 17 12:20:27.792286 systemd-networkd[1402]: veth25b2246a: Gained carrier Jan 17 12:20:27.796431 containerd[1474]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000ae8e8), "name":"cbr0", "type":"bridge"} Jan 17 12:20:27.796431 containerd[1474]: delegateAdd: netconf sent to delegate plugin: Jan 17 12:20:27.823725 containerd[1474]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-17T12:20:27.823254980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:27.823725 containerd[1474]: time="2025-01-17T12:20:27.823329770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:27.823725 containerd[1474]: time="2025-01-17T12:20:27.823347994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:27.823725 containerd[1474]: time="2025-01-17T12:20:27.823448734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:27.851984 systemd[1]: Started cri-containerd-63c8be9eb9ee0b9b5f06c45be740d6c308a8bc8571c91895e7050457c96df04a.scope - libcontainer container 63c8be9eb9ee0b9b5f06c45be740d6c308a8bc8571c91895e7050457c96df04a. Jan 17 12:20:27.854953 kubelet[2547]: E0117 12:20:27.854918 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:27.870266 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:20:27.899383 containerd[1474]: time="2025-01-17T12:20:27.899314238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fsdgt,Uid:d7783a4b-8366-4f69-be9d-8af15b9718d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"63c8be9eb9ee0b9b5f06c45be740d6c308a8bc8571c91895e7050457c96df04a\"" Jan 17 12:20:27.900396 kubelet[2547]: E0117 12:20:27.900367 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:27.903138 containerd[1474]: time="2025-01-17T12:20:27.903068396Z" level=info msg="CreateContainer within sandbox \"63c8be9eb9ee0b9b5f06c45be740d6c308a8bc8571c91895e7050457c96df04a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:20:27.959555 containerd[1474]: time="2025-01-17T12:20:27.959444989Z" level=info msg="CreateContainer within sandbox \"63c8be9eb9ee0b9b5f06c45be740d6c308a8bc8571c91895e7050457c96df04a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0c95785ec1dc48b236bca7ad285f84d4fb305516aafd429ce4f846ba4201599\"" Jan 17 12:20:27.960066 containerd[1474]: time="2025-01-17T12:20:27.960038185Z" level=info msg="StartContainer for \"c0c95785ec1dc48b236bca7ad285f84d4fb305516aafd429ce4f846ba4201599\"" Jan 17 12:20:27.996970 systemd[1]: Started cri-containerd-c0c95785ec1dc48b236bca7ad285f84d4fb305516aafd429ce4f846ba4201599.scope - libcontainer container c0c95785ec1dc48b236bca7ad285f84d4fb305516aafd429ce4f846ba4201599. Jan 17 12:20:28.031676 containerd[1474]: time="2025-01-17T12:20:28.030245863Z" level=info msg="StartContainer for \"c0c95785ec1dc48b236bca7ad285f84d4fb305516aafd429ce4f846ba4201599\" returns successfully" Jan 17 12:20:28.772981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount553402143.mount: Deactivated successfully. 
Jan 17 12:20:28.790450 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:59566.service - OpenSSH per-connection server daemon (10.0.0.1:59566). Jan 17 12:20:28.848966 sshd[3489]: Accepted publickey for core from 10.0.0.1 port 59566 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:20:28.851246 sshd[3489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:28.856834 systemd-logind[1451]: New session 7 of user core. Jan 17 12:20:28.859130 kubelet[2547]: E0117 12:20:28.859094 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:28.859130 kubelet[2547]: E0117 12:20:28.859220 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:28.862217 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:20:28.873406 kubelet[2547]: I0117 12:20:28.873365 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fsdgt" podStartSLOduration=21.873311724 podStartE2EDuration="21.873311724s" podCreationTimestamp="2025-01-17 12:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:20:28.872889611 +0000 UTC m=+34.222825298" watchObservedRunningTime="2025-01-17 12:20:28.873311724 +0000 UTC m=+34.223247391" Jan 17 12:20:28.995772 sshd[3489]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:29.002137 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:59566.service: Deactivated successfully. Jan 17 12:20:29.004647 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:20:29.005707 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:20:29.007168 systemd-logind[1451]: Removed session 7. Jan 17 12:20:29.626013 systemd-networkd[1402]: veth25b2246a: Gained IPv6LL Jan 17 12:20:33.072607 kubelet[2547]: E0117 12:20:33.072568 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:33.870347 kubelet[2547]: E0117 12:20:33.870293 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:20:34.008599 systemd[1]: Started sshd@7-10.0.0.133:22-10.0.0.1:59602.service - OpenSSH per-connection server daemon (10.0.0.1:59602). Jan 17 12:20:34.046348 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 59602 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:20:34.048579 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:34.052669 systemd-logind[1451]: New session 8 of user core. Jan 17 12:20:34.059967 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:20:34.181348 sshd[3531]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:34.185849 systemd[1]: sshd@7-10.0.0.133:22-10.0.0.1:59602.service: Deactivated successfully. Jan 17 12:20:34.188443 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:20:34.189329 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. 
Jan 17 12:20:34.190434 systemd-logind[1451]: Removed session 8. Jan 17 12:20:39.195391 systemd[1]: Started sshd@8-10.0.0.133:22-10.0.0.1:38750.service - OpenSSH per-connection server daemon (10.0.0.1:38750). Jan 17 12:20:39.249390 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 38750 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:20:39.251151 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:39.255622 systemd-logind[1451]: New session 9 of user core. Jan 17 12:20:39.265945 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:20:39.378549 sshd[3571]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:39.396038 systemd[1]: sshd@8-10.0.0.133:22-10.0.0.1:38750.service: Deactivated successfully. Jan 17 12:20:39.398345 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:20:39.400065 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:20:39.408052 systemd[1]: Started sshd@9-10.0.0.133:22-10.0.0.1:38758.service - OpenSSH per-connection server daemon (10.0.0.1:38758). Jan 17 12:20:39.408989 systemd-logind[1451]: Removed session 9. Jan 17 12:20:39.440794 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 38758 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:20:39.442178 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:39.446663 systemd-logind[1451]: New session 10 of user core. Jan 17 12:20:39.459160 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:20:39.595807 sshd[3586]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:39.609554 systemd[1]: sshd@9-10.0.0.133:22-10.0.0.1:38758.service: Deactivated successfully. Jan 17 12:20:39.612425 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:20:39.614405 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:20:39.625369 systemd[1]: Started sshd@10-10.0.0.133:22-10.0.0.1:38770.service - OpenSSH per-connection server daemon (10.0.0.1:38770). Jan 17 12:20:39.626713 systemd-logind[1451]: Removed session 10. Jan 17 12:20:39.658166 sshd[3599]: Accepted publickey for core from 10.0.0.1 port 38770 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:20:39.659851 sshd[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:39.664746 systemd-logind[1451]: New session 11 of user core. Jan 17 12:20:39.673925 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:20:39.784354 sshd[3599]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:39.788011 systemd[1]: sshd@10-10.0.0.133:22-10.0.0.1:38770.service: Deactivated successfully. Jan 17 12:20:39.790330 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:20:39.791876 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:20:39.793400 systemd-logind[1451]: Removed session 11. Jan 17 12:20:44.796417 systemd[1]: Started sshd@11-10.0.0.133:22-10.0.0.1:38812.service - OpenSSH per-connection server daemon (10.0.0.1:38812). Jan 17 12:20:44.835445 sshd[3634]: Accepted publickey for core from 10.0.0.1 port 38812 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:20:44.837311 sshd[3634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:44.842066 systemd-logind[1451]: New session 12 of user core. 
Jan 17 12:20:44.852930 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:20:44.969386 sshd[3634]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:44.973924 systemd[1]: sshd@11-10.0.0.133:22-10.0.0.1:38812.service: Deactivated successfully. Jan 17 12:20:44.976448 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:20:44.977232 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:20:44.978259 systemd-logind[1451]: Removed session 12. Jan 17 12:20:49.988165 systemd[1]: Started sshd@12-10.0.0.133:22-10.0.0.1:43972.service - OpenSSH per-connection server daemon (10.0.0.1:43972). Jan 17 12:20:50.045578 sshd[3670]: Accepted publickey for core from 10.0.0.1 port 43972 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:20:50.047988 sshd[3670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:50.053527 systemd-logind[1451]: New session 13 of user core. Jan 17 12:20:50.060033 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:20:50.174446 sshd[3670]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:50.186025 systemd[1]: sshd@12-10.0.0.133:22-10.0.0.1:43972.service: Deactivated successfully. Jan 17 12:20:50.188464 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:20:50.190377 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:20:50.199238 systemd[1]: Started sshd@13-10.0.0.133:22-10.0.0.1:43984.service - OpenSSH per-connection server daemon (10.0.0.1:43984). Jan 17 12:20:50.200566 systemd-logind[1451]: Removed session 13. Jan 17 12:20:50.234550 sshd[3690]: Accepted publickey for core from 10.0.0.1 port 43984 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:20:50.236651 sshd[3690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:50.241575 systemd-logind[1451]: New session 14 of user core. Jan 17 12:20:50.251011 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:20:50.508148 sshd[3690]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:50.518379 systemd[1]: sshd@13-10.0.0.133:22-10.0.0.1:43984.service: Deactivated successfully. Jan 17 12:20:50.520474 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:20:50.522253 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:20:50.523664 systemd[1]: Started sshd@14-10.0.0.133:22-10.0.0.1:44000.service - OpenSSH per-connection server daemon (10.0.0.1:44000). Jan 17 12:20:50.524761 systemd-logind[1451]: Removed session 14. Jan 17 12:20:50.563032 sshd[3718]: Accepted publickey for core from 10.0.0.1 port 44000 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:20:50.565019 sshd[3718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:50.569856 systemd-logind[1451]: New session 15 of user core. Jan 17 12:20:50.577945 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:20:52.460387 sshd[3718]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:52.467886 systemd[1]: sshd@14-10.0.0.133:22-10.0.0.1:44000.service: Deactivated successfully. Jan 17 12:20:52.470458 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:20:52.472597 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit. 
Jan 17 12:20:52.479152 systemd[1]: Started sshd@15-10.0.0.133:22-10.0.0.1:44002.service - OpenSSH per-connection server daemon (10.0.0.1:44002). Jan 17 12:20:52.480405 systemd-logind[1451]: Removed session 15. Jan 17 12:20:52.512307 sshd[3749]: Accepted publickey for core from 10.0.0.1 port 44002 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:20:52.513996 sshd[3749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:52.518720 systemd-logind[1451]: New session 16 of user core. Jan 17 12:20:52.524037 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:20:52.808075 sshd[3749]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:52.819821 systemd[1]: sshd@15-10.0.0.133:22-10.0.0.1:44002.service: Deactivated successfully. Jan 17 12:20:52.822645 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:20:52.824704 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:20:52.834143 systemd[1]: Started sshd@16-10.0.0.133:22-10.0.0.1:44018.service - OpenSSH per-connection server daemon (10.0.0.1:44018). Jan 17 12:20:52.835227 systemd-logind[1451]: Removed session 16. Jan 17 12:20:52.870731 sshd[3761]: Accepted publickey for core from 10.0.0.1 port 44018 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:20:52.872803 sshd[3761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:52.877828 systemd-logind[1451]: New session 17 of user core. Jan 17 12:20:52.887006 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:20:53.002029 sshd[3761]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:53.008011 systemd[1]: sshd@16-10.0.0.133:22-10.0.0.1:44018.service: Deactivated successfully. Jan 17 12:20:53.010327 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:20:53.011280 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:20:53.012798 systemd-logind[1451]: Removed session 17. Jan 17 12:20:58.016942 systemd[1]: Started sshd@17-10.0.0.133:22-10.0.0.1:55744.service - OpenSSH per-connection server daemon (10.0.0.1:55744). Jan 17 12:20:58.055884 sshd[3798]: Accepted publickey for core from 10.0.0.1 port 55744 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:20:58.057858 sshd[3798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:58.061909 systemd-logind[1451]: New session 18 of user core. Jan 17 12:20:58.070004 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:20:58.198708 sshd[3798]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:58.203691 systemd[1]: sshd@17-10.0.0.133:22-10.0.0.1:55744.service: Deactivated successfully. Jan 17 12:20:58.205988 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:20:58.206894 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:20:58.207942 systemd-logind[1451]: Removed session 18. Jan 17 12:21:03.222391 systemd[1]: Started sshd@18-10.0.0.133:22-10.0.0.1:55776.service - OpenSSH per-connection server daemon (10.0.0.1:55776). 
Jan 17 12:21:03.260799 sshd[3836]: Accepted publickey for core from 10.0.0.1 port 55776 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:21:03.262803 sshd[3836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:03.267426 systemd-logind[1451]: New session 19 of user core. Jan 17 12:21:03.283075 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:21:03.405122 sshd[3836]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:03.410654 systemd[1]: sshd@18-10.0.0.133:22-10.0.0.1:55776.service: Deactivated successfully. Jan 17 12:21:03.413626 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:21:03.414620 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:21:03.415967 systemd-logind[1451]: Removed session 19. Jan 17 12:21:05.759440 kubelet[2547]: E0117 12:21:05.759362 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:21:08.421455 systemd[1]: Started sshd@19-10.0.0.133:22-10.0.0.1:39930.service - OpenSSH per-connection server daemon (10.0.0.1:39930). Jan 17 12:21:08.460971 sshd[3873]: Accepted publickey for core from 10.0.0.1 port 39930 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:21:08.462861 sshd[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:08.467530 systemd-logind[1451]: New session 20 of user core. Jan 17 12:21:08.473966 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:21:08.587355 sshd[3873]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:08.592588 systemd[1]: sshd@19-10.0.0.133:22-10.0.0.1:39930.service: Deactivated successfully. Jan 17 12:21:08.595239 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:21:08.596074 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:21:08.597087 systemd-logind[1451]: Removed session 20. Jan 17 12:21:13.604721 systemd[1]: Started sshd@20-10.0.0.133:22-10.0.0.1:39940.service - OpenSSH per-connection server daemon (10.0.0.1:39940). Jan 17 12:21:13.643463 sshd[3908]: Accepted publickey for core from 10.0.0.1 port 39940 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:21:13.645436 sshd[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:13.650552 systemd-logind[1451]: New session 21 of user core. Jan 17 12:21:13.660961 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:21:13.777954 sshd[3908]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:13.782244 systemd[1]: sshd@20-10.0.0.133:22-10.0.0.1:39940.service: Deactivated successfully. Jan 17 12:21:13.784628 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:21:13.785316 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:21:13.786305 systemd-logind[1451]: Removed session 21.