Dec 13 01:36:02.032774 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:36:02.032801 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:36:02.032813 kernel: BIOS-provided physical RAM map:
Dec 13 01:36:02.032819 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 01:36:02.032825 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 01:36:02.032832 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 01:36:02.032839 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 01:36:02.032845 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 01:36:02.032852 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Dec 13 01:36:02.032858 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Dec 13 01:36:02.032883 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Dec 13 01:36:02.032889 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Dec 13 01:36:02.032895 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Dec 13 01:36:02.032902 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Dec 13 01:36:02.032912 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Dec 13 01:36:02.032919 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 01:36:02.032928 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Dec 13 01:36:02.032935 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Dec 13 01:36:02.032942 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 01:36:02.032949 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:36:02.032956 kernel: NX (Execute Disable) protection: active
Dec 13 01:36:02.032962 kernel: APIC: Static calls initialized
Dec 13 01:36:02.032969 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:36:02.032976 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Dec 13 01:36:02.032983 kernel: SMBIOS 2.8 present.
Dec 13 01:36:02.032989 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Dec 13 01:36:02.032996 kernel: Hypervisor detected: KVM
Dec 13 01:36:02.033005 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:36:02.033012 kernel: kvm-clock: using sched offset of 6061509408 cycles
Dec 13 01:36:02.033019 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:36:02.033026 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:36:02.033034 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:36:02.033041 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:36:02.033048 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Dec 13 01:36:02.033055 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 13 01:36:02.033064 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:36:02.033074 kernel: Using GB pages for direct mapping
Dec 13 01:36:02.033081 kernel: Secure boot disabled
Dec 13 01:36:02.033088 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:36:02.033095 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 13 01:36:02.033108 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 01:36:02.033116 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:36:02.033123 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:36:02.033136 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 13 01:36:02.033149 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:36:02.033162 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:36:02.033175 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:36:02.033187 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:36:02.033200 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 01:36:02.033212 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 13 01:36:02.033230 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Dec 13 01:36:02.033240 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 13 01:36:02.033248 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 13 01:36:02.033255 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 13 01:36:02.033262 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 13 01:36:02.033269 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 13 01:36:02.033277 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 13 01:36:02.033287 kernel: No NUMA configuration found
Dec 13 01:36:02.033302 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Dec 13 01:36:02.033312 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Dec 13 01:36:02.033319 kernel: Zone ranges:
Dec 13 01:36:02.033327 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:36:02.033334 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Dec 13 01:36:02.033341 kernel: Normal empty
Dec 13 01:36:02.033348 kernel: Movable zone start for each node
Dec 13 01:36:02.033355 kernel: Early memory node ranges
Dec 13 01:36:02.033362 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 01:36:02.033370 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 13 01:36:02.033377 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 13 01:36:02.033387 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Dec 13 01:36:02.033394 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Dec 13 01:36:02.033401 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Dec 13 01:36:02.033410 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Dec 13 01:36:02.033418 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:36:02.033425 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 01:36:02.033432 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 13 01:36:02.033439 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:36:02.033446 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Dec 13 01:36:02.033459 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 13 01:36:02.033467 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Dec 13 01:36:02.033474 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:36:02.033481 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:36:02.033489 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:36:02.033496 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:36:02.033503 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:36:02.033511 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:36:02.033518 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:36:02.033528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:36:02.033535 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:36:02.033543 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:36:02.033550 kernel: TSC deadline timer available
Dec 13 01:36:02.033557 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:36:02.033564 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:36:02.033571 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:36:02.033579 kernel: kvm-guest: setup PV sched yield
Dec 13 01:36:02.033586 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 01:36:02.033598 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:36:02.033605 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:36:02.033613 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:36:02.033631 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 01:36:02.033644 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 01:36:02.033654 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:36:02.033664 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:36:02.033673 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:36:02.033688 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:36:02.033704 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:36:02.033711 kernel: random: crng init done
Dec 13 01:36:02.033718 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:36:02.033726 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:36:02.033733 kernel: Fallback order for Node 0: 0
Dec 13 01:36:02.033740 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Dec 13 01:36:02.033748 kernel: Policy zone: DMA32
Dec 13 01:36:02.033755 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:36:02.033765 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Dec 13 01:36:02.033773 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:36:02.033780 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:36:02.033787 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:36:02.033795 kernel: Dynamic Preempt: voluntary
Dec 13 01:36:02.033810 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:36:02.033822 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:36:02.033830 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:36:02.033837 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:36:02.033845 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:36:02.033853 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:36:02.033874 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:36:02.033887 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:36:02.033895 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:36:02.033906 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:36:02.033914 kernel: Console: colour dummy device 80x25
Dec 13 01:36:02.033921 kernel: printk: console [ttyS0] enabled
Dec 13 01:36:02.033932 kernel: ACPI: Core revision 20230628
Dec 13 01:36:02.033940 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:36:02.033948 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:36:02.033955 kernel: x2apic enabled
Dec 13 01:36:02.033969 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:36:02.033977 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 01:36:02.033994 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 01:36:02.034003 kernel: kvm-guest: setup PV IPIs
Dec 13 01:36:02.034011 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:36:02.034023 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:36:02.034031 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:36:02.034039 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:36:02.034046 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:36:02.034054 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:36:02.034062 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:36:02.034069 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:36:02.034077 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:36:02.034084 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:36:02.034097 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:36:02.034104 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:36:02.034115 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:36:02.034122 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:36:02.034130 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:36:02.034140 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:36:02.034148 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:36:02.034156 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:36:02.034166 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:36:02.034174 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:36:02.034182 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:36:02.034189 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:36:02.034197 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:36:02.034205 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:36:02.034213 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:36:02.034220 kernel: landlock: Up and running.
Dec 13 01:36:02.034228 kernel: SELinux: Initializing.
Dec 13 01:36:02.034238 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:36:02.034246 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:36:02.034254 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:36:02.034262 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:36:02.034270 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:36:02.034277 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:36:02.034285 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:36:02.034301 kernel: ... version: 0
Dec 13 01:36:02.034309 kernel: ... bit width: 48
Dec 13 01:36:02.034319 kernel: ... generic registers: 6
Dec 13 01:36:02.034326 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:36:02.034334 kernel: ... max period: 00007fffffffffff
Dec 13 01:36:02.034342 kernel: ... fixed-purpose events: 0
Dec 13 01:36:02.034349 kernel: ... event mask: 000000000000003f
Dec 13 01:36:02.034357 kernel: signal: max sigframe size: 1776
Dec 13 01:36:02.034364 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:36:02.034372 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:36:02.034380 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:36:02.034390 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:36:02.034397 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 01:36:02.034405 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:36:02.034413 kernel: smpboot: Max logical packages: 1
Dec 13 01:36:02.034420 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:36:02.034428 kernel: devtmpfs: initialized
Dec 13 01:36:02.034435 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:36:02.034443 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 13 01:36:02.034451 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 13 01:36:02.034461 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Dec 13 01:36:02.034469 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 13 01:36:02.034477 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 13 01:36:02.034484 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:36:02.034492 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:36:02.034500 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:36:02.034511 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:36:02.034522 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:36:02.034532 kernel: audit: type=2000 audit(1734053761.179:1): state=initialized audit_enabled=0 res=1
Dec 13 01:36:02.034549 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:36:02.034556 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:36:02.034572 kernel: cpuidle: using governor menu
Dec 13 01:36:02.034580 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:36:02.034587 kernel: dca service started, version 1.12.1
Dec 13 01:36:02.034595 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:36:02.034603 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 01:36:02.034610 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:36:02.034618 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:36:02.034628 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:36:02.034636 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:36:02.034644 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:36:02.034651 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:36:02.034659 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:36:02.034666 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:36:02.034674 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:36:02.034682 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:36:02.034689 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:36:02.034700 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:36:02.034707 kernel: ACPI: Interpreter enabled
Dec 13 01:36:02.034715 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:36:02.034722 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:36:02.034731 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:36:02.034742 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:36:02.034753 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:36:02.034763 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:36:02.035072 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:36:02.035237 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:36:02.035382 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:36:02.035393 kernel: PCI host bridge to bus 0000:00
Dec 13 01:36:02.035537 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:36:02.035657 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:36:02.035775 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:36:02.035919 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:36:02.036041 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:36:02.036157 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Dec 13 01:36:02.036276 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:36:02.036479 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:36:02.036818 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:36:02.036988 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Dec 13 01:36:02.037151 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Dec 13 01:36:02.037312 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Dec 13 01:36:02.037465 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Dec 13 01:36:02.037627 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:36:02.037803 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:36:02.037985 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Dec 13 01:36:02.038141 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Dec 13 01:36:02.038300 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Dec 13 01:36:02.038475 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:36:02.038615 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Dec 13 01:36:02.038780 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Dec 13 01:36:02.038968 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Dec 13 01:36:02.039150 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:36:02.039301 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Dec 13 01:36:02.039431 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Dec 13 01:36:02.039557 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Dec 13 01:36:02.039685 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Dec 13 01:36:02.039830 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:36:02.039998 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:36:02.040145 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:36:02.040278 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Dec 13 01:36:02.040430 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Dec 13 01:36:02.040579 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:36:02.040707 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Dec 13 01:36:02.040718 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:36:02.040726 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:36:02.040734 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:36:02.040746 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:36:02.040754 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:36:02.040762 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:36:02.040770 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:36:02.040778 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:36:02.040785 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:36:02.040793 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:36:02.040801 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:36:02.040808 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:36:02.040819 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:36:02.040826 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:36:02.040834 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:36:02.040843 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:36:02.040852 kernel: iommu: Default domain type: Translated
Dec 13 01:36:02.040914 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:36:02.040924 kernel: efivars: Registered efivars operations
Dec 13 01:36:02.040932 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:36:02.040939 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:36:02.040951 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 13 01:36:02.040959 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Dec 13 01:36:02.040966 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Dec 13 01:36:02.040974 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Dec 13 01:36:02.041104 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:36:02.041236 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:36:02.041391 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:36:02.041418 kernel: vgaarb: loaded
Dec 13 01:36:02.041436 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:36:02.041449 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:36:02.041457 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:36:02.041465 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:36:02.041473 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:36:02.041481 kernel: pnp: PnP ACPI init
Dec 13 01:36:02.041639 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:36:02.041652 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:36:02.041660 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:36:02.041672 kernel: NET: Registered PF_INET protocol family
Dec 13 01:36:02.041681 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:36:02.041689 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:36:02.041697 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:36:02.041705 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:36:02.041713 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:36:02.041721 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:36:02.041729 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:36:02.041737 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:36:02.041747 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:36:02.041755 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:36:02.041968 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Dec 13 01:36:02.042133 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Dec 13 01:36:02.042276 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:36:02.042427 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:36:02.042569 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:36:02.042692 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:36:02.042841 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:36:02.043005 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Dec 13 01:36:02.043018 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:36:02.043026 kernel: Initialise system trusted keyrings
Dec 13 01:36:02.043035 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:36:02.043043 kernel: Key type asymmetric registered
Dec 13 01:36:02.043050 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:36:02.043058 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:36:02.043066 kernel: io scheduler mq-deadline registered
Dec 13 01:36:02.043080 kernel: io scheduler kyber registered
Dec 13 01:36:02.043088 kernel: io scheduler bfq registered
Dec 13 01:36:02.043096 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:36:02.043104 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:36:02.043112 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:36:02.043120 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:36:02.043129 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:36:02.043138 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:36:02.043146 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:36:02.043157 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:36:02.043165 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:36:02.043317 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:36:02.043442 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:36:02.043562 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:36:01 UTC (1734053761)
Dec 13 01:36:02.043681 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:36:02.043691 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 13 01:36:02.043700 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:36:02.043712 kernel: efifb: probing for efifb
Dec 13 01:36:02.043720 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Dec 13 01:36:02.043731 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Dec 13 01:36:02.043738 kernel: efifb: scrolling: redraw
Dec 13 01:36:02.043746 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Dec 13 01:36:02.043754 kernel: Console: switching to colour frame buffer device 100x37
Dec 13 01:36:02.043780 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:36:02.043790 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:36:02.043799 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 01:36:02.043809 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:36:02.043817 kernel: Segment Routing with IPv6
Dec 13 01:36:02.043825 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:36:02.043833 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:36:02.043841 kernel: Key type dns_resolver registered
Dec 13 01:36:02.043849 kernel: IPI shorthand broadcast: enabled
Dec 13 01:36:02.043857 kernel: sched_clock: Marking stable (1128002482, 177290861)->(1439031207, -133737864)
Dec 13 01:36:02.043879 kernel: registered taskstats version 1
Dec 13 01:36:02.043887 kernel: Loading compiled-in X.509 certificates
Dec 13 01:36:02.043898 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:36:02.043906 kernel: Key type .fscrypt registered
Dec 13 01:36:02.043914 kernel: Key type fscrypt-provisioning registered
Dec 13 01:36:02.043922 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:36:02.043930 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:36:02.043938 kernel: ima: No architecture policies found
Dec 13 01:36:02.043946 kernel: clk: Disabling unused clocks
Dec 13 01:36:02.043954 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:36:02.043965 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:36:02.043974 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:36:02.043983 kernel: Run /init as init process
Dec 13 01:36:02.043990 kernel: with arguments:
Dec 13 01:36:02.043999 kernel: /init
Dec 13 01:36:02.044006 kernel: with environment:
Dec 13 01:36:02.044015 kernel: HOME=/
Dec 13 01:36:02.044022 kernel: TERM=linux
Dec 13 01:36:02.044032 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:36:02.044067 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:36:02.044091 systemd[1]: Detected virtualization kvm.
Dec 13 01:36:02.044106 systemd[1]: Detected architecture x86-64.
Dec 13 01:36:02.044116 systemd[1]: Running in initrd.
Dec 13 01:36:02.044135 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:36:02.044146 systemd[1]: Hostname set to .
Dec 13 01:36:02.044158 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:36:02.044170 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:36:02.044181 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:36:02.044191 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:36:02.044201 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:36:02.044212 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:36:02.044225 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:36:02.044233 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:36:02.044244 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:36:02.044252 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:36:02.044261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:36:02.044270 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:36:02.044279 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:36:02.044298 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:36:02.044306 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:36:02.044315 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:36:02.044323 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:36:02.044332 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:36:02.044341 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:36:02.044349 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:36:02.044358 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:36:02.044366 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:36:02.044377 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:36:02.044386 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:36:02.044394 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:36:02.044403 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:36:02.044411 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:36:02.044419 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:36:02.044428 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:36:02.044436 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:36:02.044447 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:36:02.044456 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:36:02.044464 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:36:02.044473 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:36:02.044507 systemd-journald[193]: Collecting audit messages is disabled. Dec 13 01:36:02.044531 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:36:02.044540 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:36:02.044559 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:36:02.044568 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:36:02.044581 systemd-journald[193]: Journal started Dec 13 01:36:02.044600 systemd-journald[193]: Runtime Journal (/run/log/journal/bd4cdd3ed2f44e38a0dff6dc685abac3) is 6.0M, max 48.3M, 42.2M free. Dec 13 01:36:02.035002 systemd-modules-load[194]: Inserted module 'overlay' Dec 13 01:36:02.056403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 13 01:36:02.056433 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:36:02.062036 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:36:02.068006 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:36:02.080894 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:36:02.082106 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:36:02.085904 kernel: Bridge firewalling registered Dec 13 01:36:02.085923 systemd-modules-load[194]: Inserted module 'br_netfilter' Dec 13 01:36:02.086606 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:36:02.090701 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:36:02.094050 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:36:02.095535 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:36:02.102755 dracut-cmdline[221]: dracut-dracut-053 Dec 13 01:36:02.105710 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:36:02.109225 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:36:02.126543 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:36:02.170887 systemd-resolved[246]: Positive Trust Anchors: Dec 13 01:36:02.170916 systemd-resolved[246]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:36:02.170947 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:36:02.173636 systemd-resolved[246]: Defaulting to hostname 'linux'. Dec 13 01:36:02.174981 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:36:02.181215 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:36:02.207926 kernel: SCSI subsystem initialized Dec 13 01:36:02.216898 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:36:02.227928 kernel: iscsi: registered transport (tcp) Dec 13 01:36:02.249896 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:36:02.249968 kernel: QLogic iSCSI HBA Driver Dec 13 01:36:02.302390 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:36:02.324089 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:36:02.367201 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 01:36:02.367309 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:36:02.368331 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:36:02.417930 kernel: raid6: avx2x4 gen() 20181 MB/s Dec 13 01:36:02.434968 kernel: raid6: avx2x2 gen() 19659 MB/s Dec 13 01:36:02.452294 kernel: raid6: avx2x1 gen() 16532 MB/s Dec 13 01:36:02.452382 kernel: raid6: using algorithm avx2x4 gen() 20181 MB/s Dec 13 01:36:02.470377 kernel: raid6: .... xor() 5238 MB/s, rmw enabled Dec 13 01:36:02.470490 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:36:02.496918 kernel: xor: automatically using best checksumming function avx Dec 13 01:36:02.673900 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:36:02.686556 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:36:02.697137 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:36:02.715639 systemd-udevd[412]: Using default interface naming scheme 'v255'. Dec 13 01:36:02.722554 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:36:02.732084 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:36:02.753022 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Dec 13 01:36:02.800777 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:36:02.813178 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:36:02.894228 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:36:02.906160 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:36:02.924547 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:36:02.925796 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 13 01:36:02.930131 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:36:02.932387 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:36:02.941117 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:36:02.949905 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 13 01:36:02.979296 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:36:02.979543 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:36:02.979556 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:36:02.979568 kernel: GPT:9289727 != 19775487 Dec 13 01:36:02.979578 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:36:02.979613 kernel: GPT:9289727 != 19775487 Dec 13 01:36:02.979627 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:36:02.979637 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:36:02.979648 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:36:02.979660 kernel: AES CTR mode by8 optimization enabled Dec 13 01:36:02.953076 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:36:02.988982 kernel: libata version 3.00 loaded. Dec 13 01:36:02.994034 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:36:02.995593 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:36:03.004556 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:36:03.042409 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:36:03.042435 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:36:03.042641 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:36:03.042842 kernel: scsi host0: ahci Dec 13 01:36:03.043416 kernel: scsi host1: ahci Dec 13 01:36:03.043637 kernel: scsi host2: ahci Dec 13 01:36:03.043840 kernel: scsi host3: ahci Dec 13 01:36:03.044092 kernel: scsi host4: ahci Dec 13 01:36:03.044307 kernel: scsi host5: ahci Dec 13 01:36:03.044526 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Dec 13 01:36:03.044544 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Dec 13 01:36:03.044558 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Dec 13 01:36:03.044573 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Dec 13 01:36:03.044588 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (458) Dec 13 01:36:03.044603 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Dec 13 01:36:03.044617 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Dec 13 01:36:03.044632 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (471) Dec 13 01:36:03.000682 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:36:03.004445 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:36:03.004730 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:36:03.006525 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:36:03.019234 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 01:36:03.043484 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 01:36:03.047902 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:36:03.063224 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 01:36:03.074029 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:36:03.079487 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 01:36:03.080961 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 01:36:03.100196 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:36:03.101917 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:36:03.102056 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:36:03.105563 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:36:03.108347 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:36:03.115577 disk-uuid[553]: Primary Header is updated. Dec 13 01:36:03.115577 disk-uuid[553]: Secondary Entries is updated. Dec 13 01:36:03.115577 disk-uuid[553]: Secondary Header is updated. Dec 13 01:36:03.120906 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:36:03.126900 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:36:03.138605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:36:03.148172 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:36:03.189532 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:36:03.343396 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:36:03.343486 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:36:03.343498 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:36:03.343509 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 01:36:03.344900 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:36:03.345918 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:36:03.345954 kernel: ata3.00: applying bridge limits Dec 13 01:36:03.346900 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:36:03.347935 kernel: ata3.00: configured for UDMA/100 Dec 13 01:36:03.349910 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:36:03.400940 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:36:03.414185 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:36:03.414211 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:36:04.128909 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:36:04.129027 disk-uuid[555]: The operation has completed successfully. Dec 13 01:36:04.159351 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:36:04.159527 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:36:04.187271 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:36:04.191789 sh[596]: Success Dec 13 01:36:04.206899 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:36:04.247174 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:36:04.259124 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:36:04.263440 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 01:36:04.283587 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:36:04.283646 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:36:04.283657 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:36:04.284612 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:36:04.285359 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:36:04.290527 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:36:04.291755 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:36:04.304076 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:36:04.307052 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:36:04.321932 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:36:04.322010 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:36:04.322025 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:36:04.325949 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:36:04.339096 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:36:04.341030 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:36:04.354987 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:36:04.362109 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 13 01:36:04.417277 ignition[698]: Ignition 2.19.0 Dec 13 01:36:04.417289 ignition[698]: Stage: fetch-offline Dec 13 01:36:04.417355 ignition[698]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:36:04.417366 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:36:04.417471 ignition[698]: parsed url from cmdline: "" Dec 13 01:36:04.417475 ignition[698]: no config URL provided Dec 13 01:36:04.417481 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:36:04.417491 ignition[698]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:36:04.417518 ignition[698]: op(1): [started] loading QEMU firmware config module Dec 13 01:36:04.417524 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:36:04.428382 ignition[698]: op(1): [finished] loading QEMU firmware config module Dec 13 01:36:04.443514 ignition[698]: parsing config with SHA512: 90e93e3a3e37453ed63b27066269f8b0e141b040ac05fade353208a7ba3bfc861358e94b765d36b2e5baa2048f4f58f7359b89bd35677698d54557588e26dc03 Dec 13 01:36:04.450491 unknown[698]: fetched base config from "system" Dec 13 01:36:04.450521 unknown[698]: fetched user config from "qemu" Dec 13 01:36:04.452396 ignition[698]: fetch-offline: fetch-offline passed Dec 13 01:36:04.452536 ignition[698]: Ignition finished successfully Dec 13 01:36:04.455142 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:36:04.457510 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:36:04.475119 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:36:04.498486 systemd-networkd[786]: lo: Link UP Dec 13 01:36:04.498498 systemd-networkd[786]: lo: Gained carrier Dec 13 01:36:04.500299 systemd-networkd[786]: Enumeration completed Dec 13 01:36:04.500445 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 13 01:36:04.500758 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:36:04.500764 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:36:04.501853 systemd-networkd[786]: eth0: Link UP Dec 13 01:36:04.501900 systemd-networkd[786]: eth0: Gained carrier Dec 13 01:36:04.501948 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:36:04.502729 systemd[1]: Reached target network.target - Network. Dec 13 01:36:04.505101 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:36:04.516158 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:36:04.529468 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:36:04.533796 ignition[789]: Ignition 2.19.0 Dec 13 01:36:04.533808 ignition[789]: Stage: kargs Dec 13 01:36:04.534014 ignition[789]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:36:04.534027 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:36:04.538034 ignition[789]: kargs: kargs passed Dec 13 01:36:04.538083 ignition[789]: Ignition finished successfully Dec 13 01:36:04.542481 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:36:04.553025 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 13 01:36:04.568908 ignition[798]: Ignition 2.19.0 Dec 13 01:36:04.568925 ignition[798]: Stage: disks Dec 13 01:36:04.569143 ignition[798]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:36:04.569156 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:36:04.585689 ignition[798]: disks: disks passed Dec 13 01:36:04.585801 ignition[798]: Ignition finished successfully Dec 13 01:36:04.589590 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:36:04.591799 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:36:04.592257 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:36:04.592581 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:36:04.593083 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:36:04.593421 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:36:04.611286 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:36:04.624330 systemd-resolved[246]: Detected conflict on linux IN A 10.0.0.115 Dec 13 01:36:04.624352 systemd-resolved[246]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Dec 13 01:36:04.629329 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:36:04.637517 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:36:04.645075 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:36:04.792904 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:36:04.794075 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:36:04.795451 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:36:04.805032 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 13 01:36:04.807197 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:36:04.809930 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:36:04.809994 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:36:04.816527 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815) Dec 13 01:36:04.810027 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:36:04.820383 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:36:04.820405 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:36:04.820419 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:36:04.825996 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:36:04.828636 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:36:04.830519 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:36:04.833261 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:36:04.877396 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:36:04.883514 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:36:04.888654 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:36:04.894810 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:36:04.998022 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:36:05.009039 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:36:05.021220 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Dec 13 01:36:05.034946 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:36:05.048896 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:36:05.130051 ignition[932]: INFO : Ignition 2.19.0 Dec 13 01:36:05.130051 ignition[932]: INFO : Stage: mount Dec 13 01:36:05.138332 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:36:05.138332 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:36:05.138332 ignition[932]: INFO : mount: mount passed Dec 13 01:36:05.138332 ignition[932]: INFO : Ignition finished successfully Dec 13 01:36:05.144109 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:36:05.155950 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:36:05.283480 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:36:05.303187 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:36:05.361897 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941) Dec 13 01:36:05.361944 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:36:05.363924 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:36:05.363958 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:36:05.366889 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:36:05.368807 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:36:05.400295 ignition[958]: INFO : Ignition 2.19.0
Dec 13 01:36:05.400295 ignition[958]: INFO : Stage: files
Dec 13 01:36:05.422625 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:36:05.422625 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:36:05.422625 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:36:05.422625 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:36:05.422625 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:36:05.429705 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:36:05.431474 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:36:05.433047 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:36:05.432105 unknown[958]: wrote ssh authorized keys file for user: core
Dec 13 01:36:05.436394 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:36:05.439235 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:36:05.499748 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:36:05.921533 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:36:05.921533 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:36:05.926316 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:36:05.926316 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:36:05.926316 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:36:05.926316 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:36:05.926316 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:36:05.926316 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:36:05.926316 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:36:05.926316 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:36:05.926316 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:36:05.926316 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:36:05.926316 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:36:05.926316 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:36:05.926316 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 01:36:06.175265 systemd-networkd[786]: eth0: Gained IPv6LL
Dec 13 01:36:06.384413 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:36:07.125519 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:36:07.125519 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 01:36:07.160678 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:36:07.160678 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:36:07.160678 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:36:07.160678 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 13 01:36:07.160678 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:36:07.160678 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:36:07.160678 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 01:36:07.160678 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:36:07.256299 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:36:07.263986 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:36:07.314648 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:36:07.314648 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:36:07.314648 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:36:07.314648 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:36:07.314648 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:36:07.314648 ignition[958]: INFO : files: files passed
Dec 13 01:36:07.314648 ignition[958]: INFO : Ignition finished successfully
Dec 13 01:36:07.267764 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:36:07.328236 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:36:07.330653 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:36:07.333568 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:36:07.333701 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:36:07.343503 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 01:36:07.346430 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:36:07.346430 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:36:07.349708 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:36:07.349941 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
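The Ignition `files` stage above logs each operation as a paired `[started]`/`[finished]` event carrying a hexadecimal op ID (`op(3)`, `op(a)`, `op(10)`, …). A minimal sketch, not Ignition's own code, that pairs those events from journal text and flags ops that never finished (the sample lines and paths below are shortened for illustration):

```python
import re

# Matches the innermost "op(<hex-id>): [started|finished] <description>"
# in an Ignition journal line.
OP_RE = re.compile(r'op\(([0-9a-f]+)\): \[(started|finished)\] (.+)')

def pair_ops(lines):
    """Return {op_id: description} for ops that both started and finished."""
    started, finished = {}, {}
    for line in lines:
        m = OP_RE.search(line)
        if not m:
            continue
        op_id, state, desc = m.groups()
        (started if state == "started" else finished)[op_id] = desc
    return {op: started[op] for op in started if op in finished}

log = [
    'ignition[958]: INFO : files: createFiles: op(3): [started] writing file "/sysroot/opt/helm.tar.gz"',
    'ignition[958]: INFO : files: createFiles: op(3): [finished] writing file "/sysroot/opt/helm.tar.gz"',
    'ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"',
]
print(pair_ops(log))  # op(3) completed; op(b) is still open
```

In the log above every op closes before `files: files passed`, which is why the stage ends with `Ignition finished successfully`.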
Dec 13 01:36:07.352753 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:36:07.362028 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:36:07.448577 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:36:07.448795 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:36:07.520239 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:36:07.521629 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:36:07.523858 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:36:07.540307 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:36:07.584386 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:36:07.588758 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:36:07.607690 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:36:07.630912 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:36:07.632751 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:36:07.634675 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:36:07.634889 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:36:07.637446 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:36:07.639258 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:36:07.641223 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:36:07.643596 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:36:07.645852 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:36:07.648080 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:36:07.650287 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:36:07.652852 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:36:07.661983 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:36:07.663859 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:36:07.665926 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:36:07.666117 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:36:07.668257 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:36:07.670103 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:36:07.672044 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:36:07.672235 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:36:07.674204 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:36:07.674346 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:36:07.730492 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:36:07.730647 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:36:07.732160 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:36:07.734185 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:36:07.734321 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:36:07.786257 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:36:07.788536 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:36:07.790535 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:36:07.790649 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:36:07.792602 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:36:07.792696 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:36:07.794704 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:36:07.794836 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:36:07.797104 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:36:07.797226 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:36:07.853233 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:36:07.856768 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:36:07.857980 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:36:07.858165 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:36:07.860520 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:36:07.860695 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:36:07.868077 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:36:07.868231 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:36:07.875163 ignition[1013]: INFO : Ignition 2.19.0
Dec 13 01:36:07.875163 ignition[1013]: INFO : Stage: umount
Dec 13 01:36:07.875163 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:36:07.875163 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:36:07.881250 ignition[1013]: INFO : umount: umount passed
Dec 13 01:36:07.881250 ignition[1013]: INFO : Ignition finished successfully
Dec 13 01:36:07.878541 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:36:07.878675 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:36:07.881494 systemd[1]: Stopped target network.target - Network.
Dec 13 01:36:07.883348 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:36:07.883412 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:36:07.885996 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:36:07.886053 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:36:07.888305 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:36:07.888361 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:36:07.888849 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:36:07.888958 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:36:07.889373 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:36:07.889837 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:36:07.891354 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:36:07.897741 systemd-networkd[786]: eth0: DHCPv6 lease lost
Dec 13 01:36:07.899877 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:36:07.900107 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:36:07.904575 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:36:07.904793 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:36:07.907638 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:36:07.907734 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:36:07.918355 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:36:07.920275 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:36:07.920369 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:36:07.921358 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:36:07.921448 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:36:07.921817 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:36:07.921877 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:36:07.922158 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:36:07.922205 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:36:07.922606 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:36:07.951686 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:36:07.951963 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:36:07.953770 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:36:07.953976 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:36:07.957257 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:36:07.957348 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:36:07.968832 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:36:07.968896 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:36:07.970147 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:36:07.970202 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:36:07.971047 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:36:07.971098 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:36:07.971836 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:36:07.971899 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:36:07.984027 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:36:07.985363 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:36:07.985421 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:36:08.021252 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:36:08.021325 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:36:08.023546 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:36:08.023613 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:36:08.024156 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:36:08.024216 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:36:08.025103 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:36:08.025251 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:36:08.241948 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:36:08.242161 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:36:08.244690 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:36:08.246782 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:36:08.247007 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:36:08.258188 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:36:08.269271 systemd[1]: Switching root.
Dec 13 01:36:08.305640 systemd-journald[193]: Journal stopped
Dec 13 01:36:09.837004 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:36:09.837098 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:36:09.837113 kernel: SELinux: policy capability open_perms=1
Dec 13 01:36:09.837125 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:36:09.837136 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:36:09.837147 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:36:09.837165 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:36:09.837177 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:36:09.837188 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:36:09.837208 kernel: audit: type=1403 audit(1734053768.846:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:36:09.837221 systemd[1]: Successfully loaded SELinux policy in 56.256ms.
Dec 13 01:36:09.837242 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.132ms.
Dec 13 01:36:09.837258 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:36:09.837278 systemd[1]: Detected virtualization kvm.
Dec 13 01:36:09.837291 systemd[1]: Detected architecture x86-64.
Dec 13 01:36:09.837303 systemd[1]: Detected first boot.
Dec 13 01:36:09.837315 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:36:09.837328 zram_generator::config[1057]: No configuration found.
Dec 13 01:36:09.837345 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:36:09.837357 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:36:09.837369 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
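The `systemd 255 running in system mode (...)` line above encodes compile-time features as `+`/`-` prefixed flags, plus `key=value` options like `default-hierarchy=unified`. A small sketch (not a systemd API, just string parsing) that splits such a string into enabled and disabled feature sets, shown on a shortened sample:

```python
def parse_features(feature_str):
    """Split a systemd feature string into (enabled, disabled) sets.
    Tokens without a +/- prefix (e.g. key=value options) are ignored."""
    enabled, disabled = set(), set()
    for token in feature_str.split():
        if token.startswith('+'):
            enabled.add(token[1:])
        elif token.startswith('-'):
            disabled.add(token[1:])
    return enabled, disabled

features = "+PAM +AUDIT +SELINUX -APPARMOR +SECCOMP -SYSVINIT default-hierarchy=unified"
enabled, disabled = parse_features(features)
print(sorted(disabled))  # ['APPARMOR', 'SYSVINIT']
```

Read against the full log line, this confirms, for example, that this build enforces SELinux (`+SELINUX`) but was compiled without AppArmor support (`-APPARMOR`).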
Dec 13 01:36:09.837381 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:36:09.837401 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:36:09.837413 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:36:09.837425 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:36:09.837443 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:36:09.837458 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:36:09.837470 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:36:09.837483 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:36:09.837495 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:36:09.837507 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:36:09.837519 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:36:09.837531 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:36:09.837544 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:36:09.837562 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:36:09.837578 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:36:09.837589 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:36:09.837601 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:36:09.837613 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
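Unit names above such as `system-addon\x2dconfig.slice` and `dev-disk-by\x2dlabel-OEM.device` use systemd's unit-name escaping, where `-` inside a name component is rendered as `\x2d` (because `-` doubles as the path separator in unit names). A rough sketch of that escaping rule for a single component; this is an approximation for illustration, not systemd's full `systemd-escape` algorithm (which has extra rules, e.g. for a leading `.`):

```python
def escape_component(name):
    """Roughly escape one unit-name component the way systemd does:
    '-' and any character outside [A-Za-z0-9:_.] become \\xNN."""
    out = []
    for ch in name:
        if ch.isalnum() or ch in ':_.':
            out.append(ch)
        else:
            out.append('\\x%02x' % ord(ch))
    return ''.join(out)

print(escape_component('addon-config'))  # addon\x2dconfig
print(escape_component('serial-getty'))  # serial\x2dgetty
```

This is why `/system/addon-config` shows up in the log as the slice `system-addon\x2dconfig.slice`.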
Dec 13 01:36:09.837625 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:36:09.837637 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:36:09.837649 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:36:09.837664 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:36:09.837676 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:36:09.837689 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:36:09.837701 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:36:09.837713 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:36:09.837725 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:36:09.837737 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:36:09.837749 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:36:09.837762 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:36:09.837773 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:36:09.837788 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:36:09.837800 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:36:09.837812 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:36:09.837825 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:36:09.837838 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:36:09.837849 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:36:09.837875 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:36:09.837888 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:36:09.837904 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:36:09.837916 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:36:09.837928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:36:09.837941 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:36:09.837954 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:36:09.837966 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:36:09.837978 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:36:09.837990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:36:09.838002 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:36:09.838017 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:36:09.838029 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:36:09.838041 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:36:09.838053 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:36:09.838073 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:36:09.838085 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:36:09.838097 systemd[1]: Starting systemd-journald.service - Journal Service...
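The `modprobe@configfs.service`, `modprobe@dm_mod.service`, … starts above are all instances of one template unit, `modprobe@.service`: the text between `@` and the type suffix is the instance name, available to the template as a specifier. A small sketch (plain string handling, not a systemd API) of how such names decompose:

```python
def split_template(unit):
    """Split a systemd unit name into (template, instance).
    'modprobe@dm_mod.service' -> ('modprobe@.service', 'dm_mod');
    non-instantiated units return (unit, None)."""
    name, _, suffix = unit.rpartition('.')
    if '@' not in name:
        return unit, None
    prefix, _, instance = name.partition('@')
    return f"{prefix}@.{suffix}", instance

print(split_template("modprobe@dm_mod.service"))  # ('modprobe@.service', 'dm_mod')
```

So one template unit file on disk covers every `modprobe@<module>.service` instance seen in this boot.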
Dec 13 01:36:09.838110 kernel: fuse: init (API version 7.39)
Dec 13 01:36:09.838124 kernel: loop: module loaded
Dec 13 01:36:09.838136 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:36:09.838149 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:36:09.838163 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:36:09.838178 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:36:09.838194 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:36:09.838210 systemd[1]: Stopped verity-setup.service.
Dec 13 01:36:09.838249 systemd-journald[1127]: Collecting audit messages is disabled.
Dec 13 01:36:09.838284 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:36:09.838301 kernel: ACPI: bus type drm_connector registered
Dec 13 01:36:09.838315 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:36:09.838328 systemd-journald[1127]: Journal started
Dec 13 01:36:09.838353 systemd-journald[1127]: Runtime Journal (/run/log/journal/bd4cdd3ed2f44e38a0dff6dc685abac3) is 6.0M, max 48.3M, 42.2M free.
Dec 13 01:36:09.516165 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:36:09.536100 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 01:36:09.536644 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:36:09.537090 systemd[1]: systemd-journald.service: Consumed 1.364s CPU time.
Dec 13 01:36:09.841892 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:36:09.843593 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:36:09.845137 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:36:09.846304 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:36:09.847526 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:36:09.848962 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:36:09.850399 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:36:09.852147 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:36:09.853966 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:36:09.854177 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:36:09.855960 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:36:09.856216 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:36:09.858099 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:36:09.858385 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:36:09.859992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:36:09.860228 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:36:09.862028 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:36:09.862290 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:36:09.863792 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:36:09.863995 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:36:09.865443 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:36:09.866936 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:36:09.868622 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:36:09.891561 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:36:09.904119 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:36:09.907529 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:36:09.908854 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:36:09.908906 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:36:09.911374 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:36:09.914290 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:36:09.918998 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:36:09.920979 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:36:09.923392 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:36:09.926422 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:36:09.927976 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:36:09.930311 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:36:09.931579 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:36:09.937101 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:36:09.941052 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:36:10.032758 kernel: loop0: detected capacity change from 0 to 142488
Dec 13 01:36:10.035145 systemd-journald[1127]: Time spent on flushing to /var/log/journal/bd4cdd3ed2f44e38a0dff6dc685abac3 is 28.504ms for 1000 entries.
Dec 13 01:36:10.035145 systemd-journald[1127]: System Journal (/var/log/journal/bd4cdd3ed2f44e38a0dff6dc685abac3) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:36:10.078566 systemd-journald[1127]: Received client request to flush runtime journal.
Dec 13 01:36:10.078633 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:36:09.948288 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:36:09.952364 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:36:09.954005 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:36:09.956312 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:36:10.029810 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:36:10.032423 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:36:10.041644 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:36:10.054255 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:36:10.063091 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:36:10.065013 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:36:10.081918 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:36:10.090093 kernel: loop1: detected capacity change from 0 to 140768
Dec 13 01:36:10.177231 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 01:36:10.227174 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Dec 13 01:36:10.227196 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Dec 13 01:36:10.237644 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:36:10.245112 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:36:10.249157 kernel: loop2: detected capacity change from 0 to 205544
Dec 13 01:36:10.289344 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:36:10.334914 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:36:10.340896 kernel: loop3: detected capacity change from 0 to 142488
Dec 13 01:36:10.345813 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:36:10.347391 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:36:10.359856 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Dec 13 01:36:10.359905 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Dec 13 01:36:10.365713 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:36:10.367179 kernel: loop4: detected capacity change from 0 to 140768
Dec 13 01:36:10.451894 kernel: loop5: detected capacity change from 0 to 205544
Dec 13 01:36:10.459969 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 13 01:36:10.460653 (sd-merge)[1195]: Merged extensions into '/usr'.
Dec 13 01:36:10.468466 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:36:10.468488 systemd[1]: Reloading...
Dec 13 01:36:10.585909 zram_generator::config[1224]: No configuration found.
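The journald flush report above gives only aggregate figures (28.504ms for 1000 entries); the per-entry cost follows directly:

```python
# Figures taken from the journald log line above.
flush_ms = 28.504   # total time spent flushing to /var/log/journal
entries = 1000      # number of entries flushed

per_entry_us = flush_ms * 1000 / entries  # convert ms -> µs, divide by count
print(f"~{per_entry_us:.1f} µs per entry")
```

Roughly 28.5 µs per entry, i.e. flushing the runtime journal to persistent storage is a negligible fraction of this boot.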
Dec 13 01:36:10.868997 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:36:10.926336 systemd[1]: Reloading finished in 457 ms. Dec 13 01:36:10.968817 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:36:10.992721 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:36:11.010206 systemd[1]: Starting ensure-sysext.service... Dec 13 01:36:11.018509 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:36:11.025342 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:36:11.036305 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:36:11.036319 systemd[1]: Reloading... Dec 13 01:36:11.069898 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:36:11.070566 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:36:11.072156 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:36:11.072616 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Dec 13 01:36:11.072738 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Dec 13 01:36:11.089075 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:36:11.089100 systemd-tmpfiles[1261]: Skipping /boot Dec 13 01:36:11.108615 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 13 01:36:11.108641 systemd-tmpfiles[1261]: Skipping /boot Dec 13 01:36:11.110896 zram_generator::config[1290]: No configuration found. Dec 13 01:36:11.308241 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:36:11.358140 systemd[1]: Reloading finished in 321 ms. Dec 13 01:36:11.377641 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:36:11.428731 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:36:11.431682 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:36:11.434108 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:36:11.440043 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:36:11.444104 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:36:11.457946 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:36:11.460694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:36:11.460890 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:36:11.465169 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:36:11.468120 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:36:11.475038 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:36:11.501228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 13 01:36:11.501476 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:36:11.503028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:36:11.503297 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:36:11.505276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:36:11.505507 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:36:11.507447 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:36:11.507666 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:36:11.514480 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:36:11.514705 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:36:11.517424 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:36:11.520190 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:36:11.520519 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:36:11.540402 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:36:11.554929 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:36:11.567806 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:36:11.579162 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 13 01:36:11.579603 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:36:11.581317 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:36:11.582979 augenrules[1363]: No rules Dec 13 01:36:11.583982 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:36:11.586176 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:36:11.588508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:36:11.588751 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:36:11.590905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:36:11.591154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:36:11.610329 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:36:11.610587 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:36:11.621916 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:36:11.622229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:36:11.637246 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:36:11.639888 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:36:11.657688 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:36:11.667151 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:36:11.668941 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 13 01:36:11.669098 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:36:11.670165 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:36:11.670953 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:36:11.673466 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:36:11.673720 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:36:11.675662 systemd-resolved[1331]: Positive Trust Anchors: Dec 13 01:36:11.675687 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:36:11.675721 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:36:11.676086 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:36:11.676296 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:36:11.678375 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:36:11.678624 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:36:11.679632 systemd-resolved[1331]: Defaulting to hostname 'linux'. Dec 13 01:36:11.682258 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:36:11.684320 systemd[1]: Finished ensure-sysext.service. 
Dec 13 01:36:11.707471 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:36:11.709295 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:36:11.709364 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:36:11.717373 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:36:11.719325 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:36:11.722482 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:36:11.742352 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:36:11.751280 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:36:11.755028 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:36:11.773877 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:36:11.784322 systemd-udevd[1386]: Using default interface naming scheme 'v255'. Dec 13 01:36:11.804502 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:36:11.820294 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:36:11.827766 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:36:11.839928 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:36:11.880851 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Dec 13 01:36:11.885895 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1404) Dec 13 01:36:11.937960 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1404) Dec 13 01:36:12.012907 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:36:12.016922 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 13 01:36:12.072674 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1400) Dec 13 01:36:12.072804 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:36:12.077792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:36:12.094245 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 01:36:12.094745 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:36:12.094960 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:36:12.095227 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:36:12.100430 systemd-networkd[1409]: lo: Link UP Dec 13 01:36:12.100465 systemd-networkd[1409]: lo: Gained carrier Dec 13 01:36:12.103822 systemd-networkd[1409]: Enumeration completed Dec 13 01:36:12.104292 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:36:12.104297 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:36:12.107964 systemd-networkd[1409]: eth0: Link UP Dec 13 01:36:12.107969 systemd-networkd[1409]: eth0: Gained carrier Dec 13 01:36:12.107989 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:36:12.184226 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 13 01:36:12.185926 systemd[1]: Reached target network.target - Network. Dec 13 01:36:12.197056 systemd-networkd[1409]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:36:12.197803 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:36:12.198250 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Dec 13 01:36:13.081586 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:36:13.081644 systemd-timesyncd[1383]: Initial clock synchronization to Fri 2024-12-13 01:36:13.081289 UTC. Dec 13 01:36:13.081881 systemd-resolved[1331]: Clock change detected. Flushing caches. Dec 13 01:36:13.102235 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:36:13.114008 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:36:13.117882 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:36:13.131792 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:36:13.159920 kernel: kvm_amd: TSC scaling supported Dec 13 01:36:13.160173 kernel: kvm_amd: Nested Virtualization enabled Dec 13 01:36:13.160189 kernel: kvm_amd: Nested Paging enabled Dec 13 01:36:13.160909 kernel: kvm_amd: LBR virtualization supported Dec 13 01:36:13.160937 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 01:36:13.161929 kernel: kvm_amd: Virtual GIF supported Dec 13 01:36:13.186334 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:36:13.196900 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:36:13.239408 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:36:13.254036 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Dec 13 01:36:13.264918 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:36:13.310572 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:36:13.335460 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:36:13.337025 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:36:13.338545 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:36:13.340178 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:36:13.342198 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:36:13.360546 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:36:13.362129 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:36:13.363694 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:36:13.363730 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:36:13.364880 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:36:13.367306 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:36:13.371202 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:36:13.379670 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:36:13.383099 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:36:13.385681 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:36:13.387335 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:36:13.388867 systemd[1]: Reached target basic.target - Basic System. 
Dec 13 01:36:13.390615 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:36:13.390668 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:36:13.392532 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:36:13.396987 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:36:13.400040 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:36:13.402732 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:36:13.403394 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:36:13.406174 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:36:13.415320 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:36:13.423469 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Dec 13 01:36:13.424371 jq[1442]: false Dec 13 01:36:13.438898 extend-filesystems[1443]: Found loop3 Dec 13 01:36:13.438898 extend-filesystems[1443]: Found loop4 Dec 13 01:36:13.438898 extend-filesystems[1443]: Found loop5 Dec 13 01:36:13.438898 extend-filesystems[1443]: Found sr0 Dec 13 01:36:13.438898 extend-filesystems[1443]: Found vda Dec 13 01:36:13.438898 extend-filesystems[1443]: Found vda1 Dec 13 01:36:13.438898 extend-filesystems[1443]: Found vda2 Dec 13 01:36:13.438898 extend-filesystems[1443]: Found vda3 Dec 13 01:36:13.438898 extend-filesystems[1443]: Found usr Dec 13 01:36:13.438898 extend-filesystems[1443]: Found vda4 Dec 13 01:36:13.438898 extend-filesystems[1443]: Found vda6 Dec 13 01:36:13.438898 extend-filesystems[1443]: Found vda7 Dec 13 01:36:13.438898 extend-filesystems[1443]: Found vda9 Dec 13 01:36:13.438898 extend-filesystems[1443]: Checking size of /dev/vda9 Dec 13 01:36:13.467195 extend-filesystems[1443]: Resized partition /dev/vda9 Dec 13 01:36:13.442819 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:36:13.442031 dbus-daemon[1441]: [system] SELinux support is enabled Dec 13 01:36:13.462674 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:36:13.472570 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:36:13.472606 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:36:13.478389 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:36:13.480572 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Dec 13 01:36:13.482018 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:36:13.482141 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1392) Dec 13 01:36:13.482106 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:36:13.489739 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:36:13.492884 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:36:13.498429 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:36:13.501369 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:36:13.502354 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:36:13.502804 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:36:13.503160 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:36:13.503541 jq[1463]: true Dec 13 01:36:13.514620 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:36:13.514969 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:36:13.547095 update_engine[1462]: I20241213 01:36:13.546931 1462 main.cc:92] Flatcar Update Engine starting Dec 13 01:36:13.553854 update_engine[1462]: I20241213 01:36:13.551306 1462 update_check_scheduler.cc:74] Next update check in 5m51s Dec 13 01:36:13.552853 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:36:13.555507 tar[1466]: linux-amd64/helm Dec 13 01:36:13.558246 systemd[1]: Started update-engine.service - Update Engine. 
Dec 13 01:36:13.560401 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:36:13.560453 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:36:13.565184 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:36:13.565239 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:36:13.571121 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:36:13.576701 jq[1468]: true Dec 13 01:36:13.583888 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:36:13.708021 systemd-logind[1459]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:36:13.708066 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:36:13.709573 systemd-logind[1459]: New seat seat0. Dec 13 01:36:13.714017 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:36:13.714017 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:36:13.714017 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:36:13.720999 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Dec 13 01:36:13.716691 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:36:13.720539 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:36:13.724300 systemd[1]: Started systemd-logind.service - User Login Management. 
Dec 13 01:36:13.761366 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:36:13.769082 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:36:13.772038 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:36:13.780293 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:36:13.938075 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:36:13.988814 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:36:14.027434 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:36:14.037521 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:36:14.037983 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:36:14.063594 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:36:14.139492 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:36:14.152752 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:36:14.158222 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:36:14.159676 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:36:14.429465 containerd[1475]: time="2024-12-13T01:36:14.429332106Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:36:14.459051 containerd[1475]: time="2024-12-13T01:36:14.458951683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:36:14.462817 containerd[1475]: time="2024-12-13T01:36:14.462698620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:36:14.462817 containerd[1475]: time="2024-12-13T01:36:14.462734978Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:36:14.462817 containerd[1475]: time="2024-12-13T01:36:14.462754104Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:36:14.463139 containerd[1475]: time="2024-12-13T01:36:14.463084904Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:36:14.463139 containerd[1475]: time="2024-12-13T01:36:14.463120501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:36:14.463316 containerd[1475]: time="2024-12-13T01:36:14.463230618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:36:14.463316 containerd[1475]: time="2024-12-13T01:36:14.463252990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:36:14.463578 containerd[1475]: time="2024-12-13T01:36:14.463543254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:36:14.463578 containerd[1475]: time="2024-12-13T01:36:14.463566888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:36:14.463635 containerd[1475]: time="2024-12-13T01:36:14.463580835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:36:14.463635 containerd[1475]: time="2024-12-13T01:36:14.463592677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:36:14.463758 containerd[1475]: time="2024-12-13T01:36:14.463733371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:36:14.464095 containerd[1475]: time="2024-12-13T01:36:14.464071144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:36:14.464222 containerd[1475]: time="2024-12-13T01:36:14.464199334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:36:14.464222 containerd[1475]: time="2024-12-13T01:36:14.464216226Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:36:14.464356 containerd[1475]: time="2024-12-13T01:36:14.464334768Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:36:14.464422 containerd[1475]: time="2024-12-13T01:36:14.464401734Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:36:14.501910 tar[1466]: linux-amd64/LICENSE Dec 13 01:36:14.502082 tar[1466]: linux-amd64/README.md Dec 13 01:36:14.528455 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Dec 13 01:36:14.545160 systemd-networkd[1409]: eth0: Gained IPv6LL Dec 13 01:36:14.550636 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:36:14.560103 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:36:14.574317 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:36:14.594276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:36:14.604603 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:36:14.639291 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:36:14.639664 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:36:14.655803 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:36:14.658456 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:36:14.704412 containerd[1475]: time="2024-12-13T01:36:14.704039550Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:36:14.704412 containerd[1475]: time="2024-12-13T01:36:14.704210671Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:36:14.704412 containerd[1475]: time="2024-12-13T01:36:14.704240557Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:36:14.704412 containerd[1475]: time="2024-12-13T01:36:14.704267678Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:36:14.704412 containerd[1475]: time="2024-12-13T01:36:14.704300881Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Dec 13 01:36:14.704723 containerd[1475]: time="2024-12-13T01:36:14.704641479Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:36:14.706909 containerd[1475]: time="2024-12-13T01:36:14.706746637Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:36:14.707388 containerd[1475]: time="2024-12-13T01:36:14.707339870Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:36:14.707490 containerd[1475]: time="2024-12-13T01:36:14.707434427Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:36:14.707692 containerd[1475]: time="2024-12-13T01:36:14.707502525Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:36:14.707737 containerd[1475]: time="2024-12-13T01:36:14.707650112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:36:14.708352 containerd[1475]: time="2024-12-13T01:36:14.707827063Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:36:14.708352 containerd[1475]: time="2024-12-13T01:36:14.707877077Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:36:14.708352 containerd[1475]: time="2024-12-13T01:36:14.707903837Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:36:14.708352 containerd[1475]: time="2024-12-13T01:36:14.707936649Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 01:36:14.708352 containerd[1475]: time="2024-12-13T01:36:14.707958540Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:36:14.708352 containerd[1475]: time="2024-12-13T01:36:14.707977896Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:36:14.708352 containerd[1475]: time="2024-12-13T01:36:14.707995700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:36:14.708352 containerd[1475]: time="2024-12-13T01:36:14.708033791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.708352 containerd[1475]: time="2024-12-13T01:36:14.708055632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.708352 containerd[1475]: time="2024-12-13T01:36:14.708073025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.708352 containerd[1475]: time="2024-12-13T01:36:14.708093142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.708352 containerd[1475]: time="2024-12-13T01:36:14.708115835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.708352 containerd[1475]: time="2024-12-13T01:36:14.708137606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.708352 containerd[1475]: time="2024-12-13T01:36:14.708157092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.708903 containerd[1475]: time="2024-12-13T01:36:14.708177901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Dec 13 01:36:14.708903 containerd[1475]: time="2024-12-13T01:36:14.708199752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.708903 containerd[1475]: time="2024-12-13T01:36:14.708225420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.708903 containerd[1475]: time="2024-12-13T01:36:14.708244817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.708903 containerd[1475]: time="2024-12-13T01:36:14.708264564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.708903 containerd[1475]: time="2024-12-13T01:36:14.708289230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.708903 containerd[1475]: time="2024-12-13T01:36:14.708317483Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:36:14.709213 containerd[1475]: time="2024-12-13T01:36:14.709169591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.709213 containerd[1475]: time="2024-12-13T01:36:14.709202232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.709213 containerd[1475]: time="2024-12-13T01:36:14.709215527Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:36:14.709213 containerd[1475]: time="2024-12-13T01:36:14.709286841Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:36:14.709213 containerd[1475]: time="2024-12-13T01:36:14.709310706Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:36:14.709213 containerd[1475]: time="2024-12-13T01:36:14.709326686Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:36:14.709213 containerd[1475]: time="2024-12-13T01:36:14.709340842Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:36:14.709213 containerd[1475]: time="2024-12-13T01:36:14.709353526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:36:14.709213 containerd[1475]: time="2024-12-13T01:36:14.709370207Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:36:14.709213 containerd[1475]: time="2024-12-13T01:36:14.709390936Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:36:14.709213 containerd[1475]: time="2024-12-13T01:36:14.709402949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:36:14.710039 containerd[1475]: time="2024-12-13T01:36:14.709875896Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:36:14.710039 containerd[1475]: time="2024-12-13T01:36:14.709969912Z" level=info msg="Connect containerd service" Dec 13 01:36:14.710039 containerd[1475]: time="2024-12-13T01:36:14.710022350Z" level=info msg="using legacy CRI server" Dec 13 01:36:14.710039 containerd[1475]: time="2024-12-13T01:36:14.710031538Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:36:14.710325 containerd[1475]: time="2024-12-13T01:36:14.710161842Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:36:14.711109 containerd[1475]: time="2024-12-13T01:36:14.711073692Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:36:14.711392 containerd[1475]: time="2024-12-13T01:36:14.711317229Z" level=info msg="Start subscribing containerd event" Dec 13 01:36:14.711446 containerd[1475]: time="2024-12-13T01:36:14.711426143Z" level=info msg="Start recovering state" Dec 13 01:36:14.711982 containerd[1475]: time="2024-12-13T01:36:14.711537161Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 01:36:14.711982 containerd[1475]: time="2024-12-13T01:36:14.711544565Z" level=info msg="Start event monitor" Dec 13 01:36:14.711982 containerd[1475]: time="2024-12-13T01:36:14.711613604Z" level=info msg="Start snapshots syncer" Dec 13 01:36:14.711982 containerd[1475]: time="2024-12-13T01:36:14.711620357Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:36:14.711982 containerd[1475]: time="2024-12-13T01:36:14.711630757Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:36:14.711982 containerd[1475]: time="2024-12-13T01:36:14.711677544Z" level=info msg="Start streaming server" Dec 13 01:36:14.711982 containerd[1475]: time="2024-12-13T01:36:14.711769046Z" level=info msg="containerd successfully booted in 0.283871s" Dec 13 01:36:14.711969 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:36:15.889417 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:36:15.923267 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:34856.service - OpenSSH per-connection server daemon (10.0.0.1:34856). Dec 13 01:36:16.037712 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 34856 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:36:16.043130 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:16.064043 systemd-logind[1459]: New session 1 of user core. Dec 13 01:36:16.065116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:36:16.069540 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:36:16.070786 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:36:16.087352 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Dec 13 01:36:16.087363 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:36:16.117349 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:36:16.142305 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:36:16.149107 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:36:16.315514 systemd[1560]: Queued start job for default target default.target. Dec 13 01:36:16.327667 systemd[1560]: Created slice app.slice - User Application Slice. Dec 13 01:36:16.327703 systemd[1560]: Reached target paths.target - Paths. Dec 13 01:36:16.327718 systemd[1560]: Reached target timers.target - Timers. Dec 13 01:36:16.330056 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:36:16.349188 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:36:16.349388 systemd[1560]: Reached target sockets.target - Sockets. Dec 13 01:36:16.349409 systemd[1560]: Reached target basic.target - Basic System. Dec 13 01:36:16.349471 systemd[1560]: Reached target default.target - Main User Target. Dec 13 01:36:16.349516 systemd[1560]: Startup finished in 190ms. Dec 13 01:36:16.350368 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:36:16.353590 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:36:16.357308 systemd[1]: Startup finished in 1.316s (kernel) + 7.066s (initrd) + 6.682s (userspace) = 15.065s. Dec 13 01:36:16.458360 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:39932.service - OpenSSH per-connection server daemon (10.0.0.1:39932). 
Dec 13 01:36:16.501962 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 39932 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:36:16.504930 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:16.512938 systemd-logind[1459]: New session 2 of user core. Dec 13 01:36:16.542253 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:36:16.611139 sshd[1580]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:16.621527 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:39932.service: Deactivated successfully. Dec 13 01:36:16.624106 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:36:16.626318 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:36:16.636202 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:39948.service - OpenSSH per-connection server daemon (10.0.0.1:39948). Dec 13 01:36:16.637426 systemd-logind[1459]: Removed session 2. Dec 13 01:36:16.672184 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 39948 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:36:16.674103 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:16.678639 systemd-logind[1459]: New session 3 of user core. Dec 13 01:36:16.693026 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:36:16.746399 sshd[1588]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:16.763278 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:39948.service: Deactivated successfully. Dec 13 01:36:16.765379 systemd[1]: session-3.scope: Deactivated successfully. 
Dec 13 01:36:16.765484 kubelet[1556]: E1213 01:36:16.765431 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:36:16.767249 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:36:16.773245 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:39964.service - OpenSSH per-connection server daemon (10.0.0.1:39964). Dec 13 01:36:16.773762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:36:16.774015 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:36:16.774354 systemd[1]: kubelet.service: Consumed 1.867s CPU time. Dec 13 01:36:16.777651 systemd-logind[1459]: Removed session 3. Dec 13 01:36:16.808321 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 39964 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:36:16.810059 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:16.814021 systemd-logind[1459]: New session 4 of user core. Dec 13 01:36:16.823964 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:36:16.880485 sshd[1595]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:16.890914 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:39964.service: Deactivated successfully. Dec 13 01:36:16.892883 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:36:16.894471 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:36:16.907173 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:39966.service - OpenSSH per-connection server daemon (10.0.0.1:39966). Dec 13 01:36:16.908256 systemd-logind[1459]: Removed session 4. 
Dec 13 01:36:16.942552 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 39966 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:36:16.944825 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:16.949504 systemd-logind[1459]: New session 5 of user core. Dec 13 01:36:16.958995 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:36:17.026943 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:36:17.027377 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:36:17.778338 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:36:17.778462 (dockerd)[1624]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:36:18.333595 dockerd[1624]: time="2024-12-13T01:36:18.333492450Z" level=info msg="Starting up" Dec 13 01:36:19.826398 dockerd[1624]: time="2024-12-13T01:36:19.826327018Z" level=info msg="Loading containers: start." Dec 13 01:36:20.073910 kernel: Initializing XFRM netlink socket Dec 13 01:36:20.167077 systemd-networkd[1409]: docker0: Link UP Dec 13 01:36:20.194496 dockerd[1624]: time="2024-12-13T01:36:20.194447655Z" level=info msg="Loading containers: done." Dec 13 01:36:20.212321 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2434967395-merged.mount: Deactivated successfully. 
Dec 13 01:36:20.214180 dockerd[1624]: time="2024-12-13T01:36:20.214140456Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:36:20.214261 dockerd[1624]: time="2024-12-13T01:36:20.214242498Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:36:20.214391 dockerd[1624]: time="2024-12-13T01:36:20.214363705Z" level=info msg="Daemon has completed initialization" Dec 13 01:36:20.260859 dockerd[1624]: time="2024-12-13T01:36:20.260683101Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:36:20.261064 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:36:21.434485 containerd[1475]: time="2024-12-13T01:36:21.434432880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 01:36:22.721739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2595002945.mount: Deactivated successfully. 
Dec 13 01:36:24.591420 containerd[1475]: time="2024-12-13T01:36:24.591326142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:24.593198 containerd[1475]: time="2024-12-13T01:36:24.593111260Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483" Dec 13 01:36:24.595092 containerd[1475]: time="2024-12-13T01:36:24.595022314Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:24.598797 containerd[1475]: time="2024-12-13T01:36:24.598730648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:24.599825 containerd[1475]: time="2024-12-13T01:36:24.599799813Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 3.16531211s" Dec 13 01:36:24.599909 containerd[1475]: time="2024-12-13T01:36:24.599855778Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Dec 13 01:36:24.601741 containerd[1475]: time="2024-12-13T01:36:24.601704475Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 01:36:26.892486 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Dec 13 01:36:26.907190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:36:27.113334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:36:27.119370 (kubelet)[1836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:36:27.230056 containerd[1475]: time="2024-12-13T01:36:27.229882687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:27.231306 containerd[1475]: time="2024-12-13T01:36:27.231231808Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157" Dec 13 01:36:27.232960 containerd[1475]: time="2024-12-13T01:36:27.232907340Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:27.236747 containerd[1475]: time="2024-12-13T01:36:27.236700323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:27.238002 containerd[1475]: time="2024-12-13T01:36:27.237936341Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 2.636190368s" Dec 13 01:36:27.238002 containerd[1475]: time="2024-12-13T01:36:27.237997245Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image 
reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Dec 13 01:36:27.238903 containerd[1475]: time="2024-12-13T01:36:27.238864492Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 01:36:27.253237 kubelet[1836]: E1213 01:36:27.253138 1836 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:36:27.261541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:36:27.261770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:36:28.888031 containerd[1475]: time="2024-12-13T01:36:28.887953943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:28.932680 containerd[1475]: time="2024-12-13T01:36:28.932548303Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067" Dec 13 01:36:28.986340 containerd[1475]: time="2024-12-13T01:36:28.986271273Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:29.036812 containerd[1475]: time="2024-12-13T01:36:29.036701327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:29.038366 containerd[1475]: time="2024-12-13T01:36:29.038300947Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id 
\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.79939067s" Dec 13 01:36:29.038366 containerd[1475]: time="2024-12-13T01:36:29.038358285Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Dec 13 01:36:29.039163 containerd[1475]: time="2024-12-13T01:36:29.039137867Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 01:36:30.234081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3546972893.mount: Deactivated successfully. Dec 13 01:36:31.427589 containerd[1475]: time="2024-12-13T01:36:31.427415750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:31.430104 containerd[1475]: time="2024-12-13T01:36:31.430002972Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243" Dec 13 01:36:31.432894 containerd[1475]: time="2024-12-13T01:36:31.432785561Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:31.436018 containerd[1475]: time="2024-12-13T01:36:31.435905161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:31.436797 containerd[1475]: time="2024-12-13T01:36:31.436727994Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.397549511s" Dec 13 01:36:31.436797 containerd[1475]: time="2024-12-13T01:36:31.436792986Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 01:36:31.438038 containerd[1475]: time="2024-12-13T01:36:31.437957870Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:36:32.018375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount412234355.mount: Deactivated successfully. Dec 13 01:36:33.590079 containerd[1475]: time="2024-12-13T01:36:33.589993896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:33.591214 containerd[1475]: time="2024-12-13T01:36:33.591160394Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:36:33.592782 containerd[1475]: time="2024-12-13T01:36:33.592735458Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:33.596360 containerd[1475]: time="2024-12-13T01:36:33.596319228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:33.597595 containerd[1475]: time="2024-12-13T01:36:33.597534538Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.15952995s" Dec 13 01:36:33.597595 containerd[1475]: time="2024-12-13T01:36:33.597583459Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:36:33.598281 containerd[1475]: time="2024-12-13T01:36:33.598253877Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 01:36:34.170556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1162634835.mount: Deactivated successfully. Dec 13 01:36:34.178450 containerd[1475]: time="2024-12-13T01:36:34.178381413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:34.179579 containerd[1475]: time="2024-12-13T01:36:34.179454956Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 13 01:36:34.181084 containerd[1475]: time="2024-12-13T01:36:34.181041942Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:34.183473 containerd[1475]: time="2024-12-13T01:36:34.183412718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:34.184144 containerd[1475]: time="2024-12-13T01:36:34.184086021Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 585.799172ms" Dec 13 
01:36:34.184144 containerd[1475]: time="2024-12-13T01:36:34.184134081Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 01:36:34.184762 containerd[1475]: time="2024-12-13T01:36:34.184724599Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 01:36:34.875612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount221162621.mount: Deactivated successfully. Dec 13 01:36:37.268526 containerd[1475]: time="2024-12-13T01:36:37.268302548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:37.273067 containerd[1475]: time="2024-12-13T01:36:37.272160222Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Dec 13 01:36:37.276915 containerd[1475]: time="2024-12-13T01:36:37.276799192Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:37.284447 containerd[1475]: time="2024-12-13T01:36:37.284347087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:37.287424 containerd[1475]: time="2024-12-13T01:36:37.287025290Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.102255576s" Dec 13 01:36:37.287424 containerd[1475]: time="2024-12-13T01:36:37.287086705Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 13 01:36:37.392571 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:36:37.405328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:36:37.589461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:36:37.596142 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:36:37.639490 kubelet[1976]: E1213 01:36:37.639395 1976 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:36:37.643896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:36:37.644154 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:36:39.836870 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:36:39.847106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:36:39.878188 systemd[1]: Reloading requested from client PID 2006 ('systemctl') (unit session-5.scope)... Dec 13 01:36:39.878231 systemd[1]: Reloading... Dec 13 01:36:39.974905 zram_generator::config[2051]: No configuration found. Dec 13 01:36:40.473020 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:36:40.564100 systemd[1]: Reloading finished in 685 ms. Dec 13 01:36:40.617826 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 01:36:40.622278 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:36:40.622629 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:36:40.625014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:36:40.803398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:36:40.809242 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:36:40.970640 kubelet[2095]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:36:40.970640 kubelet[2095]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:36:40.970640 kubelet[2095]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:36:40.971250 kubelet[2095]: I1213 01:36:40.970968 2095 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:36:41.245639 kubelet[2095]: I1213 01:36:41.245562 2095 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 01:36:41.245639 kubelet[2095]: I1213 01:36:41.245608 2095 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:36:41.245917 kubelet[2095]: I1213 01:36:41.245892 2095 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 01:36:41.283657 kubelet[2095]: I1213 01:36:41.283579 2095 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:36:41.284120 kubelet[2095]: E1213 01:36:41.284042 2095 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:36:41.291300 kubelet[2095]: E1213 01:36:41.291260 2095 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 01:36:41.291300 kubelet[2095]: I1213 01:36:41.291296 2095 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 01:36:41.299139 kubelet[2095]: I1213 01:36:41.299082 2095 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:36:41.300461 kubelet[2095]: I1213 01:36:41.300421 2095 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 01:36:41.300653 kubelet[2095]: I1213 01:36:41.300613 2095 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:36:41.300823 kubelet[2095]: I1213 01:36:41.300644 2095 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 01:36:41.300983 kubelet[2095]: I1213 01:36:41.300828 2095 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:36:41.300983 kubelet[2095]: I1213 01:36:41.300853 2095 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 01:36:41.301061 kubelet[2095]: I1213 01:36:41.300999 2095 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:36:41.303049 kubelet[2095]: I1213 01:36:41.303017 2095 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 01:36:41.303049 kubelet[2095]: I1213 01:36:41.303044 2095 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:36:41.303138 kubelet[2095]: I1213 01:36:41.303124 2095 kubelet.go:314] "Adding apiserver pod source"
Dec 13 01:36:41.303185 kubelet[2095]: I1213 01:36:41.303144 2095 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:36:41.308471 kubelet[2095]: W1213 01:36:41.308337 2095 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Dec 13 01:36:41.308471 kubelet[2095]: E1213 01:36:41.308422 2095 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:36:41.309564 kubelet[2095]: I1213 01:36:41.309539 2095 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:36:41.310239 kubelet[2095]: W1213 01:36:41.310183 2095 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Dec 13 01:36:41.310286 kubelet[2095]: E1213 01:36:41.310253 2095 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:36:41.312370 kubelet[2095]: I1213 01:36:41.312338 2095 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:36:41.313871 kubelet[2095]: W1213 01:36:41.313824 2095 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:36:41.314904 kubelet[2095]: I1213 01:36:41.314820 2095 server.go:1269] "Started kubelet"
Dec 13 01:36:41.316828 kubelet[2095]: I1213 01:36:41.315487 2095 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:36:41.316828 kubelet[2095]: I1213 01:36:41.315933 2095 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:36:41.316828 kubelet[2095]: I1213 01:36:41.316215 2095 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:36:41.316828 kubelet[2095]: I1213 01:36:41.316547 2095 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:36:41.317263 kubelet[2095]: I1213 01:36:41.317229 2095 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 01:36:41.317807 kubelet[2095]: I1213 01:36:41.317790 2095 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 01:36:41.319903 kubelet[2095]: E1213 01:36:41.319755 2095 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:36:41.319903 kubelet[2095]: E1213 01:36:41.319848 2095 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="200ms"
Dec 13 01:36:41.319903 kubelet[2095]: I1213 01:36:41.319904 2095 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 01:36:41.320053 kubelet[2095]: I1213 01:36:41.319992 2095 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 01:36:41.320053 kubelet[2095]: I1213 01:36:41.320043 2095 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 01:36:41.320382 kubelet[2095]: W1213 01:36:41.320295 2095 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Dec 13 01:36:41.320382 kubelet[2095]: E1213 01:36:41.320338 2095 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:36:41.320621 kubelet[2095]: I1213 01:36:41.320596 2095 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:36:41.320711 kubelet[2095]: I1213 01:36:41.320690 2095 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:36:41.322044 kubelet[2095]: E1213 01:36:41.322015 2095 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:36:41.322731 kubelet[2095]: I1213 01:36:41.322705 2095 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:36:41.359915 kubelet[2095]: E1213 01:36:41.319411 2095 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181098beb0c89e5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:36:41.31478691 +0000 UTC m=+0.469726857,LastTimestamp:2024-12-13 01:36:41.31478691 +0000 UTC m=+0.469726857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 01:36:41.378258 kubelet[2095]: I1213 01:36:41.378175 2095 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:36:41.380816 kubelet[2095]: I1213 01:36:41.380786 2095 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:36:41.380942 kubelet[2095]: I1213 01:36:41.380853 2095 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:36:41.380942 kubelet[2095]: I1213 01:36:41.380890 2095 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 01:36:41.381019 kubelet[2095]: E1213 01:36:41.380947 2095 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:36:41.383232 kubelet[2095]: W1213 01:36:41.383132 2095 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Dec 13 01:36:41.383232 kubelet[2095]: E1213 01:36:41.383194 2095 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:36:41.384429 kubelet[2095]: I1213 01:36:41.384374 2095 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:36:41.384429 kubelet[2095]: I1213 01:36:41.384423 2095 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:36:41.384559 kubelet[2095]: I1213 01:36:41.384447 2095 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:36:41.389538 kubelet[2095]: I1213 01:36:41.389505 2095 policy_none.go:49] "None policy: Start"
Dec 13 01:36:41.390240 kubelet[2095]: I1213 01:36:41.390188 2095 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:36:41.390240 kubelet[2095]: I1213 01:36:41.390232 2095 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:36:41.402518 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 01:36:41.414954 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 01:36:41.420321 kubelet[2095]: E1213 01:36:41.420286 2095 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:36:41.425341 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 01:36:41.427170 kubelet[2095]: I1213 01:36:41.427125 2095 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:36:41.427429 kubelet[2095]: I1213 01:36:41.427411 2095 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 01:36:41.427483 kubelet[2095]: I1213 01:36:41.427436 2095 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 01:36:41.427825 kubelet[2095]: I1213 01:36:41.427753 2095 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:36:41.428895 kubelet[2095]: E1213 01:36:41.428867 2095 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Dec 13 01:36:41.492263 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice.
Dec 13 01:36:41.507941 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice.
Dec 13 01:36:41.518436 systemd[1]: Created slice kubepods-burstable-podb46eba1646e3353dc2588e9857bb0e79.slice - libcontainer container kubepods-burstable-podb46eba1646e3353dc2588e9857bb0e79.slice.
Dec 13 01:36:41.520943 kubelet[2095]: E1213 01:36:41.520895 2095 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="400ms"
Dec 13 01:36:41.529530 kubelet[2095]: I1213 01:36:41.529475 2095 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 01:36:41.530015 kubelet[2095]: E1213 01:36:41.529967 2095 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost"
Dec 13 01:36:41.621446 kubelet[2095]: I1213 01:36:41.621364 2095 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:36:41.621446 kubelet[2095]: I1213 01:36:41.621431 2095 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:36:41.621446 kubelet[2095]: I1213 01:36:41.621459 2095 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b46eba1646e3353dc2588e9857bb0e79-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b46eba1646e3353dc2588e9857bb0e79\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:36:41.621446 kubelet[2095]: I1213 01:36:41.621477 2095 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:36:41.621876 kubelet[2095]: I1213 01:36:41.621530 2095 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:36:41.621876 kubelet[2095]: I1213 01:36:41.621615 2095 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:36:41.621876 kubelet[2095]: I1213 01:36:41.621695 2095 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 01:36:41.621876 kubelet[2095]: I1213 01:36:41.621728 2095 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b46eba1646e3353dc2588e9857bb0e79-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b46eba1646e3353dc2588e9857bb0e79\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:36:41.621876 kubelet[2095]: I1213 01:36:41.621755 2095 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b46eba1646e3353dc2588e9857bb0e79-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b46eba1646e3353dc2588e9857bb0e79\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:36:41.732465 kubelet[2095]: I1213 01:36:41.732404 2095 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 01:36:41.733021 kubelet[2095]: E1213 01:36:41.732959 2095 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost"
Dec 13 01:36:41.806777 kubelet[2095]: E1213 01:36:41.806590 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:36:41.807850 containerd[1475]: time="2024-12-13T01:36:41.807762516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}"
Dec 13 01:36:41.816229 kubelet[2095]: E1213 01:36:41.816179 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:36:41.816854 containerd[1475]: time="2024-12-13T01:36:41.816774597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}"
Dec 13 01:36:41.821218 kubelet[2095]: E1213 01:36:41.821181 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:36:41.821766 containerd[1475]: time="2024-12-13T01:36:41.821734558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b46eba1646e3353dc2588e9857bb0e79,Namespace:kube-system,Attempt:0,}"
Dec 13 01:36:41.922221 kubelet[2095]: E1213 01:36:41.922101 2095 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="800ms"
Dec 13 01:36:42.134820 kubelet[2095]: I1213 01:36:42.134665 2095 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 01:36:42.135403 kubelet[2095]: E1213 01:36:42.135036 2095 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost"
Dec 13 01:36:42.680683 kubelet[2095]: W1213 01:36:42.680593 2095 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Dec 13 01:36:42.680683 kubelet[2095]: E1213 01:36:42.680681 2095 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:36:42.723952 kubelet[2095]: E1213 01:36:42.723821 2095 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="1.6s"
Dec 13 01:36:42.755749 kubelet[2095]: W1213 01:36:42.755634 2095 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Dec 13 01:36:42.755749 kubelet[2095]: E1213 01:36:42.755742 2095 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:36:42.791660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount497060368.mount: Deactivated successfully.
Dec 13 01:36:42.827684 kubelet[2095]: W1213 01:36:42.827632 2095 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Dec 13 01:36:42.827888 kubelet[2095]: E1213 01:36:42.827695 2095 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:36:42.829588 containerd[1475]: time="2024-12-13T01:36:42.829506220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:36:42.830417 containerd[1475]: time="2024-12-13T01:36:42.830357797Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Dec 13 01:36:42.831466 containerd[1475]: time="2024-12-13T01:36:42.831425670Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:36:42.832863 containerd[1475]: time="2024-12-13T01:36:42.832795428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:36:42.836085 containerd[1475]: time="2024-12-13T01:36:42.836007362Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:36:42.837548 containerd[1475]: time="2024-12-13T01:36:42.837509749Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:36:42.838452 containerd[1475]: time="2024-12-13T01:36:42.838374541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:36:42.840608 containerd[1475]: time="2024-12-13T01:36:42.840558367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:36:42.842707 containerd[1475]: time="2024-12-13T01:36:42.842650089Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.034731521s"
Dec 13 01:36:42.846185 containerd[1475]: time="2024-12-13T01:36:42.846126359Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.024309156s"
Dec 13 01:36:42.847346 containerd[1475]: time="2024-12-13T01:36:42.847303546Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.030407552s"
Dec 13 01:36:42.888002 kubelet[2095]: W1213 01:36:42.887900 2095 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Dec 13 01:36:42.888142 kubelet[2095]: E1213 01:36:42.888020 2095 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:36:42.936620 kubelet[2095]: I1213 01:36:42.936460 2095 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 01:36:42.936844 kubelet[2095]: E1213 01:36:42.936801 2095 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost"
Dec 13 01:36:43.168089 containerd[1475]: time="2024-12-13T01:36:43.167975471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:36:43.168089 containerd[1475]: time="2024-12-13T01:36:43.168043558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:36:43.168089 containerd[1475]: time="2024-12-13T01:36:43.168059137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:36:43.168349 containerd[1475]: time="2024-12-13T01:36:43.168231330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:36:43.178144 containerd[1475]: time="2024-12-13T01:36:43.177354760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:36:43.178144 containerd[1475]: time="2024-12-13T01:36:43.177445079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:36:43.178144 containerd[1475]: time="2024-12-13T01:36:43.177463113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:36:43.178144 containerd[1475]: time="2024-12-13T01:36:43.177735644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:36:43.198028 containerd[1475]: time="2024-12-13T01:36:43.195274857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:36:43.198028 containerd[1475]: time="2024-12-13T01:36:43.195372710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:36:43.198028 containerd[1475]: time="2024-12-13T01:36:43.195446869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:36:43.198028 containerd[1475]: time="2024-12-13T01:36:43.196225760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:36:43.210172 systemd[1]: Started cri-containerd-ae7e5a0d61ec44aec1b14a94f9ba5cd51883674e2f3aa8425d257b02d3e6b7c9.scope - libcontainer container ae7e5a0d61ec44aec1b14a94f9ba5cd51883674e2f3aa8425d257b02d3e6b7c9.
Dec 13 01:36:43.219614 systemd[1]: Started cri-containerd-edf0aea5506cba196de0b2c12ceff4739b8549e418d5fb68232344cdb81c3073.scope - libcontainer container edf0aea5506cba196de0b2c12ceff4739b8549e418d5fb68232344cdb81c3073.
Dec 13 01:36:43.227062 systemd[1]: Started cri-containerd-d13625b43a70a2a9fe60ca79cdc68aaf3c13f26021a1b00f7b35eee819303c56.scope - libcontainer container d13625b43a70a2a9fe60ca79cdc68aaf3c13f26021a1b00f7b35eee819303c56.
Dec 13 01:36:43.368506 containerd[1475]: time="2024-12-13T01:36:43.368413273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae7e5a0d61ec44aec1b14a94f9ba5cd51883674e2f3aa8425d257b02d3e6b7c9\"" Dec 13 01:36:43.370235 kubelet[2095]: E1213 01:36:43.370208 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:43.371795 containerd[1475]: time="2024-12-13T01:36:43.371691761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b46eba1646e3353dc2588e9857bb0e79,Namespace:kube-system,Attempt:0,} returns sandbox id \"d13625b43a70a2a9fe60ca79cdc68aaf3c13f26021a1b00f7b35eee819303c56\"" Dec 13 01:36:43.372498 kubelet[2095]: E1213 01:36:43.372476 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:43.376436 containerd[1475]: time="2024-12-13T01:36:43.376303720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"edf0aea5506cba196de0b2c12ceff4739b8549e418d5fb68232344cdb81c3073\"" Dec 13 01:36:43.376588 containerd[1475]: time="2024-12-13T01:36:43.376550563Z" level=info msg="CreateContainer within sandbox \"ae7e5a0d61ec44aec1b14a94f9ba5cd51883674e2f3aa8425d257b02d3e6b7c9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:36:43.376772 containerd[1475]: time="2024-12-13T01:36:43.376577063Z" level=info msg="CreateContainer within sandbox \"d13625b43a70a2a9fe60ca79cdc68aaf3c13f26021a1b00f7b35eee819303c56\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 
01:36:43.377084 kubelet[2095]: E1213 01:36:43.377053 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:43.378818 containerd[1475]: time="2024-12-13T01:36:43.378791265Z" level=info msg="CreateContainer within sandbox \"edf0aea5506cba196de0b2c12ceff4739b8549e418d5fb68232344cdb81c3073\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:36:43.395069 kubelet[2095]: E1213 01:36:43.395024 2095 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:36:43.558406 containerd[1475]: time="2024-12-13T01:36:43.558224617Z" level=info msg="CreateContainer within sandbox \"d13625b43a70a2a9fe60ca79cdc68aaf3c13f26021a1b00f7b35eee819303c56\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"20fb88c381c8c2bf39e6c2712f50ec49d24fba746a92464b542827b0d745ff25\"" Dec 13 01:36:43.559304 containerd[1475]: time="2024-12-13T01:36:43.559125506Z" level=info msg="StartContainer for \"20fb88c381c8c2bf39e6c2712f50ec49d24fba746a92464b542827b0d745ff25\"" Dec 13 01:36:43.567113 containerd[1475]: time="2024-12-13T01:36:43.567037574Z" level=info msg="CreateContainer within sandbox \"ae7e5a0d61ec44aec1b14a94f9ba5cd51883674e2f3aa8425d257b02d3e6b7c9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1592ade86a4d91c001bbdd29a68facc68b927d79f467fa029276e011ce34bff4\"" Dec 13 01:36:43.567911 containerd[1475]: time="2024-12-13T01:36:43.567861520Z" level=info msg="StartContainer for \"1592ade86a4d91c001bbdd29a68facc68b927d79f467fa029276e011ce34bff4\"" Dec 13 
01:36:43.573433 containerd[1475]: time="2024-12-13T01:36:43.573358007Z" level=info msg="CreateContainer within sandbox \"edf0aea5506cba196de0b2c12ceff4739b8549e418d5fb68232344cdb81c3073\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f6a87db80484356ca3d48b0c7cc9d6cb56b82cbc3b3ff4045d5b8a46a4b4f96f\"" Dec 13 01:36:43.575200 containerd[1475]: time="2024-12-13T01:36:43.574036871Z" level=info msg="StartContainer for \"f6a87db80484356ca3d48b0c7cc9d6cb56b82cbc3b3ff4045d5b8a46a4b4f96f\"" Dec 13 01:36:43.598745 systemd[1]: Started cri-containerd-20fb88c381c8c2bf39e6c2712f50ec49d24fba746a92464b542827b0d745ff25.scope - libcontainer container 20fb88c381c8c2bf39e6c2712f50ec49d24fba746a92464b542827b0d745ff25. Dec 13 01:36:43.622120 systemd[1]: Started cri-containerd-1592ade86a4d91c001bbdd29a68facc68b927d79f467fa029276e011ce34bff4.scope - libcontainer container 1592ade86a4d91c001bbdd29a68facc68b927d79f467fa029276e011ce34bff4. Dec 13 01:36:43.624615 systemd[1]: Started cri-containerd-f6a87db80484356ca3d48b0c7cc9d6cb56b82cbc3b3ff4045d5b8a46a4b4f96f.scope - libcontainer container f6a87db80484356ca3d48b0c7cc9d6cb56b82cbc3b3ff4045d5b8a46a4b4f96f. 
Dec 13 01:36:43.689583 containerd[1475]: time="2024-12-13T01:36:43.689278179Z" level=info msg="StartContainer for \"20fb88c381c8c2bf39e6c2712f50ec49d24fba746a92464b542827b0d745ff25\" returns successfully" Dec 13 01:36:43.689583 containerd[1475]: time="2024-12-13T01:36:43.689482963Z" level=info msg="StartContainer for \"1592ade86a4d91c001bbdd29a68facc68b927d79f467fa029276e011ce34bff4\" returns successfully" Dec 13 01:36:43.689583 containerd[1475]: time="2024-12-13T01:36:43.689524752Z" level=info msg="StartContainer for \"f6a87db80484356ca3d48b0c7cc9d6cb56b82cbc3b3ff4045d5b8a46a4b4f96f\" returns successfully" Dec 13 01:36:44.396532 kubelet[2095]: E1213 01:36:44.396465 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:44.399049 kubelet[2095]: E1213 01:36:44.398950 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:44.401333 kubelet[2095]: E1213 01:36:44.401302 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:44.538908 kubelet[2095]: I1213 01:36:44.538858 2095 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:36:45.372785 kubelet[2095]: I1213 01:36:45.372736 2095 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Dec 13 01:36:45.373134 kubelet[2095]: E1213 01:36:45.372867 2095 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 13 01:36:45.390214 kubelet[2095]: E1213 01:36:45.390159 2095 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 
01:36:45.403368 kubelet[2095]: E1213 01:36:45.403312 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:45.403368 kubelet[2095]: E1213 01:36:45.403342 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:45.403890 kubelet[2095]: E1213 01:36:45.403572 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:45.425253 kubelet[2095]: E1213 01:36:45.425182 2095 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Dec 13 01:36:45.490860 kubelet[2095]: E1213 01:36:45.490780 2095 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:45.591772 kubelet[2095]: E1213 01:36:45.591710 2095 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:45.692499 kubelet[2095]: E1213 01:36:45.692428 2095 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:46.305318 kubelet[2095]: I1213 01:36:46.305256 2095 apiserver.go:52] "Watching apiserver" Dec 13 01:36:46.320621 kubelet[2095]: I1213 01:36:46.320559 2095 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:36:46.414916 kubelet[2095]: E1213 01:36:46.414859 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:46.419684 kubelet[2095]: E1213 01:36:46.419617 2095 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:47.405774 kubelet[2095]: E1213 01:36:47.405720 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:47.405774 kubelet[2095]: E1213 01:36:47.405740 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:48.040423 systemd[1]: Reloading requested from client PID 2381 ('systemctl') (unit session-5.scope)... Dec 13 01:36:48.040449 systemd[1]: Reloading... Dec 13 01:36:48.132882 zram_generator::config[2420]: No configuration found. Dec 13 01:36:48.263173 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:36:48.356147 systemd[1]: Reloading finished in 315 ms. Dec 13 01:36:48.405409 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:36:48.416931 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:36:48.417270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:36:48.417336 systemd[1]: kubelet.service: Consumed 1.297s CPU time, 121.2M memory peak, 0B memory swap peak. Dec 13 01:36:48.427396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:36:48.586654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:36:48.592631 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:36:48.640743 kubelet[2465]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:36:48.640743 kubelet[2465]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:36:48.640743 kubelet[2465]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:36:48.641214 kubelet[2465]: I1213 01:36:48.640713 2465 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:36:48.648894 kubelet[2465]: I1213 01:36:48.648819 2465 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:36:48.649210 kubelet[2465]: I1213 01:36:48.649020 2465 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:36:48.649880 kubelet[2465]: I1213 01:36:48.649611 2465 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:36:48.650881 kubelet[2465]: I1213 01:36:48.650853 2465 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 01:36:48.652895 kubelet[2465]: I1213 01:36:48.652792 2465 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:36:48.655731 kubelet[2465]: E1213 01:36:48.655704 2465 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:36:48.655731 kubelet[2465]: I1213 01:36:48.655727 2465 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:36:48.660604 kubelet[2465]: I1213 01:36:48.660578 2465 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:36:48.660712 kubelet[2465]: I1213 01:36:48.660695 2465 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:36:48.660863 kubelet[2465]: I1213 01:36:48.660819 2465 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:36:48.661014 kubelet[2465]: I1213 01:36:48.660864 2465 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:36:48.661119 kubelet[2465]: I1213 01:36:48.661020 2465 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:36:48.661119 kubelet[2465]: I1213 01:36:48.661029 2465 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:36:48.661119 kubelet[2465]: I1213 01:36:48.661065 2465 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:36:48.661189 kubelet[2465]: I1213 01:36:48.661176 2465 kubelet.go:408] "Attempting 
to sync node with API server" Dec 13 01:36:48.661189 kubelet[2465]: I1213 01:36:48.661187 2465 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:36:48.661241 kubelet[2465]: I1213 01:36:48.661215 2465 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:36:48.661241 kubelet[2465]: I1213 01:36:48.661226 2465 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:36:48.662256 kubelet[2465]: I1213 01:36:48.662231 2465 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:36:48.662665 kubelet[2465]: I1213 01:36:48.662635 2465 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:36:48.665871 kubelet[2465]: I1213 01:36:48.663163 2465 server.go:1269] "Started kubelet" Dec 13 01:36:48.665871 kubelet[2465]: I1213 01:36:48.665355 2465 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:36:48.666657 kubelet[2465]: I1213 01:36:48.666625 2465 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:36:48.666748 kubelet[2465]: I1213 01:36:48.666720 2465 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:36:48.666789 kubelet[2465]: I1213 01:36:48.666756 2465 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:36:48.667845 kubelet[2465]: I1213 01:36:48.667795 2465 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:36:48.669106 kubelet[2465]: I1213 01:36:48.669074 2465 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:36:48.669250 kubelet[2465]: I1213 01:36:48.669233 2465 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:36:48.669617 kubelet[2465]: E1213 01:36:48.669598 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" Dec 13 01:36:48.671736 kubelet[2465]: I1213 01:36:48.669772 2465 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:36:48.672487 kubelet[2465]: I1213 01:36:48.671943 2465 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:36:48.673517 kubelet[2465]: I1213 01:36:48.673070 2465 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:36:48.673635 kubelet[2465]: I1213 01:36:48.673608 2465 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:36:48.676368 kubelet[2465]: E1213 01:36:48.676318 2465 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:36:48.676632 kubelet[2465]: I1213 01:36:48.676602 2465 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:36:48.683581 kubelet[2465]: I1213 01:36:48.683547 2465 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:36:48.685016 kubelet[2465]: I1213 01:36:48.685001 2465 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:36:48.685119 kubelet[2465]: I1213 01:36:48.685108 2465 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:36:48.685189 kubelet[2465]: I1213 01:36:48.685179 2465 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:36:48.685303 kubelet[2465]: E1213 01:36:48.685284 2465 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:36:48.715998 kubelet[2465]: I1213 01:36:48.715969 2465 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:36:48.715998 kubelet[2465]: I1213 01:36:48.715986 2465 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:36:48.715998 kubelet[2465]: I1213 01:36:48.716005 2465 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:36:48.716197 kubelet[2465]: I1213 01:36:48.716169 2465 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:36:48.716197 kubelet[2465]: I1213 01:36:48.716179 2465 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:36:48.716197 kubelet[2465]: I1213 01:36:48.716197 2465 policy_none.go:49] "None policy: Start" Dec 13 01:36:48.717027 kubelet[2465]: I1213 01:36:48.716991 2465 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:36:48.717084 kubelet[2465]: I1213 01:36:48.717032 2465 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:36:48.717390 kubelet[2465]: I1213 01:36:48.717368 2465 state_mem.go:75] "Updated machine memory state" Dec 13 01:36:48.724376 kubelet[2465]: I1213 01:36:48.724338 2465 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:36:48.724674 kubelet[2465]: I1213 01:36:48.724572 2465 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:36:48.724674 kubelet[2465]: I1213 01:36:48.724597 2465 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:36:48.724871 kubelet[2465]: I1213 01:36:48.724863 2465 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:36:48.830147 kubelet[2465]: I1213 01:36:48.830088 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:36:48.870568 kubelet[2465]: I1213 01:36:48.870524 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:48.870568 kubelet[2465]: I1213 01:36:48.870559 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:48.870568 kubelet[2465]: I1213 01:36:48.870580 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:36:48.870873 kubelet[2465]: I1213 01:36:48.870596 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:48.870873 kubelet[2465]: I1213 
01:36:48.870614 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b46eba1646e3353dc2588e9857bb0e79-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b46eba1646e3353dc2588e9857bb0e79\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:36:48.870873 kubelet[2465]: I1213 01:36:48.870629 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b46eba1646e3353dc2588e9857bb0e79-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b46eba1646e3353dc2588e9857bb0e79\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:36:48.870873 kubelet[2465]: I1213 01:36:48.870653 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b46eba1646e3353dc2588e9857bb0e79-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b46eba1646e3353dc2588e9857bb0e79\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:36:48.870873 kubelet[2465]: I1213 01:36:48.870717 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:48.871001 kubelet[2465]: I1213 01:36:48.870758 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:48.938594 kubelet[2465]: E1213 01:36:48.938529 
2465 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:48.940187 kubelet[2465]: E1213 01:36:48.939879 2465 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:36:48.941719 kubelet[2465]: I1213 01:36:48.941614 2465 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Dec 13 01:36:48.941719 kubelet[2465]: I1213 01:36:48.941712 2465 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Dec 13 01:36:49.218090 kubelet[2465]: E1213 01:36:49.217943 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:49.239539 kubelet[2465]: E1213 01:36:49.239392 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:49.240809 kubelet[2465]: E1213 01:36:49.240777 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:49.661764 kubelet[2465]: I1213 01:36:49.661687 2465 apiserver.go:52] "Watching apiserver" Dec 13 01:36:49.669573 kubelet[2465]: I1213 01:36:49.669460 2465 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:36:49.701365 kubelet[2465]: E1213 01:36:49.701311 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:49.702291 kubelet[2465]: E1213 01:36:49.702270 2465 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:49.708603 kubelet[2465]: E1213 01:36:49.708544 2465 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:36:49.708817 kubelet[2465]: E1213 01:36:49.708793 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:49.740297 kubelet[2465]: I1213 01:36:49.740031 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.739996488 podStartE2EDuration="1.739996488s" podCreationTimestamp="2024-12-13 01:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:36:49.728614712 +0000 UTC m=+1.131243747" watchObservedRunningTime="2024-12-13 01:36:49.739996488 +0000 UTC m=+1.142625513" Dec 13 01:36:49.740297 kubelet[2465]: I1213 01:36:49.740187 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.740183247 podStartE2EDuration="3.740183247s" podCreationTimestamp="2024-12-13 01:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:36:49.739121597 +0000 UTC m=+1.141750622" watchObservedRunningTime="2024-12-13 01:36:49.740183247 +0000 UTC m=+1.142812272" Dec 13 01:36:49.746091 kubelet[2465]: I1213 01:36:49.746018 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.746003724 podStartE2EDuration="3.746003724s" podCreationTimestamp="2024-12-13 01:36:46 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:36:49.745926976 +0000 UTC m=+1.148556002" watchObservedRunningTime="2024-12-13 01:36:49.746003724 +0000 UTC m=+1.148632749" Dec 13 01:36:49.889963 sudo[1606]: pam_unix(sudo:session): session closed for user root Dec 13 01:36:49.892226 sshd[1603]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:49.895991 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:39966.service: Deactivated successfully. Dec 13 01:36:49.898866 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:36:49.899143 systemd[1]: session-5.scope: Consumed 4.840s CPU time, 159.6M memory peak, 0B memory swap peak. Dec 13 01:36:49.901091 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:36:49.902531 systemd-logind[1459]: Removed session 5. Dec 13 01:36:50.702195 kubelet[2465]: E1213 01:36:50.702155 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:51.868828 kubelet[2465]: E1213 01:36:51.868782 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:52.651808 kubelet[2465]: I1213 01:36:52.651746 2465 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:36:52.652174 containerd[1475]: time="2024-12-13T01:36:52.652102274Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:36:52.652664 kubelet[2465]: I1213 01:36:52.652324 2465 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:36:53.493455 systemd[1]: Created slice kubepods-besteffort-podf880c450_24b0_4648_a449_9c38da728428.slice - libcontainer container kubepods-besteffort-podf880c450_24b0_4648_a449_9c38da728428.slice. Dec 13 01:36:53.501471 kubelet[2465]: I1213 01:36:53.501415 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/68e72ee8-206a-4f87-812a-b806fc16094b-cni-plugin\") pod \"kube-flannel-ds-vh6fh\" (UID: \"68e72ee8-206a-4f87-812a-b806fc16094b\") " pod="kube-flannel/kube-flannel-ds-vh6fh" Dec 13 01:36:53.501939 kubelet[2465]: I1213 01:36:53.501517 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/68e72ee8-206a-4f87-812a-b806fc16094b-flannel-cfg\") pod \"kube-flannel-ds-vh6fh\" (UID: \"68e72ee8-206a-4f87-812a-b806fc16094b\") " pod="kube-flannel/kube-flannel-ds-vh6fh" Dec 13 01:36:53.501939 kubelet[2465]: I1213 01:36:53.501539 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68e72ee8-206a-4f87-812a-b806fc16094b-xtables-lock\") pod \"kube-flannel-ds-vh6fh\" (UID: \"68e72ee8-206a-4f87-812a-b806fc16094b\") " pod="kube-flannel/kube-flannel-ds-vh6fh" Dec 13 01:36:53.501939 kubelet[2465]: I1213 01:36:53.501609 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw5b5\" (UniqueName: \"kubernetes.io/projected/68e72ee8-206a-4f87-812a-b806fc16094b-kube-api-access-sw5b5\") pod \"kube-flannel-ds-vh6fh\" (UID: \"68e72ee8-206a-4f87-812a-b806fc16094b\") " pod="kube-flannel/kube-flannel-ds-vh6fh" Dec 13 01:36:53.501939 kubelet[2465]: I1213 01:36:53.501679 2465 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f880c450-24b0-4648-a449-9c38da728428-lib-modules\") pod \"kube-proxy-8bcdz\" (UID: \"f880c450-24b0-4648-a449-9c38da728428\") " pod="kube-system/kube-proxy-8bcdz" Dec 13 01:36:53.501939 kubelet[2465]: I1213 01:36:53.501711 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f880c450-24b0-4648-a449-9c38da728428-xtables-lock\") pod \"kube-proxy-8bcdz\" (UID: \"f880c450-24b0-4648-a449-9c38da728428\") " pod="kube-system/kube-proxy-8bcdz" Dec 13 01:36:53.502110 kubelet[2465]: I1213 01:36:53.501775 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/68e72ee8-206a-4f87-812a-b806fc16094b-cni\") pod \"kube-flannel-ds-vh6fh\" (UID: \"68e72ee8-206a-4f87-812a-b806fc16094b\") " pod="kube-flannel/kube-flannel-ds-vh6fh" Dec 13 01:36:53.502110 kubelet[2465]: I1213 01:36:53.501793 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f880c450-24b0-4648-a449-9c38da728428-kube-proxy\") pod \"kube-proxy-8bcdz\" (UID: \"f880c450-24b0-4648-a449-9c38da728428\") " pod="kube-system/kube-proxy-8bcdz" Dec 13 01:36:53.502110 kubelet[2465]: I1213 01:36:53.501869 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/68e72ee8-206a-4f87-812a-b806fc16094b-run\") pod \"kube-flannel-ds-vh6fh\" (UID: \"68e72ee8-206a-4f87-812a-b806fc16094b\") " pod="kube-flannel/kube-flannel-ds-vh6fh" Dec 13 01:36:53.502110 kubelet[2465]: I1213 01:36:53.501886 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n5p9\" 
(UniqueName: \"kubernetes.io/projected/f880c450-24b0-4648-a449-9c38da728428-kube-api-access-4n5p9\") pod \"kube-proxy-8bcdz\" (UID: \"f880c450-24b0-4648-a449-9c38da728428\") " pod="kube-system/kube-proxy-8bcdz" Dec 13 01:36:53.511368 systemd[1]: Created slice kubepods-burstable-pod68e72ee8_206a_4f87_812a_b806fc16094b.slice - libcontainer container kubepods-burstable-pod68e72ee8_206a_4f87_812a_b806fc16094b.slice. Dec 13 01:36:53.807076 kubelet[2465]: E1213 01:36:53.806877 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:53.807755 containerd[1475]: time="2024-12-13T01:36:53.807671640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8bcdz,Uid:f880c450-24b0-4648-a449-9c38da728428,Namespace:kube-system,Attempt:0,}" Dec 13 01:36:53.814421 kubelet[2465]: E1213 01:36:53.814371 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:53.815094 containerd[1475]: time="2024-12-13T01:36:53.814900637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-vh6fh,Uid:68e72ee8-206a-4f87-812a-b806fc16094b,Namespace:kube-flannel,Attempt:0,}" Dec 13 01:36:53.841928 containerd[1475]: time="2024-12-13T01:36:53.841770316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:36:53.842938 containerd[1475]: time="2024-12-13T01:36:53.841894854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:36:53.842938 containerd[1475]: time="2024-12-13T01:36:53.841918470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:53.843087 containerd[1475]: time="2024-12-13T01:36:53.842988845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:53.866009 containerd[1475]: time="2024-12-13T01:36:53.865805047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:36:53.866229 containerd[1475]: time="2024-12-13T01:36:53.865983658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:36:53.866375 containerd[1475]: time="2024-12-13T01:36:53.866202166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:53.866755 containerd[1475]: time="2024-12-13T01:36:53.866695098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:53.873046 systemd[1]: Started cri-containerd-cba13a29ac7a8b53235497471a00787d36e9fd070126ddaa8e60104f05742ef8.scope - libcontainer container cba13a29ac7a8b53235497471a00787d36e9fd070126ddaa8e60104f05742ef8. Dec 13 01:36:53.919817 systemd[1]: Started cri-containerd-2195892ed1bdf9591bd6386f6c480cd3120dfeadd2c19e79fd9538611502221a.scope - libcontainer container 2195892ed1bdf9591bd6386f6c480cd3120dfeadd2c19e79fd9538611502221a. 
Dec 13 01:36:53.938487 containerd[1475]: time="2024-12-13T01:36:53.938359979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8bcdz,Uid:f880c450-24b0-4648-a449-9c38da728428,Namespace:kube-system,Attempt:0,} returns sandbox id \"cba13a29ac7a8b53235497471a00787d36e9fd070126ddaa8e60104f05742ef8\"" Dec 13 01:36:53.940476 kubelet[2465]: E1213 01:36:53.939989 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:53.945210 containerd[1475]: time="2024-12-13T01:36:53.945170016Z" level=info msg="CreateContainer within sandbox \"cba13a29ac7a8b53235497471a00787d36e9fd070126ddaa8e60104f05742ef8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:36:53.969339 containerd[1475]: time="2024-12-13T01:36:53.969276694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-vh6fh,Uid:68e72ee8-206a-4f87-812a-b806fc16094b,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"2195892ed1bdf9591bd6386f6c480cd3120dfeadd2c19e79fd9538611502221a\"" Dec 13 01:36:53.970488 kubelet[2465]: E1213 01:36:53.970419 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:53.976295 containerd[1475]: time="2024-12-13T01:36:53.971721737Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 01:36:53.986665 containerd[1475]: time="2024-12-13T01:36:53.986590628Z" level=info msg="CreateContainer within sandbox \"cba13a29ac7a8b53235497471a00787d36e9fd070126ddaa8e60104f05742ef8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"12acc253b45180c52af286c5dbabede784f420ca2daab30be4f96ce842ae64d3\"" Dec 13 01:36:53.987428 containerd[1475]: time="2024-12-13T01:36:53.987393773Z" level=info msg="StartContainer for 
\"12acc253b45180c52af286c5dbabede784f420ca2daab30be4f96ce842ae64d3\"" Dec 13 01:36:54.027294 systemd[1]: Started cri-containerd-12acc253b45180c52af286c5dbabede784f420ca2daab30be4f96ce842ae64d3.scope - libcontainer container 12acc253b45180c52af286c5dbabede784f420ca2daab30be4f96ce842ae64d3. Dec 13 01:36:54.073804 containerd[1475]: time="2024-12-13T01:36:54.073591238Z" level=info msg="StartContainer for \"12acc253b45180c52af286c5dbabede784f420ca2daab30be4f96ce842ae64d3\" returns successfully" Dec 13 01:36:54.713873 kubelet[2465]: E1213 01:36:54.712323 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:54.886553 kubelet[2465]: I1213 01:36:54.886455 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8bcdz" podStartSLOduration=1.886426843 podStartE2EDuration="1.886426843s" podCreationTimestamp="2024-12-13 01:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:36:54.886038411 +0000 UTC m=+6.288667436" watchObservedRunningTime="2024-12-13 01:36:54.886426843 +0000 UTC m=+6.289055868" Dec 13 01:36:56.356889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1441729718.mount: Deactivated successfully. 
Dec 13 01:36:56.403857 containerd[1475]: time="2024-12-13T01:36:56.403750588Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:56.404790 containerd[1475]: time="2024-12-13T01:36:56.404744742Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Dec 13 01:36:56.406357 containerd[1475]: time="2024-12-13T01:36:56.406318399Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:56.408851 containerd[1475]: time="2024-12-13T01:36:56.408801129Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:56.409623 containerd[1475]: time="2024-12-13T01:36:56.409571277Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.437815224s" Dec 13 01:36:56.409623 containerd[1475]: time="2024-12-13T01:36:56.409619108Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 13 01:36:56.412069 containerd[1475]: time="2024-12-13T01:36:56.412008731Z" level=info msg="CreateContainer within sandbox \"2195892ed1bdf9591bd6386f6c480cd3120dfeadd2c19e79fd9538611502221a\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 01:36:56.431188 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3155782114.mount: Deactivated successfully. Dec 13 01:36:56.436195 containerd[1475]: time="2024-12-13T01:36:56.436134031Z" level=info msg="CreateContainer within sandbox \"2195892ed1bdf9591bd6386f6c480cd3120dfeadd2c19e79fd9538611502221a\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"c02cacf8524fddb56c22504dc53352fd748af6eee0bb6ec104f5b1cd2692e1f0\"" Dec 13 01:36:56.436860 containerd[1475]: time="2024-12-13T01:36:56.436801933Z" level=info msg="StartContainer for \"c02cacf8524fddb56c22504dc53352fd748af6eee0bb6ec104f5b1cd2692e1f0\"" Dec 13 01:36:56.481054 systemd[1]: Started cri-containerd-c02cacf8524fddb56c22504dc53352fd748af6eee0bb6ec104f5b1cd2692e1f0.scope - libcontainer container c02cacf8524fddb56c22504dc53352fd748af6eee0bb6ec104f5b1cd2692e1f0. Dec 13 01:36:56.513993 systemd[1]: cri-containerd-c02cacf8524fddb56c22504dc53352fd748af6eee0bb6ec104f5b1cd2692e1f0.scope: Deactivated successfully. Dec 13 01:36:56.515788 containerd[1475]: time="2024-12-13T01:36:56.515706903Z" level=info msg="StartContainer for \"c02cacf8524fddb56c22504dc53352fd748af6eee0bb6ec104f5b1cd2692e1f0\" returns successfully" Dec 13 01:36:56.584889 containerd[1475]: time="2024-12-13T01:36:56.584763860Z" level=info msg="shim disconnected" id=c02cacf8524fddb56c22504dc53352fd748af6eee0bb6ec104f5b1cd2692e1f0 namespace=k8s.io Dec 13 01:36:56.584889 containerd[1475]: time="2024-12-13T01:36:56.584882116Z" level=warning msg="cleaning up after shim disconnected" id=c02cacf8524fddb56c22504dc53352fd748af6eee0bb6ec104f5b1cd2692e1f0 namespace=k8s.io Dec 13 01:36:56.584889 containerd[1475]: time="2024-12-13T01:36:56.584897385Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:36:56.718303 kubelet[2465]: E1213 01:36:56.718258 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 
01:36:56.719651 containerd[1475]: time="2024-12-13T01:36:56.719610653Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 01:36:57.258153 kubelet[2465]: E1213 01:36:57.257985 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:57.281957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c02cacf8524fddb56c22504dc53352fd748af6eee0bb6ec104f5b1cd2692e1f0-rootfs.mount: Deactivated successfully. Dec 13 01:36:57.719535 kubelet[2465]: E1213 01:36:57.719479 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:59.295033 update_engine[1462]: I20241213 01:36:59.294935 1462 update_attempter.cc:509] Updating boot flags... Dec 13 01:36:59.297118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1397154820.mount: Deactivated successfully. 
Dec 13 01:36:59.343885 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2851) Dec 13 01:36:59.409924 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2854) Dec 13 01:36:59.438381 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2854) Dec 13 01:36:59.577946 kubelet[2465]: E1213 01:36:59.577759 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:37:01.003235 containerd[1475]: time="2024-12-13T01:37:01.003145157Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:37:01.003999 containerd[1475]: time="2024-12-13T01:37:01.003917963Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Dec 13 01:37:01.005056 containerd[1475]: time="2024-12-13T01:37:01.005018441Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:37:01.007950 containerd[1475]: time="2024-12-13T01:37:01.007882003Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:37:01.008781 containerd[1475]: time="2024-12-13T01:37:01.008745811Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.289092737s" Dec 13 
01:37:01.008872 containerd[1475]: time="2024-12-13T01:37:01.008784063Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 13 01:37:01.010910 containerd[1475]: time="2024-12-13T01:37:01.010872615Z" level=info msg="CreateContainer within sandbox \"2195892ed1bdf9591bd6386f6c480cd3120dfeadd2c19e79fd9538611502221a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:37:01.024894 containerd[1475]: time="2024-12-13T01:37:01.024807748Z" level=info msg="CreateContainer within sandbox \"2195892ed1bdf9591bd6386f6c480cd3120dfeadd2c19e79fd9538611502221a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"29917e1be930ad90795a3ef39017f5282caa8a30f03d2971e26dcb86cf319f0f\"" Dec 13 01:37:01.025008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4247540864.mount: Deactivated successfully. Dec 13 01:37:01.026443 containerd[1475]: time="2024-12-13T01:37:01.025462200Z" level=info msg="StartContainer for \"29917e1be930ad90795a3ef39017f5282caa8a30f03d2971e26dcb86cf319f0f\"" Dec 13 01:37:01.055995 systemd[1]: Started cri-containerd-29917e1be930ad90795a3ef39017f5282caa8a30f03d2971e26dcb86cf319f0f.scope - libcontainer container 29917e1be930ad90795a3ef39017f5282caa8a30f03d2971e26dcb86cf319f0f. Dec 13 01:37:01.086276 systemd[1]: cri-containerd-29917e1be930ad90795a3ef39017f5282caa8a30f03d2971e26dcb86cf319f0f.scope: Deactivated successfully. Dec 13 01:37:01.087551 containerd[1475]: time="2024-12-13T01:37:01.087512651Z" level=info msg="StartContainer for \"29917e1be930ad90795a3ef39017f5282caa8a30f03d2971e26dcb86cf319f0f\" returns successfully" Dec 13 01:37:01.108232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29917e1be930ad90795a3ef39017f5282caa8a30f03d2971e26dcb86cf319f0f-rootfs.mount: Deactivated successfully. 
Dec 13 01:37:01.170787 kubelet[2465]: I1213 01:37:01.170747 2465 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 01:37:01.205991 systemd[1]: Created slice kubepods-burstable-pod923b5899_072e_4cee_a0d5_4c246a06ee27.slice - libcontainer container kubepods-burstable-pod923b5899_072e_4cee_a0d5_4c246a06ee27.slice. Dec 13 01:37:01.213166 systemd[1]: Created slice kubepods-burstable-pod8cbbd108_8ebd_4327_af5a_0317df1cdc81.slice - libcontainer container kubepods-burstable-pod8cbbd108_8ebd_4327_af5a_0317df1cdc81.slice. Dec 13 01:37:01.245659 kubelet[2465]: I1213 01:37:01.245611 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/923b5899-072e-4cee-a0d5-4c246a06ee27-config-volume\") pod \"coredns-6f6b679f8f-8cvvl\" (UID: \"923b5899-072e-4cee-a0d5-4c246a06ee27\") " pod="kube-system/coredns-6f6b679f8f-8cvvl" Dec 13 01:37:01.245659 kubelet[2465]: I1213 01:37:01.245666 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz68h\" (UniqueName: \"kubernetes.io/projected/923b5899-072e-4cee-a0d5-4c246a06ee27-kube-api-access-dz68h\") pod \"coredns-6f6b679f8f-8cvvl\" (UID: \"923b5899-072e-4cee-a0d5-4c246a06ee27\") " pod="kube-system/coredns-6f6b679f8f-8cvvl" Dec 13 01:37:01.246249 kubelet[2465]: I1213 01:37:01.245687 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cbbd108-8ebd-4327-af5a-0317df1cdc81-config-volume\") pod \"coredns-6f6b679f8f-rhcz2\" (UID: \"8cbbd108-8ebd-4327-af5a-0317df1cdc81\") " pod="kube-system/coredns-6f6b679f8f-rhcz2" Dec 13 01:37:01.246249 kubelet[2465]: I1213 01:37:01.245704 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7k4c\" (UniqueName: 
\"kubernetes.io/projected/8cbbd108-8ebd-4327-af5a-0317df1cdc81-kube-api-access-x7k4c\") pod \"coredns-6f6b679f8f-rhcz2\" (UID: \"8cbbd108-8ebd-4327-af5a-0317df1cdc81\") " pod="kube-system/coredns-6f6b679f8f-rhcz2" Dec 13 01:37:01.663401 containerd[1475]: time="2024-12-13T01:37:01.663316509Z" level=info msg="shim disconnected" id=29917e1be930ad90795a3ef39017f5282caa8a30f03d2971e26dcb86cf319f0f namespace=k8s.io Dec 13 01:37:01.663401 containerd[1475]: time="2024-12-13T01:37:01.663391821Z" level=warning msg="cleaning up after shim disconnected" id=29917e1be930ad90795a3ef39017f5282caa8a30f03d2971e26dcb86cf319f0f namespace=k8s.io Dec 13 01:37:01.663401 containerd[1475]: time="2024-12-13T01:37:01.663402592Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:37:01.810573 kubelet[2465]: E1213 01:37:01.810512 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:37:01.811231 containerd[1475]: time="2024-12-13T01:37:01.811165129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8cvvl,Uid:923b5899-072e-4cee-a0d5-4c246a06ee27,Namespace:kube-system,Attempt:0,}" Dec 13 01:37:01.814283 kubelet[2465]: E1213 01:37:01.814228 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:37:01.815992 kubelet[2465]: E1213 01:37:01.815963 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:37:01.816512 containerd[1475]: time="2024-12-13T01:37:01.816433042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhcz2,Uid:8cbbd108-8ebd-4327-af5a-0317df1cdc81,Namespace:kube-system,Attempt:0,}" Dec 13 01:37:01.816828 containerd[1475]: 
time="2024-12-13T01:37:01.816704307Z" level=info msg="CreateContainer within sandbox \"2195892ed1bdf9591bd6386f6c480cd3120dfeadd2c19e79fd9538611502221a\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 01:37:01.872418 kubelet[2465]: E1213 01:37:01.872379 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:37:02.219068 containerd[1475]: time="2024-12-13T01:37:02.218999038Z" level=info msg="CreateContainer within sandbox \"2195892ed1bdf9591bd6386f6c480cd3120dfeadd2c19e79fd9538611502221a\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"05308358a39a1bacec135febf2707d78951b770cbd48bdbcd7f621db7498f479\"" Dec 13 01:37:02.221639 containerd[1475]: time="2024-12-13T01:37:02.221562187Z" level=info msg="StartContainer for \"05308358a39a1bacec135febf2707d78951b770cbd48bdbcd7f621db7498f479\"" Dec 13 01:37:02.260928 containerd[1475]: time="2024-12-13T01:37:02.260848696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhcz2,Uid:8cbbd108-8ebd-4327-af5a-0317df1cdc81,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9fa3d884689a4a111c3c5e49f34ea9e29e92e5be850cf9b3a33005c9a26073d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:37:02.261138 kubelet[2465]: E1213 01:37:02.261103 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9fa3d884689a4a111c3c5e49f34ea9e29e92e5be850cf9b3a33005c9a26073d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:37:02.264292 kubelet[2465]: E1213 01:37:02.261206 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"b9fa3d884689a4a111c3c5e49f34ea9e29e92e5be850cf9b3a33005c9a26073d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-rhcz2" Dec 13 01:37:02.264292 kubelet[2465]: E1213 01:37:02.261229 2465 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9fa3d884689a4a111c3c5e49f34ea9e29e92e5be850cf9b3a33005c9a26073d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-rhcz2" Dec 13 01:37:02.264292 kubelet[2465]: E1213 01:37:02.261275 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-rhcz2_kube-system(8cbbd108-8ebd-4327-af5a-0317df1cdc81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-rhcz2_kube-system(8cbbd108-8ebd-4327-af5a-0317df1cdc81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9fa3d884689a4a111c3c5e49f34ea9e29e92e5be850cf9b3a33005c9a26073d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-rhcz2" podUID="8cbbd108-8ebd-4327-af5a-0317df1cdc81" Dec 13 01:37:02.264292 kubelet[2465]: E1213 01:37:02.263711 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26675a3b07ba60e00515683f0d77d4cb931ec624493aa725ccc6eef819deaf0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:37:02.264191 systemd[1]: Started cri-containerd-05308358a39a1bacec135febf2707d78951b770cbd48bdbcd7f621db7498f479.scope - 
libcontainer container 05308358a39a1bacec135febf2707d78951b770cbd48bdbcd7f621db7498f479. Dec 13 01:37:02.264881 containerd[1475]: time="2024-12-13T01:37:02.263302537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8cvvl,Uid:923b5899-072e-4cee-a0d5-4c246a06ee27,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c26675a3b07ba60e00515683f0d77d4cb931ec624493aa725ccc6eef819deaf0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:37:02.264952 kubelet[2465]: E1213 01:37:02.263802 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26675a3b07ba60e00515683f0d77d4cb931ec624493aa725ccc6eef819deaf0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-8cvvl" Dec 13 01:37:02.264952 kubelet[2465]: E1213 01:37:02.263868 2465 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26675a3b07ba60e00515683f0d77d4cb931ec624493aa725ccc6eef819deaf0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-8cvvl" Dec 13 01:37:02.264952 kubelet[2465]: E1213 01:37:02.263924 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8cvvl_kube-system(923b5899-072e-4cee-a0d5-4c246a06ee27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8cvvl_kube-system(923b5899-072e-4cee-a0d5-4c246a06ee27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c26675a3b07ba60e00515683f0d77d4cb931ec624493aa725ccc6eef819deaf0\\\": plugin 
type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-8cvvl" podUID="923b5899-072e-4cee-a0d5-4c246a06ee27" Dec 13 01:37:02.300213 containerd[1475]: time="2024-12-13T01:37:02.300129143Z" level=info msg="StartContainer for \"05308358a39a1bacec135febf2707d78951b770cbd48bdbcd7f621db7498f479\" returns successfully" Dec 13 01:37:02.817979 kubelet[2465]: E1213 01:37:02.817934 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:37:02.969449 kubelet[2465]: I1213 01:37:02.969371 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-vh6fh" podStartSLOduration=2.9309962880000002 podStartE2EDuration="9.969343017s" podCreationTimestamp="2024-12-13 01:36:53 +0000 UTC" firstStartedPulling="2024-12-13 01:36:53.971277308 +0000 UTC m=+5.373906333" lastFinishedPulling="2024-12-13 01:37:01.009624037 +0000 UTC m=+12.412253062" observedRunningTime="2024-12-13 01:37:02.96917058 +0000 UTC m=+14.371799605" watchObservedRunningTime="2024-12-13 01:37:02.969343017 +0000 UTC m=+14.371972042" Dec 13 01:37:03.023333 systemd[1]: run-netns-cni\x2d91fa7aaf\x2daf5f\x2d922d\x2dd9fb\x2dfa9bf9dce2f9.mount: Deactivated successfully. Dec 13 01:37:03.023472 systemd[1]: run-netns-cni\x2d8f4542b4\x2db2f1\x2d67d7\x2d6cf3\x2d3c3d2e957996.mount: Deactivated successfully. Dec 13 01:37:03.023569 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c26675a3b07ba60e00515683f0d77d4cb931ec624493aa725ccc6eef819deaf0-shm.mount: Deactivated successfully. Dec 13 01:37:03.023677 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9fa3d884689a4a111c3c5e49f34ea9e29e92e5be850cf9b3a33005c9a26073d-shm.mount: Deactivated successfully. 
Dec 13 01:37:03.408025 systemd-networkd[1409]: flannel.1: Link UP
Dec 13 01:37:03.408041 systemd-networkd[1409]: flannel.1: Gained carrier
Dec 13 01:37:03.820164 kubelet[2465]: E1213 01:37:03.819981 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:04.465063 systemd-networkd[1409]: flannel.1: Gained IPv6LL
Dec 13 01:37:12.686960 kubelet[2465]: E1213 01:37:12.686776 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:12.687897 containerd[1475]: time="2024-12-13T01:37:12.687403980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8cvvl,Uid:923b5899-072e-4cee-a0d5-4c246a06ee27,Namespace:kube-system,Attempt:0,}"
Dec 13 01:37:13.607604 systemd-networkd[1409]: cni0: Link UP
Dec 13 01:37:13.608171 systemd-networkd[1409]: cni0: Gained carrier
Dec 13 01:37:13.612562 systemd-networkd[1409]: cni0: Lost carrier
Dec 13 01:37:13.618766 systemd-networkd[1409]: veth13607f2b: Link UP
Dec 13 01:37:13.641047 kernel: cni0: port 1(veth13607f2b) entered blocking state
Dec 13 01:37:13.641170 kernel: cni0: port 1(veth13607f2b) entered disabled state
Dec 13 01:37:13.641200 kernel: veth13607f2b: entered allmulticast mode
Dec 13 01:37:13.642695 kernel: veth13607f2b: entered promiscuous mode
Dec 13 01:37:13.642734 kernel: cni0: port 1(veth13607f2b) entered blocking state
Dec 13 01:37:13.642748 kernel: cni0: port 1(veth13607f2b) entered forwarding state
Dec 13 01:37:13.644722 kernel: cni0: port 1(veth13607f2b) entered disabled state
Dec 13 01:37:13.722893 kernel: cni0: port 1(veth13607f2b) entered blocking state
Dec 13 01:37:13.722980 kernel: cni0: port 1(veth13607f2b) entered forwarding state
Dec 13 01:37:13.722809 systemd-networkd[1409]: veth13607f2b: Gained carrier
Dec 13 01:37:13.723397 systemd-networkd[1409]: cni0: Gained carrier
Dec 13 01:37:13.725630 containerd[1475]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001e938), "name":"cbr0", "type":"bridge"}
Dec 13 01:37:13.725630 containerd[1475]: delegateAdd: netconf sent to delegate plugin:
Dec 13 01:37:13.847616 containerd[1475]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:37:13.847495540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:37:13.847616 containerd[1475]: time="2024-12-13T01:37:13.847559050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:37:13.847616 containerd[1475]: time="2024-12-13T01:37:13.847571043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:37:13.867936 containerd[1475]: time="2024-12-13T01:37:13.847663377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:37:13.894025 systemd[1]: Started cri-containerd-2ba6b44dac9e8d5d652eb4cbbb460c0c2d97f72dd809e6f23826e30a5c85baa8.scope - libcontainer container 2ba6b44dac9e8d5d652eb4cbbb460c0c2d97f72dd809e6f23826e30a5c85baa8.
Dec 13 01:37:13.908225 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:37:13.934510 containerd[1475]: time="2024-12-13T01:37:13.934468428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8cvvl,Uid:923b5899-072e-4cee-a0d5-4c246a06ee27,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ba6b44dac9e8d5d652eb4cbbb460c0c2d97f72dd809e6f23826e30a5c85baa8\""
Dec 13 01:37:13.935225 kubelet[2465]: E1213 01:37:13.935195 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:13.937243 containerd[1475]: time="2024-12-13T01:37:13.937215217Z" level=info msg="CreateContainer within sandbox \"2ba6b44dac9e8d5d652eb4cbbb460c0c2d97f72dd809e6f23826e30a5c85baa8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:37:14.370678 containerd[1475]: time="2024-12-13T01:37:14.370573960Z" level=info msg="CreateContainer within sandbox \"2ba6b44dac9e8d5d652eb4cbbb460c0c2d97f72dd809e6f23826e30a5c85baa8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e9fa5edbc9ea6e1b0a914403463bf798f98fa3b29298d7e1ecff9ae62659cca\""
Dec 13 01:37:14.371463 containerd[1475]: time="2024-12-13T01:37:14.371403935Z" level=info msg="StartContainer for \"0e9fa5edbc9ea6e1b0a914403463bf798f98fa3b29298d7e1ecff9ae62659cca\""
Dec 13 01:37:14.411148 systemd[1]: Started cri-containerd-0e9fa5edbc9ea6e1b0a914403463bf798f98fa3b29298d7e1ecff9ae62659cca.scope - libcontainer container 0e9fa5edbc9ea6e1b0a914403463bf798f98fa3b29298d7e1ecff9ae62659cca.
Dec 13 01:37:14.521679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3437473165.mount: Deactivated successfully.
Dec 13 01:37:14.567273 containerd[1475]: time="2024-12-13T01:37:14.567182190Z" level=info msg="StartContainer for \"0e9fa5edbc9ea6e1b0a914403463bf798f98fa3b29298d7e1ecff9ae62659cca\" returns successfully"
Dec 13 01:37:14.686851 kubelet[2465]: E1213 01:37:14.686789 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:14.687392 containerd[1475]: time="2024-12-13T01:37:14.687355134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhcz2,Uid:8cbbd108-8ebd-4327-af5a-0317df1cdc81,Namespace:kube-system,Attempt:0,}"
Dec 13 01:37:14.830271 systemd-networkd[1409]: veth0e8e84c4: Link UP
Dec 13 01:37:14.831976 kernel: cni0: port 2(veth0e8e84c4) entered blocking state
Dec 13 01:37:14.832023 kernel: cni0: port 2(veth0e8e84c4) entered disabled state
Dec 13 01:37:14.837050 kernel: veth0e8e84c4: entered allmulticast mode
Dec 13 01:37:14.837127 kernel: veth0e8e84c4: entered promiscuous mode
Dec 13 01:37:14.844338 kernel: cni0: port 2(veth0e8e84c4) entered blocking state
Dec 13 01:37:14.844450 kernel: cni0: port 2(veth0e8e84c4) entered forwarding state
Dec 13 01:37:14.844432 systemd-networkd[1409]: veth0e8e84c4: Gained carrier
Dec 13 01:37:14.846109 kubelet[2465]: E1213 01:37:14.845874 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:14.849878 containerd[1475]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00011c8e8), "name":"cbr0", "type":"bridge"}
Dec 13 01:37:14.849878 containerd[1475]: delegateAdd: netconf sent to delegate plugin:
Dec 13 01:37:14.883331 containerd[1475]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:37:14.882464107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:37:14.883331 containerd[1475]: time="2024-12-13T01:37:14.883305463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:37:14.883518 containerd[1475]: time="2024-12-13T01:37:14.883325020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:37:14.883518 containerd[1475]: time="2024-12-13T01:37:14.883440598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:37:14.910065 systemd[1]: Started cri-containerd-10456a5cc792bdeda17e546338e4c408035a3f693b747ef1523ce12653d7b3d1.scope - libcontainer container 10456a5cc792bdeda17e546338e4c408035a3f693b747ef1523ce12653d7b3d1.
Dec 13 01:37:14.926063 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:37:14.958693 containerd[1475]: time="2024-12-13T01:37:14.958509908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhcz2,Uid:8cbbd108-8ebd-4327-af5a-0317df1cdc81,Namespace:kube-system,Attempt:0,} returns sandbox id \"10456a5cc792bdeda17e546338e4c408035a3f693b747ef1523ce12653d7b3d1\""
Dec 13 01:37:14.959903 kubelet[2465]: E1213 01:37:14.959879 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:14.961978 containerd[1475]: time="2024-12-13T01:37:14.961923040Z" level=info msg="CreateContainer within sandbox \"10456a5cc792bdeda17e546338e4c408035a3f693b747ef1523ce12653d7b3d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:37:15.030158 kubelet[2465]: I1213 01:37:15.030050 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8cvvl" podStartSLOduration=22.030026825 podStartE2EDuration="22.030026825s" podCreationTimestamp="2024-12-13 01:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:37:14.947070388 +0000 UTC m=+26.349699413" watchObservedRunningTime="2024-12-13 01:37:15.030026825 +0000 UTC m=+26.432655851"
Dec 13 01:37:15.144611 containerd[1475]: time="2024-12-13T01:37:15.144529257Z" level=info msg="CreateContainer within sandbox \"10456a5cc792bdeda17e546338e4c408035a3f693b747ef1523ce12653d7b3d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93f5283e3a90a0107bf6db9724ccf3a1e6c2fc256e27eab204b18bb8d30eb6c4\""
Dec 13 01:37:15.146577 containerd[1475]: time="2024-12-13T01:37:15.145360432Z" level=info msg="StartContainer for \"93f5283e3a90a0107bf6db9724ccf3a1e6c2fc256e27eab204b18bb8d30eb6c4\""
Dec 13 01:37:15.183024 systemd[1]: Started cri-containerd-93f5283e3a90a0107bf6db9724ccf3a1e6c2fc256e27eab204b18bb8d30eb6c4.scope - libcontainer container 93f5283e3a90a0107bf6db9724ccf3a1e6c2fc256e27eab204b18bb8d30eb6c4.
Dec 13 01:37:15.218296 containerd[1475]: time="2024-12-13T01:37:15.218113765Z" level=info msg="StartContainer for \"93f5283e3a90a0107bf6db9724ccf3a1e6c2fc256e27eab204b18bb8d30eb6c4\" returns successfully"
Dec 13 01:37:15.281197 systemd-networkd[1409]: cni0: Gained IPv6LL
Dec 13 01:37:15.665018 systemd-networkd[1409]: veth13607f2b: Gained IPv6LL
Dec 13 01:37:15.852335 kubelet[2465]: E1213 01:37:15.852278 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:15.852335 kubelet[2465]: E1213 01:37:15.852318 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:15.956208 kubelet[2465]: I1213 01:37:15.956040 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rhcz2" podStartSLOduration=22.956019195 podStartE2EDuration="22.956019195s" podCreationTimestamp="2024-12-13 01:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:37:15.955191075 +0000 UTC m=+27.357820110" watchObservedRunningTime="2024-12-13 01:37:15.956019195 +0000 UTC m=+27.358648220"
Dec 13 01:37:16.113222 systemd-networkd[1409]: veth0e8e84c4: Gained IPv6LL
Dec 13 01:37:16.618803 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:37616.service - OpenSSH per-connection server daemon (10.0.0.1:37616).
Dec 13 01:37:16.663877 sshd[3399]: Accepted publickey for core from 10.0.0.1 port 37616 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:16.666651 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:16.674952 systemd-logind[1459]: New session 6 of user core.
Dec 13 01:37:16.682055 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:37:16.854614 kubelet[2465]: E1213 01:37:16.854555 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:16.882934 sshd[3399]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:16.888439 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:37616.service: Deactivated successfully.
Dec 13 01:37:16.891441 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:37:16.892341 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:37:16.893697 systemd-logind[1459]: Removed session 6.
Dec 13 01:37:21.816685 kubelet[2465]: E1213 01:37:21.816629 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:21.863956 kubelet[2465]: E1213 01:37:21.863912 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:21.897686 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:37620.service - OpenSSH per-connection server daemon (10.0.0.1:37620).
Dec 13 01:37:21.942407 sshd[3443]: Accepted publickey for core from 10.0.0.1 port 37620 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:21.944565 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:21.949325 systemd-logind[1459]: New session 7 of user core.
Dec 13 01:37:21.965211 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:37:22.086471 sshd[3443]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:22.091441 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:37620.service: Deactivated successfully.
Dec 13 01:37:22.094015 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:37:22.094760 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:37:22.096047 systemd-logind[1459]: Removed session 7.
Dec 13 01:37:27.098148 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:47748.service - OpenSSH per-connection server daemon (10.0.0.1:47748).
Dec 13 01:37:27.136662 sshd[3481]: Accepted publickey for core from 10.0.0.1 port 47748 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:27.138450 sshd[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:27.143067 systemd-logind[1459]: New session 8 of user core.
Dec 13 01:37:27.156029 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 01:37:27.314883 sshd[3481]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:27.319910 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:47748.service: Deactivated successfully.
Dec 13 01:37:27.322136 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:37:27.322820 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:37:27.323779 systemd-logind[1459]: Removed session 8.
Dec 13 01:37:32.295481 systemd[1]: Started sshd@8-10.0.0.115:22-10.0.0.1:47756.service - OpenSSH per-connection server daemon (10.0.0.1:47756).
Dec 13 01:37:32.601652 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 47756 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:32.603566 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:32.607982 systemd-logind[1459]: New session 9 of user core.
Dec 13 01:37:32.620030 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:37:32.859729 sshd[3517]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:32.865393 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:47756.service: Deactivated successfully.
Dec 13 01:37:32.867812 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:37:32.868600 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:37:32.869659 systemd-logind[1459]: Removed session 9.
Dec 13 01:37:37.872575 systemd[1]: Started sshd@9-10.0.0.115:22-10.0.0.1:46800.service - OpenSSH per-connection server daemon (10.0.0.1:46800).
Dec 13 01:37:37.913603 sshd[3553]: Accepted publickey for core from 10.0.0.1 port 46800 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:37.915867 sshd[3553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:37.921717 systemd-logind[1459]: New session 10 of user core.
Dec 13 01:37:37.931056 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:37:38.049514 sshd[3553]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:38.059401 systemd[1]: sshd@9-10.0.0.115:22-10.0.0.1:46800.service: Deactivated successfully.
Dec 13 01:37:38.061530 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:37:38.063108 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:37:38.075247 systemd[1]: Started sshd@10-10.0.0.115:22-10.0.0.1:46804.service - OpenSSH per-connection server daemon (10.0.0.1:46804).
Dec 13 01:37:38.076538 systemd-logind[1459]: Removed session 10.
Dec 13 01:37:38.112906 sshd[3568]: Accepted publickey for core from 10.0.0.1 port 46804 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:38.122515 sshd[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:38.130253 systemd-logind[1459]: New session 11 of user core.
Dec 13 01:37:38.142155 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:37:38.351087 sshd[3568]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:38.360128 systemd[1]: sshd@10-10.0.0.115:22-10.0.0.1:46804.service: Deactivated successfully.
Dec 13 01:37:38.362328 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:37:38.364167 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:37:38.374360 systemd[1]: Started sshd@11-10.0.0.115:22-10.0.0.1:46810.service - OpenSSH per-connection server daemon (10.0.0.1:46810).
Dec 13 01:37:38.375870 systemd-logind[1459]: Removed session 11.
Dec 13 01:37:38.408245 sshd[3580]: Accepted publickey for core from 10.0.0.1 port 46810 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:38.409903 sshd[3580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:38.414196 systemd-logind[1459]: New session 12 of user core.
Dec 13 01:37:38.426976 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:37:38.576124 sshd[3580]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:38.581498 systemd[1]: sshd@11-10.0.0.115:22-10.0.0.1:46810.service: Deactivated successfully.
Dec 13 01:37:38.584053 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:37:38.584801 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:37:38.585759 systemd-logind[1459]: Removed session 12.
Dec 13 01:37:43.589211 systemd[1]: Started sshd@12-10.0.0.115:22-10.0.0.1:46812.service - OpenSSH per-connection server daemon (10.0.0.1:46812).
Dec 13 01:37:43.627432 sshd[3622]: Accepted publickey for core from 10.0.0.1 port 46812 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:43.629152 sshd[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:43.633537 systemd-logind[1459]: New session 13 of user core.
Dec 13 01:37:43.647981 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:37:43.758646 sshd[3622]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:43.762788 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:46812.service: Deactivated successfully.
Dec 13 01:37:43.765157 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:37:43.765892 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:37:43.767294 systemd-logind[1459]: Removed session 13.
Dec 13 01:37:48.778350 systemd[1]: Started sshd@13-10.0.0.115:22-10.0.0.1:35946.service - OpenSSH per-connection server daemon (10.0.0.1:35946).
Dec 13 01:37:48.833533 sshd[3675]: Accepted publickey for core from 10.0.0.1 port 35946 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:48.836266 sshd[3675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:48.846830 systemd-logind[1459]: New session 14 of user core.
Dec 13 01:37:48.858376 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:37:49.028457 sshd[3675]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:49.046685 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:35946.service: Deactivated successfully.
Dec 13 01:37:49.049950 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:37:49.054813 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:37:49.070940 systemd[1]: Started sshd@14-10.0.0.115:22-10.0.0.1:35954.service - OpenSSH per-connection server daemon (10.0.0.1:35954).
Dec 13 01:37:49.073040 systemd-logind[1459]: Removed session 14.
Dec 13 01:37:49.116275 sshd[3690]: Accepted publickey for core from 10.0.0.1 port 35954 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:49.116557 sshd[3690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:49.125711 systemd-logind[1459]: New session 15 of user core.
Dec 13 01:37:49.133896 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:37:49.494205 sshd[3690]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:49.508572 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:35954.service: Deactivated successfully.
Dec 13 01:37:49.510731 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:37:49.512712 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:37:49.521234 systemd[1]: Started sshd@15-10.0.0.115:22-10.0.0.1:35964.service - OpenSSH per-connection server daemon (10.0.0.1:35964).
Dec 13 01:37:49.523810 systemd-logind[1459]: Removed session 15.
Dec 13 01:37:49.584131 sshd[3702]: Accepted publickey for core from 10.0.0.1 port 35964 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:49.587075 sshd[3702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:49.603383 systemd-logind[1459]: New session 16 of user core.
Dec 13 01:37:49.617269 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:37:51.770217 sshd[3702]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:51.785706 systemd[1]: sshd@15-10.0.0.115:22-10.0.0.1:35964.service: Deactivated successfully.
Dec 13 01:37:51.799288 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:37:51.803786 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:37:51.823961 systemd[1]: Started sshd@16-10.0.0.115:22-10.0.0.1:35974.service - OpenSSH per-connection server daemon (10.0.0.1:35974).
Dec 13 01:37:51.838457 systemd-logind[1459]: Removed session 16.
Dec 13 01:37:51.896039 sshd[3728]: Accepted publickey for core from 10.0.0.1 port 35974 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:51.900081 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:51.920009 systemd-logind[1459]: New session 17 of user core.
Dec 13 01:37:51.954315 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:37:52.565678 sshd[3728]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:52.599152 systemd[1]: sshd@16-10.0.0.115:22-10.0.0.1:35974.service: Deactivated successfully.
Dec 13 01:37:52.604154 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:37:52.611941 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:37:52.623871 systemd[1]: Started sshd@17-10.0.0.115:22-10.0.0.1:35990.service - OpenSSH per-connection server daemon (10.0.0.1:35990).
Dec 13 01:37:52.627931 systemd-logind[1459]: Removed session 17.
Dec 13 01:37:52.680825 sshd[3741]: Accepted publickey for core from 10.0.0.1 port 35990 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:52.686541 sshd[3741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:52.705495 systemd-logind[1459]: New session 18 of user core.
Dec 13 01:37:52.724220 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:37:52.962203 sshd[3741]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:52.971620 systemd[1]: sshd@17-10.0.0.115:22-10.0.0.1:35990.service: Deactivated successfully.
Dec 13 01:37:52.977047 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:37:52.978161 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:37:52.981678 systemd-logind[1459]: Removed session 18.
Dec 13 01:37:57.997868 systemd[1]: Started sshd@18-10.0.0.115:22-10.0.0.1:43052.service - OpenSSH per-connection server daemon (10.0.0.1:43052).
Dec 13 01:37:58.044873 sshd[3778]: Accepted publickey for core from 10.0.0.1 port 43052 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:58.049583 sshd[3778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:58.071476 systemd-logind[1459]: New session 19 of user core.
Dec 13 01:37:58.082206 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:37:58.316575 sshd[3778]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:58.323368 systemd[1]: sshd@18-10.0.0.115:22-10.0.0.1:43052.service: Deactivated successfully.
Dec 13 01:37:58.328569 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:37:58.331168 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:37:58.342952 systemd-logind[1459]: Removed session 19.
Dec 13 01:38:03.350366 systemd[1]: Started sshd@19-10.0.0.115:22-10.0.0.1:43060.service - OpenSSH per-connection server daemon (10.0.0.1:43060).
Dec 13 01:38:03.386286 sshd[3814]: Accepted publickey for core from 10.0.0.1 port 43060 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:38:03.388754 sshd[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:03.395624 systemd-logind[1459]: New session 20 of user core.
Dec 13 01:38:03.406370 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:38:03.526871 sshd[3814]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:03.532726 systemd[1]: sshd@19-10.0.0.115:22-10.0.0.1:43060.service: Deactivated successfully.
Dec 13 01:38:03.537565 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:38:03.539027 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:38:03.540956 systemd-logind[1459]: Removed session 20.
Dec 13 01:38:04.687096 kubelet[2465]: E1213 01:38:04.686931 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:08.539279 systemd[1]: Started sshd@20-10.0.0.115:22-10.0.0.1:55224.service - OpenSSH per-connection server daemon (10.0.0.1:55224).
Dec 13 01:38:08.579021 sshd[3852]: Accepted publickey for core from 10.0.0.1 port 55224 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:38:08.580916 sshd[3852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:08.585154 systemd-logind[1459]: New session 21 of user core.
Dec 13 01:38:08.594063 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:38:08.735453 sshd[3852]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:08.740579 systemd[1]: sshd@20-10.0.0.115:22-10.0.0.1:55224.service: Deactivated successfully.
Dec 13 01:38:08.743531 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:38:08.744768 systemd-logind[1459]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:38:08.746499 systemd-logind[1459]: Removed session 21.
Dec 13 01:38:09.686275 kubelet[2465]: E1213 01:38:09.686197 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:13.752503 systemd[1]: Started sshd@21-10.0.0.115:22-10.0.0.1:55226.service - OpenSSH per-connection server daemon (10.0.0.1:55226).
Dec 13 01:38:13.797279 sshd[3893]: Accepted publickey for core from 10.0.0.1 port 55226 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:38:13.799670 sshd[3893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:13.805522 systemd-logind[1459]: New session 22 of user core.
Dec 13 01:38:13.815139 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:38:13.927193 sshd[3893]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:13.931513 systemd[1]: sshd@21-10.0.0.115:22-10.0.0.1:55226.service: Deactivated successfully.
Dec 13 01:38:13.933926 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:38:13.934632 systemd-logind[1459]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:38:13.935639 systemd-logind[1459]: Removed session 22.
Dec 13 01:38:14.686501 kubelet[2465]: E1213 01:38:14.686288 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:16.686907 kubelet[2465]: E1213 01:38:16.686735 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:18.687132 kubelet[2465]: E1213 01:38:18.687064 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:18.939035 systemd[1]: Started sshd@22-10.0.0.115:22-10.0.0.1:54402.service - OpenSSH per-connection server daemon (10.0.0.1:54402).
Dec 13 01:38:18.981432 sshd[3928]: Accepted publickey for core from 10.0.0.1 port 54402 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:38:18.983539 sshd[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:18.988610 systemd-logind[1459]: New session 23 of user core.
Dec 13 01:38:18.995993 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:38:19.114042 sshd[3928]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:19.118733 systemd[1]: sshd@22-10.0.0.115:22-10.0.0.1:54402.service: Deactivated successfully.
Dec 13 01:38:19.120944 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:38:19.121775 systemd-logind[1459]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:38:19.122805 systemd-logind[1459]: Removed session 23.