Dec 13 01:16:44.867122 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:16:44.867144 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:16:44.867155 kernel: BIOS-provided physical RAM map:
Dec 13 01:16:44.867162 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:16:44.867168 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:16:44.867174 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:16:44.867182 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 01:16:44.867188 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 01:16:44.867194 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:16:44.867203 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 01:16:44.867209 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:16:44.867215 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:16:44.867222 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 01:16:44.867228 kernel: NX (Execute Disable) protection: active
Dec 13 01:16:44.867236 kernel: APIC: Static calls initialized
Dec 13 01:16:44.867245 kernel: SMBIOS 2.8 present.
Dec 13 01:16:44.867252 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 01:16:44.867258 kernel: Hypervisor detected: KVM
Dec 13 01:16:44.867265 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:16:44.867272 kernel: kvm-clock: using sched offset of 2171928622 cycles
Dec 13 01:16:44.867279 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:16:44.867286 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:16:44.867293 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:16:44.867301 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:16:44.867308 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 01:16:44.867317 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:16:44.867324 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:16:44.867331 kernel: Using GB pages for direct mapping
Dec 13 01:16:44.867338 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:16:44.867346 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 01:16:44.867353 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:44.867360 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:44.867367 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:44.867376 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 01:16:44.867383 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:44.867390 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:44.867397 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:44.867404 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:44.867411 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Dec 13 01:16:44.867418 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Dec 13 01:16:44.867428 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 01:16:44.867438 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Dec 13 01:16:44.867445 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Dec 13 01:16:44.867452 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Dec 13 01:16:44.867459 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Dec 13 01:16:44.867466 kernel: No NUMA configuration found
Dec 13 01:16:44.867474 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 01:16:44.867481 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Dec 13 01:16:44.867490 kernel: Zone ranges:
Dec 13 01:16:44.867497 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:16:44.867505 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 01:16:44.867512 kernel: Normal empty
Dec 13 01:16:44.867519 kernel: Movable zone start for each node
Dec 13 01:16:44.867526 kernel: Early memory node ranges
Dec 13 01:16:44.867534 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:16:44.867541 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 01:16:44.867548 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 01:16:44.867557 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:16:44.867565 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:16:44.867572 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 01:16:44.867579 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:16:44.867586 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:16:44.867593 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:16:44.867601 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:16:44.867608 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:16:44.867615 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:16:44.867625 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:16:44.867632 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:16:44.867639 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:16:44.867647 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:16:44.867654 kernel: TSC deadline timer available
Dec 13 01:16:44.867661 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:16:44.867668 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:16:44.867675 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:16:44.867683 kernel: kvm-guest: setup PV sched yield
Dec 13 01:16:44.867690 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 01:16:44.867699 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:16:44.867707 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:16:44.867714 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:16:44.867722 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 01:16:44.867729 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 01:16:44.867736 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:16:44.867743 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:16:44.867750 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:16:44.867758 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:16:44.867768 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:16:44.867776 kernel: random: crng init done
Dec 13 01:16:44.867783 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:16:44.867802 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:16:44.867809 kernel: Fallback order for Node 0: 0
Dec 13 01:16:44.867817 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Dec 13 01:16:44.867824 kernel: Policy zone: DMA32
Dec 13 01:16:44.867831 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:16:44.867841 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Dec 13 01:16:44.867849 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:16:44.867856 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:16:44.867864 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:16:44.867871 kernel: Dynamic Preempt: voluntary
Dec 13 01:16:44.867878 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:16:44.867886 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:16:44.867894 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:16:44.867901 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:16:44.867911 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:16:44.867918 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:16:44.867926 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:16:44.867933 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:16:44.867940 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:16:44.867947 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:16:44.867954 kernel: Console: colour VGA+ 80x25
Dec 13 01:16:44.867962 kernel: printk: console [ttyS0] enabled
Dec 13 01:16:44.867969 kernel: ACPI: Core revision 20230628
Dec 13 01:16:44.867978 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:16:44.867986 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:16:44.867993 kernel: x2apic enabled
Dec 13 01:16:44.868000 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:16:44.868008 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 01:16:44.868015 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 01:16:44.868023 kernel: kvm-guest: setup PV IPIs
Dec 13 01:16:44.868040 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:16:44.868047 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:16:44.868055 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:16:44.868063 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:16:44.868070 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:16:44.868080 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:16:44.868094 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:16:44.868104 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:16:44.868114 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:16:44.868125 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:16:44.868138 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:16:44.868149 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:16:44.868159 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:16:44.868170 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:16:44.868181 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:16:44.868193 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:16:44.868204 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:16:44.868215 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:16:44.868228 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:16:44.868239 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:16:44.868250 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:16:44.868261 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:16:44.868272 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:16:44.868282 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:16:44.868293 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:16:44.868304 kernel: landlock: Up and running.
Dec 13 01:16:44.868314 kernel: SELinux: Initializing.
Dec 13 01:16:44.868328 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:16:44.868339 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:16:44.868350 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:16:44.868361 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:16:44.868372 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:16:44.868383 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:16:44.868392 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:16:44.868399 kernel: ... version: 0
Dec 13 01:16:44.868407 kernel: ... bit width: 48
Dec 13 01:16:44.868417 kernel: ... generic registers: 6
Dec 13 01:16:44.868424 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:16:44.868432 kernel: ... max period: 00007fffffffffff
Dec 13 01:16:44.868440 kernel: ... fixed-purpose events: 0
Dec 13 01:16:44.868447 kernel: ... event mask: 000000000000003f
Dec 13 01:16:44.868454 kernel: signal: max sigframe size: 1776
Dec 13 01:16:44.868462 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:16:44.868470 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:16:44.868477 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:16:44.868487 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:16:44.868495 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 01:16:44.868502 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:16:44.868510 kernel: smpboot: Max logical packages: 1
Dec 13 01:16:44.868517 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:16:44.868525 kernel: devtmpfs: initialized
Dec 13 01:16:44.868532 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:16:44.868540 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:16:44.868548 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:16:44.868558 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:16:44.868565 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:16:44.868573 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:16:44.868581 kernel: audit: type=2000 audit(1734052604.410:1): state=initialized audit_enabled=0 res=1
Dec 13 01:16:44.868588 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:16:44.868596 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:16:44.868603 kernel: cpuidle: using governor menu
Dec 13 01:16:44.868611 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:16:44.868619 kernel: dca service started, version 1.12.1
Dec 13 01:16:44.868629 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:16:44.868636 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 01:16:44.868644 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:16:44.868652 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:16:44.868660 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:16:44.868667 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:16:44.868675 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:16:44.868683 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:16:44.868690 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:16:44.868700 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:16:44.868707 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:16:44.868715 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:16:44.868723 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:16:44.868730 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:16:44.868738 kernel: ACPI: Interpreter enabled
Dec 13 01:16:44.868745 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:16:44.868753 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:16:44.868761 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:16:44.868771 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:16:44.868778 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:16:44.868786 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:16:44.868991 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:16:44.869128 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:16:44.869250 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:16:44.869260 kernel: PCI host bridge to bus 0000:00
Dec 13 01:16:44.869388 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:16:44.869499 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:16:44.869609 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:16:44.869718 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:16:44.869934 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:16:44.870044 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 01:16:44.870163 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:16:44.870304 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:16:44.870514 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:16:44.870640 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Dec 13 01:16:44.870760 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Dec 13 01:16:44.870895 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Dec 13 01:16:44.871017 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:16:44.871157 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:16:44.871284 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 01:16:44.871403 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Dec 13 01:16:44.871522 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 01:16:44.871650 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:16:44.871881 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 01:16:44.872046 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Dec 13 01:16:44.872217 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 01:16:44.872356 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:16:44.872477 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Dec 13 01:16:44.872596 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Dec 13 01:16:44.872715 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 01:16:44.872852 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Dec 13 01:16:44.872982 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:16:44.873116 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:16:44.873245 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:16:44.873420 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Dec 13 01:16:44.873541 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Dec 13 01:16:44.873668 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:16:44.873788 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 01:16:44.873843 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:16:44.873855 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:16:44.873863 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:16:44.873871 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:16:44.873879 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:16:44.873887 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:16:44.873895 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:16:44.873903 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:16:44.873910 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:16:44.873918 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:16:44.873928 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:16:44.873936 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:16:44.873944 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:16:44.873951 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:16:44.873959 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:16:44.873966 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:16:44.873974 kernel: iommu: Default domain type: Translated
Dec 13 01:16:44.873982 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:16:44.873989 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:16:44.873999 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:16:44.874007 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:16:44.874014 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 01:16:44.874143 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:16:44.874308 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:16:44.874430 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:16:44.874440 kernel: vgaarb: loaded
Dec 13 01:16:44.874448 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:16:44.874459 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:16:44.874467 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:16:44.874475 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:16:44.874483 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:16:44.874491 kernel: pnp: PnP ACPI init
Dec 13 01:16:44.874621 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:16:44.874633 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:16:44.874641 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:16:44.874652 kernel: NET: Registered PF_INET protocol family
Dec 13 01:16:44.874659 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:16:44.874667 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:16:44.874675 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:16:44.874683 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:16:44.874691 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:16:44.874698 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:16:44.874706 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:16:44.874714 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:16:44.874724 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:16:44.874732 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:16:44.874856 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:16:44.874966 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:16:44.875075 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:16:44.875191 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:16:44.875302 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:16:44.875413 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 01:16:44.875427 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:16:44.875435 kernel: Initialise system trusted keyrings
Dec 13 01:16:44.875443 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:16:44.875450 kernel: Key type asymmetric registered
Dec 13 01:16:44.875458 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:16:44.875465 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:16:44.875473 kernel: io scheduler mq-deadline registered
Dec 13 01:16:44.875481 kernel: io scheduler kyber registered
Dec 13 01:16:44.875488 kernel: io scheduler bfq registered
Dec 13 01:16:44.875498 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:16:44.875507 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:16:44.875514 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:16:44.875522 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:16:44.875530 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:16:44.875538 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:16:44.875545 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:16:44.875553 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:16:44.875561 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:16:44.875685 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:16:44.875829 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:16:44.875841 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:16:44.875955 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:16:44 UTC (1734052604)
Dec 13 01:16:44.876067 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:16:44.876077 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:16:44.876085 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:16:44.876101 kernel: Segment Routing with IPv6
Dec 13 01:16:44.876113 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:16:44.876121 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:16:44.876129 kernel: Key type dns_resolver registered
Dec 13 01:16:44.876136 kernel: IPI shorthand broadcast: enabled
Dec 13 01:16:44.876144 kernel: sched_clock: Marking stable (557003040, 105625518)->(709913802, -47285244)
Dec 13 01:16:44.876151 kernel: registered taskstats version 1
Dec 13 01:16:44.876159 kernel: Loading compiled-in X.509 certificates
Dec 13 01:16:44.876167 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:16:44.876175 kernel: Key type .fscrypt registered
Dec 13 01:16:44.876185 kernel: Key type fscrypt-provisioning registered
Dec 13 01:16:44.876193 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:16:44.876200 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:16:44.876208 kernel: ima: No architecture policies found
Dec 13 01:16:44.876215 kernel: clk: Disabling unused clocks
Dec 13 01:16:44.876223 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:16:44.876231 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:16:44.876238 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:16:44.876246 kernel: Run /init as init process
Dec 13 01:16:44.876256 kernel: with arguments:
Dec 13 01:16:44.876264 kernel: /init
Dec 13 01:16:44.876271 kernel: with environment:
Dec 13 01:16:44.876279 kernel: HOME=/
Dec 13 01:16:44.876286 kernel: TERM=linux
Dec 13 01:16:44.876294 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:16:44.876303 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:16:44.876313 systemd[1]: Detected virtualization kvm.
Dec 13 01:16:44.876324 systemd[1]: Detected architecture x86-64.
Dec 13 01:16:44.876332 systemd[1]: Running in initrd.
Dec 13 01:16:44.876339 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:16:44.876347 systemd[1]: Hostname set to .
Dec 13 01:16:44.876356 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:16:44.876364 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:16:44.876372 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:16:44.876380 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:16:44.876391 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:16:44.876411 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:16:44.876423 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:16:44.876432 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:16:44.876442 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:16:44.876453 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:16:44.876462 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:16:44.876470 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:16:44.876478 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:16:44.876487 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:16:44.876495 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:16:44.876503 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:16:44.876512 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:16:44.876522 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:16:44.876531 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:16:44.876539 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:16:44.876548 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:16:44.876556 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:16:44.876564 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:16:44.876573 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:16:44.876581 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:16:44.876589 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:16:44.876600 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:16:44.876608 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:16:44.876617 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:16:44.876625 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:16:44.876634 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:44.876642 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:16:44.876651 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:16:44.876659 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:16:44.876691 systemd-journald[192]: Collecting audit messages is disabled.
Dec 13 01:16:44.876713 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:16:44.876725 systemd-journald[192]: Journal started
Dec 13 01:16:44.876745 systemd-journald[192]: Runtime Journal (/run/log/journal/4c91c1f34772478da813dc1382e9d056) is 6.0M, max 48.4M, 42.3M free.
Dec 13 01:16:44.872174 systemd-modules-load[193]: Inserted module 'overlay'
Dec 13 01:16:44.906140 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:16:44.906167 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:16:44.906181 kernel: Bridge firewalling registered
Dec 13 01:16:44.898614 systemd-modules-load[193]: Inserted module 'br_netfilter'
Dec 13 01:16:44.904925 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:16:44.906717 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:44.912994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:44.914067 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:16:44.916526 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:16:44.925907 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:16:44.929968 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:16:44.931235 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:16:44.931980 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:16:44.933528 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:44.937103 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:16:44.939931 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:16:44.950363 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:16:44.954910 dracut-cmdline[225]: dracut-dracut-053
Dec 13 01:16:44.958488 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:16:44.972334 systemd-resolved[226]: Positive Trust Anchors:
Dec 13 01:16:44.972346 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:16:44.972377 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:16:44.974852 systemd-resolved[226]: Defaulting to hostname 'linux'.
Dec 13 01:16:44.975840 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:16:44.981260 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:16:45.059822 kernel: SCSI subsystem initialized
Dec 13 01:16:45.068816 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:16:45.078815 kernel: iscsi: registered transport (tcp)
Dec 13 01:16:45.099829 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:16:45.099900 kernel: QLogic iSCSI HBA Driver
Dec 13 01:16:45.151169 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:16:45.158921 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:16:45.182846 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:16:45.182873 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:16:45.183881 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:16:45.224816 kernel: raid6: avx2x4 gen() 30669 MB/s
Dec 13 01:16:45.241819 kernel: raid6: avx2x2 gen() 31494 MB/s
Dec 13 01:16:45.258890 kernel: raid6: avx2x1 gen() 26128 MB/s
Dec 13 01:16:45.258914 kernel: raid6: using algorithm avx2x2 gen() 31494 MB/s
Dec 13 01:16:45.276906 kernel: raid6: .... xor() 20007 MB/s, rmw enabled
Dec 13 01:16:45.276933 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:16:45.296818 kernel: xor: automatically using best checksumming function avx
Dec 13 01:16:45.450829 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:16:45.463696 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:16:45.475937 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:16:45.490892 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Dec 13 01:16:45.496361 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:16:45.504933 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:16:45.518067 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
Dec 13 01:16:45.550636 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:16:45.564020 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:16:45.623369 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:16:45.628979 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:16:45.652825 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 01:16:45.686332 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:16:45.686525 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:16:45.686541 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:16:45.686557 kernel: GPT:9289727 != 19775487
Dec 13 01:16:45.686572 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:16:45.686586 kernel: GPT:9289727 != 19775487
Dec 13 01:16:45.686608 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:16:45.686622 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:45.653596 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:16:45.658505 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:16:45.660063 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:16:45.661506 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:16:45.669975 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:16:45.680909 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:16:45.681016 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:45.694140 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:45.697651 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:16:45.698002 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:45.700831 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:45.709103 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:16:45.709149 kernel: libata version 3.00 loaded.
Dec 13 01:16:45.711116 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:45.714006 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:16:45.719570 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:16:45.719599 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464)
Dec 13 01:16:45.721825 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (467)
Dec 13 01:16:45.724839 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 01:16:45.740188 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:16:45.740209 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 01:16:45.740833 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 01:16:45.740984 kernel: scsi host0: ahci
Dec 13 01:16:45.741144 kernel: scsi host1: ahci
Dec 13 01:16:45.741284 kernel: scsi host2: ahci
Dec 13 01:16:45.741435 kernel: scsi host3: ahci
Dec 13 01:16:45.741617 kernel: scsi host4: ahci
Dec 13 01:16:45.741761 kernel: scsi host5: ahci
Dec 13 01:16:45.741953 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Dec 13 01:16:45.741966 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Dec 13 01:16:45.741977 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Dec 13 01:16:45.741987 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Dec 13 01:16:45.742001 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Dec 13 01:16:45.742011 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Dec 13 01:16:45.738573 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:16:45.771311 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:45.788836 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:16:45.794006 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:16:45.798331 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:16:45.798615 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:16:45.814934 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:16:45.817923 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:45.839482 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:45.852275 disk-uuid[553]: Primary Header is updated.
Dec 13 01:16:45.852275 disk-uuid[553]: Secondary Entries is updated.
Dec 13 01:16:45.852275 disk-uuid[553]: Secondary Header is updated.
Dec 13 01:16:45.856831 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:45.860822 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:46.051826 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 01:16:46.051901 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:16:46.051916 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:16:46.052816 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:16:46.053823 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:16:46.054822 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:16:46.055824 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:16:46.055838 kernel: ata3.00: applying bridge limits
Dec 13 01:16:46.056825 kernel: ata3.00: configured for UDMA/100
Dec 13 01:16:46.056912 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:16:46.101823 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:16:46.115697 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:16:46.115714 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:16:46.861817 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:46.862286 disk-uuid[562]: The operation has completed successfully.
Dec 13 01:16:46.886719 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:16:46.886858 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:16:46.913012 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:16:46.916004 sh[589]: Success
Dec 13 01:16:46.927834 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 01:16:46.960032 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:16:46.973255 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:16:46.977673 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:16:46.989745 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:16:46.989772 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:46.989784 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:16:46.989809 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:16:46.990532 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:16:46.995193 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:16:46.996700 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:16:47.009995 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:16:47.011730 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:16:47.020457 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:47.020485 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:47.020496 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:47.023822 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:47.032217 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:16:47.033971 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:47.043430 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:16:47.049974 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:16:47.099324 ignition[681]: Ignition 2.19.0
Dec 13 01:16:47.099336 ignition[681]: Stage: fetch-offline
Dec 13 01:16:47.099373 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:47.099385 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:47.099496 ignition[681]: parsed url from cmdline: ""
Dec 13 01:16:47.099500 ignition[681]: no config URL provided
Dec 13 01:16:47.099506 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:16:47.099515 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:16:47.099544 ignition[681]: op(1): [started] loading QEMU firmware config module
Dec 13 01:16:47.099549 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:16:47.108837 ignition[681]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:16:47.122844 ignition[681]: parsing config with SHA512: 60203c3469aef00ab6989571ab1ffeb54a1d93155ba98a0036b781bb587d6813c5a249d2e762621584a04e6741b2eb0e3a1bd222c40ca9e6bcfb1e085fbcc688
Dec 13 01:16:47.126290 unknown[681]: fetched base config from "system"
Dec 13 01:16:47.126304 unknown[681]: fetched user config from "qemu"
Dec 13 01:16:47.128135 ignition[681]: fetch-offline: fetch-offline passed
Dec 13 01:16:47.126425 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:16:47.128221 ignition[681]: Ignition finished successfully
Dec 13 01:16:47.141975 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:16:47.144223 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:16:47.161857 systemd-networkd[777]: lo: Link UP
Dec 13 01:16:47.161867 systemd-networkd[777]: lo: Gained carrier
Dec 13 01:16:47.163379 systemd-networkd[777]: Enumeration completed
Dec 13 01:16:47.163765 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:47.163769 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:16:47.164454 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:16:47.165031 systemd-networkd[777]: eth0: Link UP
Dec 13 01:16:47.165035 systemd-networkd[777]: eth0: Gained carrier
Dec 13 01:16:47.165042 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:47.173081 systemd[1]: Reached target network.target - Network.
Dec 13 01:16:47.174808 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:16:47.185936 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:16:47.191866 systemd-networkd[777]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:16:47.198972 ignition[781]: Ignition 2.19.0
Dec 13 01:16:47.198983 ignition[781]: Stage: kargs
Dec 13 01:16:47.199157 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:47.199168 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:47.202868 ignition[781]: kargs: kargs passed
Dec 13 01:16:47.202912 ignition[781]: Ignition finished successfully
Dec 13 01:16:47.207163 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:16:47.218174 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:16:47.232532 ignition[790]: Ignition 2.19.0
Dec 13 01:16:47.232542 ignition[790]: Stage: disks
Dec 13 01:16:47.232702 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:47.232714 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:47.233589 ignition[790]: disks: disks passed
Dec 13 01:16:47.233628 ignition[790]: Ignition finished successfully
Dec 13 01:16:47.239299 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:16:47.241395 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:16:47.241854 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:16:47.242177 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:16:47.242554 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:16:47.243038 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:16:47.257940 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:16:47.269698 systemd-fsck[800]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:16:47.275848 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:16:47.290868 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:16:47.377822 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:16:47.378633 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:16:47.379601 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:16:47.389863 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:16:47.391158 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:16:47.392194 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:16:47.392230 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:16:47.392249 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:16:47.399743 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:16:47.402919 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:16:47.405369 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (808)
Dec 13 01:16:47.407403 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:47.407449 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:47.407460 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:47.409809 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:47.411659 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:16:47.439940 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:16:47.444684 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:16:47.448457 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:16:47.452267 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:16:47.534266 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:16:47.540879 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:16:47.543381 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:16:47.549816 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:47.566901 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:16:47.568858 ignition[921]: INFO : Ignition 2.19.0
Dec 13 01:16:47.568858 ignition[921]: INFO : Stage: mount
Dec 13 01:16:47.570528 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:47.570528 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:47.573656 ignition[921]: INFO : mount: mount passed
Dec 13 01:16:47.573656 ignition[921]: INFO : Ignition finished successfully
Dec 13 01:16:47.572228 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:16:47.578952 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:16:47.988165 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:16:48.000928 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:16:48.008372 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (935)
Dec 13 01:16:48.008397 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:48.008409 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:48.009242 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:48.012811 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:48.013712 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:16:48.037907 ignition[952]: INFO : Ignition 2.19.0 Dec 13 01:16:48.037907 ignition[952]: INFO : Stage: files Dec 13 01:16:48.039581 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:48.039581 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:16:48.042330 ignition[952]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:16:48.043639 ignition[952]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:16:48.043639 ignition[952]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:16:48.047267 ignition[952]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:16:48.048722 ignition[952]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:16:48.050112 ignition[952]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:16:48.049184 unknown[952]: wrote ssh authorized keys file for user: core Dec 13 01:16:48.052947 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:16:48.052947 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:16:48.093280 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:16:48.181113 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:16:48.181113 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:16:48.184809 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:16:48.184809 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:16:48.188123 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:16:48.189784 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:16:48.191512 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:16:48.193177 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:16:48.194887 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:16:48.196761 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:16:48.198582 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:16:48.200301 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:16:48.202757 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:16:48.205144 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:16:48.207195 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 01:16:48.688227 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:16:48.949001 systemd-networkd[777]: eth0: Gained IPv6LL Dec 13 01:16:49.075137 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:16:49.075137 ignition[952]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:16:49.078893 ignition[952]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:16:49.081077 ignition[952]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:16:49.081077 ignition[952]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:16:49.081077 ignition[952]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 01:16:49.085369 ignition[952]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:16:49.087263 ignition[952]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:16:49.087263 ignition[952]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 01:16:49.090400 ignition[952]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:16:49.110833 ignition[952]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:16:49.115673 ignition[952]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:16:49.117434 ignition[952]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:16:49.117434 ignition[952]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:16:49.120224 ignition[952]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:16:49.121667 ignition[952]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:16:49.123467 ignition[952]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:16:49.125157 ignition[952]: INFO : files: files passed Dec 13 01:16:49.125900 ignition[952]: INFO : Ignition finished successfully Dec 13 01:16:49.127990 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:16:49.145913 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:16:49.148770 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Dec 13 01:16:49.151465 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:16:49.152434 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:16:49.158648 initrd-setup-root-after-ignition[980]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:16:49.162852 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:16:49.164600 initrd-setup-root-after-ignition[982]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:16:49.166472 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:16:49.169602 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:16:49.170617 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:16:49.179952 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:16:49.203369 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:16:49.203494 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:16:49.204386 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:16:49.207056 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:16:49.207416 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:16:49.208139 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:16:49.226332 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:16:49.239965 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:16:49.250464 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:16:49.251123 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:16:49.253113 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:16:49.253419 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:16:49.253520 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:16:49.258496 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:16:49.259259 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:16:49.259583 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:16:49.260090 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:16:49.260422 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:16:49.260755 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:16:49.261260 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:16:49.261597 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:16:49.262099 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:16:49.262424 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:16:49.262727 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:16:49.262846 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:16:49.263581 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Dec 13 01:16:49.264095 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:16:49.264390 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:16:49.264719 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:16:49.285056 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:16:49.285161 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:16:49.287429 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:16:49.287549 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:16:49.290199 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:16:49.290436 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:16:49.296858 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:16:49.299601 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:16:49.301513 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:16:49.303400 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:16:49.304295 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:16:49.306255 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:16:49.307165 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:16:49.309235 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:16:49.310427 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:16:49.312957 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:16:49.313958 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:16:49.326926 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:16:49.328790 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:16:49.329850 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:16:49.333010 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:16:49.334775 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:16:49.334972 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:16:49.338342 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:16:49.339400 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:16:49.341545 ignition[1007]: INFO : Ignition 2.19.0 Dec 13 01:16:49.341545 ignition[1007]: INFO : Stage: umount Dec 13 01:16:49.341545 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:49.341545 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:16:49.341545 ignition[1007]: INFO : umount: umount passed Dec 13 01:16:49.341545 ignition[1007]: INFO : Ignition finished successfully Dec 13 01:16:49.348200 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:16:49.349195 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:16:49.353133 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:16:49.354243 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:16:49.357267 systemd[1]: Stopped target network.target - Network. 
Dec 13 01:16:49.359205 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:16:49.360200 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:16:49.362222 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:16:49.362275 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:16:49.365055 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:16:49.365109 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:16:49.368029 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:16:49.368082 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:16:49.371226 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:16:49.373512 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:16:49.375869 systemd-networkd[777]: eth0: DHCPv6 lease lost Dec 13 01:16:49.376616 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:16:49.378984 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:16:49.380047 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:16:49.382407 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:16:49.383399 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:16:49.387565 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:16:49.387617 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:16:49.400893 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:16:49.402750 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:16:49.402816 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:16:49.406232 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:16:49.406285 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:16:49.409256 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:16:49.409310 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:16:49.411673 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:16:49.411721 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:16:49.416210 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:16:49.429343 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:16:49.430412 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:16:49.439499 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:16:49.440626 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:16:49.443370 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:16:49.443427 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:16:49.446544 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:16:49.446588 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:16:49.449569 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:16:49.449621 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Dec 13 01:16:49.452709 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:16:49.452762 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:16:49.455669 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:16:49.455719 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:16:49.469903 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:16:49.471021 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:16:49.472206 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:16:49.475636 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:16:49.475690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:49.479169 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:16:49.480312 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:16:49.527383 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:16:49.528397 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:16:49.530382 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:16:49.532407 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:16:49.532457 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:16:49.545909 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:16:49.554304 systemd[1]: Switching root. Dec 13 01:16:49.588232 systemd-journald[192]: Journal stopped Dec 13 01:16:50.777403 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Dec 13 01:16:50.777481 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:16:50.777500 kernel: SELinux: policy capability open_perms=1 Dec 13 01:16:50.777517 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:16:50.777535 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:16:50.777547 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:16:50.777558 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:16:50.777579 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:16:50.777590 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:16:50.777601 kernel: audit: type=1403 audit(1734052610.049:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:16:50.777613 systemd[1]: Successfully loaded SELinux policy in 42.419ms. Dec 13 01:16:50.777634 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.997ms. Dec 13 01:16:50.777647 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:16:50.777659 systemd[1]: Detected virtualization kvm. Dec 13 01:16:50.777671 systemd[1]: Detected architecture x86-64. Dec 13 01:16:50.777685 systemd[1]: Detected first boot. Dec 13 01:16:50.777697 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:16:50.777709 zram_generator::config[1054]: No configuration found. Dec 13 01:16:50.777723 systemd[1]: Populated /etc with preset unit settings. 
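After the switch root, systemd reports how long the SELinux policy load and the initial relabel took (42.419 ms and 11.997 ms in the entries above). A minimal sketch for collecting those figures, assuming only the phrasing visible here:

import re

# Sketch: gather the durations systemd prints on lines such as
# "Successfully loaded SELinux policy in 42.419ms." The pattern is fitted to
# the entries above and is not exhaustive.
DURATION_RE = re.compile(r'systemd\[1\]: (.+?) in (\d+(?:\.\d+)?)(ms|s)\.')

def systemd_durations(journal_text: str):
    results = []
    for action, value, unit in DURATION_RE.findall(journal_text):
        ms = float(value) * (1000.0 if unit == 's' else 1.0)
        results.append((action.strip(), ms))
    return results

sample = ("systemd[1]: Successfully loaded SELinux policy in 42.419ms. "
          "systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.997ms.")
print(systemd_durations(sample))
# [('Successfully loaded SELinux policy', 42.419),
#  ('Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup', 11.997)]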
Dec 13 01:16:50.777735 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:16:50.777746 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:16:50.777758 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:16:50.777771 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:16:50.777784 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:16:50.777812 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:16:50.777824 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:16:50.777836 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:16:50.777848 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:16:50.777860 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:16:50.777871 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:16:50.777883 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:16:50.777895 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:16:50.777908 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:16:50.777922 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:16:50.777934 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:16:50.777946 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:16:50.777958 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:16:50.777970 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:16:50.777982 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:16:50.778000 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:16:50.778012 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:16:50.778026 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:16:50.778038 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:16:50.778053 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:16:50.778076 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:16:50.778090 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:16:50.778105 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:16:50.778122 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:16:50.778138 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:16:50.778153 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:16:50.778164 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:16:50.778176 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:16:50.778188 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Dec 13 01:16:50.778200 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:16:50.778211 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:16:50.778223 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:50.778235 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:16:50.778247 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:16:50.778262 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:16:50.778275 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:16:50.778287 systemd[1]: Reached target machines.target - Containers. Dec 13 01:16:50.778298 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:16:50.778310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:50.778322 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:16:50.778334 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:16:50.778351 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:50.778365 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:16:50.778377 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:50.778389 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:16:50.778400 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:50.778412 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:16:50.778425 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:16:50.778436 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:16:50.778448 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:16:50.778460 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:16:50.778474 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:16:50.778486 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:16:50.778499 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:16:50.778511 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:16:50.778522 kernel: fuse: init (API version 7.39) Dec 13 01:16:50.778534 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:16:50.778546 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:16:50.778558 systemd[1]: Stopped verity-setup.service. Dec 13 01:16:50.778570 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:50.778584 kernel: loop: module loaded Dec 13 01:16:50.778596 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:16:50.778608 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Dec 13 01:16:50.778620 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:16:50.778632 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:16:50.778662 systemd-journald[1114]: Collecting audit messages is disabled. Dec 13 01:16:50.778689 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:16:50.778701 systemd-journald[1114]: Journal started Dec 13 01:16:50.778722 systemd-journald[1114]: Runtime Journal (/run/log/journal/4c91c1f34772478da813dc1382e9d056) is 6.0M, max 48.4M, 42.3M free. Dec 13 01:16:50.543434 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:16:50.561154 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:16:50.561570 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:16:50.780924 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:16:50.782406 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:16:50.783670 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:16:50.785267 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:16:50.785506 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:16:50.787010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:50.787179 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:50.788600 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:50.788773 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:50.790301 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:16:50.790469 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:16:50.791866 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:50.792042 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:50.793416 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:16:50.794820 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:16:50.796340 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:16:50.800130 kernel: ACPI: bus type drm_connector registered Dec 13 01:16:50.800810 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:16:50.801005 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:16:50.814372 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:16:50.836877 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:16:50.839130 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:16:50.840357 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:16:50.840384 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:16:50.842364 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:16:50.844662 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:16:50.846931 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Dec 13 01:16:50.848057 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:50.850884 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:16:50.856116 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:16:50.857640 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:16:50.864004 systemd-journald[1114]: Time spent on flushing to /var/log/journal/4c91c1f34772478da813dc1382e9d056 is 16.527ms for 944 entries. Dec 13 01:16:50.864004 systemd-journald[1114]: System Journal (/var/log/journal/4c91c1f34772478da813dc1382e9d056) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:16:50.926123 systemd-journald[1114]: Received client request to flush runtime journal. Dec 13 01:16:50.926156 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 01:16:50.926178 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:16:50.861970 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:16:50.863135 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:16:50.865109 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:16:50.868034 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:16:50.871095 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:16:50.872497 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:16:50.873832 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:16:50.875329 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:16:50.896032 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:16:50.901856 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:16:50.905424 udevadm[1159]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:16:50.913832 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:16:50.915274 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:16:50.923953 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:16:50.928541 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:16:50.942704 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:16:50.950841 kernel: loop1: detected capacity change from 0 to 140768 Dec 13 01:16:50.950979 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:16:50.953443 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:16:50.954306 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:16:50.979817 kernel: loop2: detected capacity change from 0 to 142488 Dec 13 01:16:50.982589 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
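The journald figure above, 16.527 ms spent flushing 944 entries to /var/log/journal, works out to roughly 17.5 µs per entry; a quick check:

# Arithmetic on the flush statistics reported by systemd-journald above.
flush_ms = 16.527
entries = 944
per_entry_us = flush_ms * 1000.0 / entries
print(f"{per_entry_us:.1f} us per journal entry")  # ~17.5 us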
Dec 13 01:16:50.994087 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:16:51.018592 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Dec 13 01:16:51.018611 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Dec 13 01:16:51.023456 kernel: loop3: detected capacity change from 0 to 210664 Dec 13 01:16:51.025563 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:16:51.035840 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 01:16:51.046833 kernel: loop5: detected capacity change from 0 to 142488 Dec 13 01:16:51.057259 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:16:51.057943 (sd-merge)[1188]: Merged extensions into '/usr'. Dec 13 01:16:51.062439 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:16:51.062456 systemd[1]: Reloading... Dec 13 01:16:51.112818 zram_generator::config[1214]: No configuration found. Dec 13 01:16:51.187087 ldconfig[1145]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:16:51.238536 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:16:51.287372 systemd[1]: Reloading finished in 224 ms. Dec 13 01:16:51.321218 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:16:51.322717 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:16:51.336011 systemd[1]: Starting ensure-sysext.service... Dec 13 01:16:51.338126 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:16:51.346440 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:16:51.346455 systemd[1]: Reloading... Dec 13 01:16:51.359164 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:16:51.359523 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:16:51.360505 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:16:51.360825 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Dec 13 01:16:51.360908 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Dec 13 01:16:51.364454 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:16:51.364465 systemd-tmpfiles[1253]: Skipping /boot Dec 13 01:16:51.375336 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:16:51.375396 systemd-tmpfiles[1253]: Skipping /boot Dec 13 01:16:51.405838 zram_generator::config[1283]: No configuration found. Dec 13 01:16:51.506333 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:16:51.554825 systemd[1]: Reloading finished in 208 ms. Dec 13 01:16:51.575069 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
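The (sd-merge) entries above record the system extensions merged into /usr: containerd-flatcar, docker-flatcar and kubernetes. A small sketch, fitted to that exact phrasing, for recovering the image names from journal text:

import re

# Sketch: pull the sysext names out of the "(sd-merge) ... Using extensions
# 'a', 'b', 'c'." entry shown above. The pattern is an assumption tied to
# this wording.
MERGE_RE = re.compile(r"Using extensions (.+?)\.")

def merged_extensions(journal_text: str):
    match = MERGE_RE.search(journal_text)
    if not match:
        return []
    return [name.strip(" '") for name in match.group(1).split(",")]

sample = "(sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'."
print(merged_extensions(sample))  # ['containerd-flatcar', 'docker-flatcar', 'kubernetes']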
Dec 13 01:16:51.588233 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:16:51.597193 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:16:51.599899 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:16:51.602354 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:16:51.605626 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:16:51.609193 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:16:51.613102 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:16:51.619263 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:51.619432 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:51.620869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:51.624328 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:51.628154 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:51.629460 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:51.635283 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:16:51.636335 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:51.637577 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:51.637777 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:51.639966 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:51.640383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:51.642445 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:51.642718 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:51.647416 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Dec 13 01:16:51.651030 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:16:51.653924 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:16:51.654206 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:16:51.662607 augenrules[1349]: No rules Dec 13 01:16:51.664097 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:16:51.665981 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:16:51.671302 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:16:51.674016 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:16:51.676442 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Dec 13 01:16:51.681507 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:51.681686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:51.692548 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:51.698327 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:51.701073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:51.702812 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:51.705043 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:16:51.707846 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:51.708607 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:16:51.710370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:51.710554 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:51.712314 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:16:51.714038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:51.714204 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:51.728083 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:51.730896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:51.734593 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:16:51.735708 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:51.737042 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:51.738820 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1366) Dec 13 01:16:51.747658 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1366) Dec 13 01:16:51.742936 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:51.747939 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:16:51.753047 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:51.754435 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:51.754494 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:16:51.754517 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:51.755053 systemd[1]: Finished ensure-sysext.service. Dec 13 01:16:51.757055 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 01:16:51.757230 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:51.776957 systemd-resolved[1323]: Positive Trust Anchors: Dec 13 01:16:51.776982 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:16:51.777013 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:16:51.777706 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:16:51.777906 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:16:51.778876 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1370) Dec 13 01:16:51.780775 systemd-resolved[1323]: Defaulting to hostname 'linux'. Dec 13 01:16:51.784093 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:16:51.786931 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:51.787141 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:51.797082 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:16:51.798371 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:16:51.798419 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:16:51.799864 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 01:16:51.808969 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:16:51.811815 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:16:51.813319 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:16:51.818995 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:16:51.826942 systemd-networkd[1381]: lo: Link UP Dec 13 01:16:51.831961 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:16:51.832183 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:16:51.841320 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:16:51.842968 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 01:16:51.826955 systemd-networkd[1381]: lo: Gained carrier Dec 13 01:16:51.828610 systemd-networkd[1381]: Enumeration completed Dec 13 01:16:51.831902 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:16:51.833126 systemd[1]: Reached target network.target - Network. Dec 13 01:16:51.834134 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:16:51.834139 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 01:16:51.836035 systemd-networkd[1381]: eth0: Link UP Dec 13 01:16:51.836039 systemd-networkd[1381]: eth0: Gained carrier Dec 13 01:16:51.836051 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:16:51.843088 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:16:51.852010 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:16:51.852323 systemd-networkd[1381]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:16:51.879351 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:16:52.385229 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:16:52.385272 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:16:52.385313 systemd-timesyncd[1404]: Initial clock synchronization to Fri 2024-12-13 01:16:52.385179 UTC. Dec 13 01:16:52.385659 systemd-resolved[1323]: Clock change detected. Flushing caches. Dec 13 01:16:52.403103 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:16:52.411758 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:16:52.476759 kernel: kvm_amd: TSC scaling supported Dec 13 01:16:52.476815 kernel: kvm_amd: Nested Virtualization enabled Dec 13 01:16:52.476842 kernel: kvm_amd: Nested Paging enabled Dec 13 01:16:52.476861 kernel: kvm_amd: LBR virtualization supported Dec 13 01:16:52.476883 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 01:16:52.476912 kernel: kvm_amd: Virtual GIF supported Dec 13 01:16:52.492709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:52.496761 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:16:52.528175 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:16:52.541878 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:16:52.550338 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:16:52.584171 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:16:52.585749 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:16:52.586941 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:16:52.588146 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:16:52.589424 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:16:52.590917 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:16:52.592280 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:16:52.593538 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:16:52.594801 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:16:52.594828 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:16:52.595753 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:16:52.597514 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
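systemd-networkd brings eth0 up and, as recorded above, again acquires 10.0.0.147/16 with gateway 10.0.0.1 from the DHCP server at 10.0.0.1. Using only those values from the log, the standard-library ipaddress module shows what the lease implies:

import ipaddress

# The address and prefix come straight from the DHCPv4 entry above; nothing
# else is known here about the DHCP server's configuration.
iface = ipaddress.ip_interface("10.0.0.147/16")
print(iface.network)                                       # 10.0.0.0/16
print(iface.network.netmask)                               # 255.255.0.0
print(ipaddress.ip_address("10.0.0.1") in iface.network)   # True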
Dec 13 01:16:52.600341 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:16:52.609397 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:16:52.611889 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:16:52.613484 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:16:52.614654 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:16:52.615646 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:16:52.616635 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:16:52.616665 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:16:52.617613 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:16:52.619662 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:16:52.624780 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:16:52.627881 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:16:52.629082 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:16:52.630782 jq[1429]: false Dec 13 01:16:52.631328 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:16:52.632930 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:16:52.635892 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:16:52.641883 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:16:52.645025 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:16:52.647919 dbus-daemon[1428]: [system] SELinux support is enabled Dec 13 01:16:52.655671 extend-filesystems[1430]: Found loop3 Dec 13 01:16:52.661001 extend-filesystems[1430]: Found loop4 Dec 13 01:16:52.661001 extend-filesystems[1430]: Found loop5 Dec 13 01:16:52.661001 extend-filesystems[1430]: Found sr0 Dec 13 01:16:52.661001 extend-filesystems[1430]: Found vda Dec 13 01:16:52.661001 extend-filesystems[1430]: Found vda1 Dec 13 01:16:52.661001 extend-filesystems[1430]: Found vda2 Dec 13 01:16:52.661001 extend-filesystems[1430]: Found vda3 Dec 13 01:16:52.661001 extend-filesystems[1430]: Found usr Dec 13 01:16:52.661001 extend-filesystems[1430]: Found vda4 Dec 13 01:16:52.661001 extend-filesystems[1430]: Found vda6 Dec 13 01:16:52.661001 extend-filesystems[1430]: Found vda7 Dec 13 01:16:52.661001 extend-filesystems[1430]: Found vda9 Dec 13 01:16:52.661001 extend-filesystems[1430]: Checking size of /dev/vda9 Dec 13 01:16:52.720643 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:16:52.741940 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:16:52.741974 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1374) Dec 13 01:16:52.658861 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:16:52.742089 extend-filesystems[1430]: Resized partition /dev/vda9 Dec 13 01:16:52.659563 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Dec 13 01:16:52.745233 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:16:52.660025 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:16:52.749097 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:16:52.749097 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:16:52.749097 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:16:52.662924 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:16:52.757638 extend-filesystems[1430]: Resized filesystem in /dev/vda9 Dec 13 01:16:52.667936 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:16:52.763798 update_engine[1444]: I20241213 01:16:52.725324 1444 main.cc:92] Flatcar Update Engine starting Dec 13 01:16:52.763798 update_engine[1444]: I20241213 01:16:52.728956 1444 update_check_scheduler.cc:74] Next update check in 11m38s Dec 13 01:16:52.670593 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:16:52.764864 jq[1448]: true Dec 13 01:16:52.678165 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:16:52.678464 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:16:52.765208 tar[1452]: linux-amd64/helm Dec 13 01:16:52.678986 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:16:52.680958 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:16:52.768276 jq[1455]: true Dec 13 01:16:52.681301 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:16:52.684288 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:16:52.768554 bash[1475]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:16:52.684505 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:16:52.694722 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:16:52.701118 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:16:52.701160 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:16:52.705988 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:16:52.706009 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:16:52.729227 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:16:52.740209 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:16:52.753604 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:16:52.754084 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
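extend-filesystems grows /dev/vda9 online from 553472 to 1864699 blocks of 4 KiB, i.e. from about 2.1 GiB to about 7.1 GiB; a quick arithmetic check:

# Size change implied by the resize2fs output above (4 KiB blocks).
BLOCK = 4096
old_blocks, new_blocks = 553472, 1864699

def to_gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {to_gib(old_blocks):.2f} GiB, after: {to_gib(new_blocks):.2f} GiB")
# before: 2.11 GiB, after: 7.11 GiB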
Dec 13 01:16:52.765571 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:16:52.765594 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:16:52.766630 systemd-logind[1438]: New seat seat0. Dec 13 01:16:52.767609 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:16:52.775628 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:16:52.799641 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:16:52.813737 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:16:52.916353 sshd_keygen[1453]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:16:52.932155 containerd[1454]: time="2024-12-13T01:16:52.932055211Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:16:52.939033 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:16:52.945942 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:16:52.953918 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:16:52.954298 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:16:52.957631 containerd[1454]: time="2024-12-13T01:16:52.957584237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:52.962863 containerd[1454]: time="2024-12-13T01:16:52.960802723Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:52.962863 containerd[1454]: time="2024-12-13T01:16:52.962834624Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:16:52.963022 containerd[1454]: time="2024-12-13T01:16:52.962954508Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:16:52.963707 containerd[1454]: time="2024-12-13T01:16:52.963649562Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:16:52.963856 containerd[1454]: time="2024-12-13T01:16:52.963770639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:52.963943 containerd[1454]: time="2024-12-13T01:16:52.963926130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:52.964007 containerd[1454]: time="2024-12-13T01:16:52.963994869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:52.964311 containerd[1454]: time="2024-12-13T01:16:52.964292758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:52.964380 containerd[1454]: time="2024-12-13T01:16:52.964366366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:52.964514 containerd[1454]: time="2024-12-13T01:16:52.964416159Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:52.964514 containerd[1454]: time="2024-12-13T01:16:52.964445655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:52.964654 containerd[1454]: time="2024-12-13T01:16:52.964627766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:52.965016 containerd[1454]: time="2024-12-13T01:16:52.965000475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:52.965243 containerd[1454]: time="2024-12-13T01:16:52.965216931Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:52.965306 containerd[1454]: time="2024-12-13T01:16:52.965292453Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:16:52.965632 containerd[1454]: time="2024-12-13T01:16:52.965423469Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:16:52.965632 containerd[1454]: time="2024-12-13T01:16:52.965479353Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:16:52.967086 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:16:52.972049 containerd[1454]: time="2024-12-13T01:16:52.972030659Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:16:52.974434 containerd[1454]: time="2024-12-13T01:16:52.972123233Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:16:52.974434 containerd[1454]: time="2024-12-13T01:16:52.972142008Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:16:52.974434 containerd[1454]: time="2024-12-13T01:16:52.972156786Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:16:52.974434 containerd[1454]: time="2024-12-13T01:16:52.972170983Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:16:52.974434 containerd[1454]: time="2024-12-13T01:16:52.972289785Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:16:52.974434 containerd[1454]: time="2024-12-13T01:16:52.972492916Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:16:52.974434 containerd[1454]: time="2024-12-13T01:16:52.972587474Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Dec 13 01:16:52.974434 containerd[1454]: time="2024-12-13T01:16:52.972600217Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:16:52.974434 containerd[1454]: time="2024-12-13T01:16:52.972613422Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:16:52.974434 containerd[1454]: time="2024-12-13T01:16:52.972640483Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:16:52.974434 containerd[1454]: time="2024-12-13T01:16:52.972652826Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:16:52.974434 containerd[1454]: time="2024-12-13T01:16:52.972663977Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:16:52.974434 containerd[1454]: time="2024-12-13T01:16:52.972676541Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:16:52.974434 containerd[1454]: time="2024-12-13T01:16:52.972690326Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:16:52.974691 containerd[1454]: time="2024-12-13T01:16:52.972703571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:16:52.974691 containerd[1454]: time="2024-12-13T01:16:52.972720203Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:16:52.974691 containerd[1454]: time="2024-12-13T01:16:52.972748646Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:16:52.974691 containerd[1454]: time="2024-12-13T01:16:52.972768433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974691 containerd[1454]: time="2024-12-13T01:16:52.972780846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974691 containerd[1454]: time="2024-12-13T01:16:52.972792708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974691 containerd[1454]: time="2024-12-13T01:16:52.972804461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974691 containerd[1454]: time="2024-12-13T01:16:52.972816112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974691 containerd[1454]: time="2024-12-13T01:16:52.972835068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974691 containerd[1454]: time="2024-12-13T01:16:52.972847842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974691 containerd[1454]: time="2024-12-13T01:16:52.972861417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974691 containerd[1454]: time="2024-12-13T01:16:52.972873700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Dec 13 01:16:52.974691 containerd[1454]: time="2024-12-13T01:16:52.972887126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974691 containerd[1454]: time="2024-12-13T01:16:52.972908896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974967 containerd[1454]: time="2024-12-13T01:16:52.972921881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974967 containerd[1454]: time="2024-12-13T01:16:52.972932982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974967 containerd[1454]: time="2024-12-13T01:16:52.972949322Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:16:52.974967 containerd[1454]: time="2024-12-13T01:16:52.972966665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974967 containerd[1454]: time="2024-12-13T01:16:52.972977595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974967 containerd[1454]: time="2024-12-13T01:16:52.972987634Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:16:52.974967 containerd[1454]: time="2024-12-13T01:16:52.973034111Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:16:52.974967 containerd[1454]: time="2024-12-13T01:16:52.973049179Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:16:52.974967 containerd[1454]: time="2024-12-13T01:16:52.973059178Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:16:52.974967 containerd[1454]: time="2024-12-13T01:16:52.973070219Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:16:52.974967 containerd[1454]: time="2024-12-13T01:16:52.973079256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:16:52.974967 containerd[1454]: time="2024-12-13T01:16:52.973090717Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:16:52.974967 containerd[1454]: time="2024-12-13T01:16:52.973100686Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:16:52.974967 containerd[1454]: time="2024-12-13T01:16:52.973110835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:16:52.975208 containerd[1454]: time="2024-12-13T01:16:52.973383286Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:16:52.975208 containerd[1454]: time="2024-12-13T01:16:52.973436305Z" level=info msg="Connect containerd service" Dec 13 01:16:52.975208 containerd[1454]: time="2024-12-13T01:16:52.973468766Z" level=info msg="using legacy CRI server" Dec 13 01:16:52.975208 containerd[1454]: time="2024-12-13T01:16:52.973474597Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:16:52.975208 containerd[1454]: time="2024-12-13T01:16:52.973560007Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:16:52.975208 containerd[1454]: time="2024-12-13T01:16:52.974102094Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:16:52.975781 
containerd[1454]: time="2024-12-13T01:16:52.974416844Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:16:52.975904 containerd[1454]: time="2024-12-13T01:16:52.975873175Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:16:52.976788 containerd[1454]: time="2024-12-13T01:16:52.976574070Z" level=info msg="Start subscribing containerd event" Dec 13 01:16:52.976866 containerd[1454]: time="2024-12-13T01:16:52.976847683Z" level=info msg="Start recovering state" Dec 13 01:16:52.976976 containerd[1454]: time="2024-12-13T01:16:52.976962198Z" level=info msg="Start event monitor" Dec 13 01:16:52.977032 containerd[1454]: time="2024-12-13T01:16:52.977020677Z" level=info msg="Start snapshots syncer" Dec 13 01:16:52.977075 containerd[1454]: time="2024-12-13T01:16:52.977064039Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:16:52.977131 containerd[1454]: time="2024-12-13T01:16:52.977119673Z" level=info msg="Start streaming server" Dec 13 01:16:52.977624 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:16:52.979260 containerd[1454]: time="2024-12-13T01:16:52.979112080Z" level=info msg="containerd successfully booted in 0.048627s" Dec 13 01:16:52.979782 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:16:52.990055 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:16:52.992422 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:16:52.993678 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:16:53.131794 tar[1452]: linux-amd64/LICENSE Dec 13 01:16:53.131794 tar[1452]: linux-amd64/README.md Dec 13 01:16:53.147840 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:16:53.173076 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:16:53.175398 systemd[1]: Started sshd@0-10.0.0.147:22-10.0.0.1:56134.service - OpenSSH per-connection server daemon (10.0.0.1:56134). Dec 13 01:16:53.218625 sshd[1519]: Accepted publickey for core from 10.0.0.1 port 56134 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:16:53.220956 sshd[1519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:53.229600 systemd-logind[1438]: New session 1 of user core. Dec 13 01:16:53.230877 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:16:53.247076 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:16:53.259188 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:16:53.275957 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:16:53.280010 (systemd)[1523]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:16:53.385687 systemd[1523]: Queued start job for default target default.target. Dec 13 01:16:53.394948 systemd[1523]: Created slice app.slice - User Application Slice. Dec 13 01:16:53.394971 systemd[1523]: Reached target paths.target - Paths. Dec 13 01:16:53.394984 systemd[1523]: Reached target timers.target - Timers. Dec 13 01:16:53.396485 systemd[1523]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:16:53.409163 systemd[1523]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:16:53.409285 systemd[1523]: Reached target sockets.target - Sockets. 
Dec 13 01:16:53.409302 systemd[1523]: Reached target basic.target - Basic System. Dec 13 01:16:53.409336 systemd[1523]: Reached target default.target - Main User Target. Dec 13 01:16:53.409367 systemd[1523]: Startup finished in 122ms. Dec 13 01:16:53.409934 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:16:53.412653 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:16:53.478792 systemd[1]: Started sshd@1-10.0.0.147:22-10.0.0.1:56136.service - OpenSSH per-connection server daemon (10.0.0.1:56136). Dec 13 01:16:53.513617 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 56136 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:16:53.515113 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:53.518941 systemd-logind[1438]: New session 2 of user core. Dec 13 01:16:53.530849 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:16:53.584719 sshd[1534]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:53.597109 systemd[1]: sshd@1-10.0.0.147:22-10.0.0.1:56136.service: Deactivated successfully. Dec 13 01:16:53.598788 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:16:53.600277 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:16:53.601440 systemd[1]: Started sshd@2-10.0.0.147:22-10.0.0.1:56144.service - OpenSSH per-connection server daemon (10.0.0.1:56144). Dec 13 01:16:53.603400 systemd-logind[1438]: Removed session 2. Dec 13 01:16:53.632589 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 56144 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:16:53.634222 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:53.640656 systemd-logind[1438]: New session 3 of user core. Dec 13 01:16:53.656852 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:16:53.713503 sshd[1541]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:53.717399 systemd[1]: sshd@2-10.0.0.147:22-10.0.0.1:56144.service: Deactivated successfully. Dec 13 01:16:53.719219 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:16:53.720020 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:16:53.720805 systemd-logind[1438]: Removed session 3. Dec 13 01:16:54.124901 systemd-networkd[1381]: eth0: Gained IPv6LL Dec 13 01:16:54.128141 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:16:54.130023 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:16:54.147032 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:16:54.149429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:16:54.151602 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:16:54.169922 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:16:54.170507 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:16:54.172163 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:16:54.177244 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:16:54.754786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:16:54.756430 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:16:54.757624 systemd[1]: Startup finished in 685ms (kernel) + 5.357s (initrd) + 4.245s (userspace) = 10.287s. Dec 13 01:16:54.779139 (kubelet)[1569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:16:55.220617 kubelet[1569]: E1213 01:16:55.220468 1569 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:16:55.224703 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:16:55.224927 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:17:03.724393 systemd[1]: Started sshd@3-10.0.0.147:22-10.0.0.1:59550.service - OpenSSH per-connection server daemon (10.0.0.1:59550). Dec 13 01:17:03.755279 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 59550 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:17:03.756578 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:17:03.760388 systemd-logind[1438]: New session 4 of user core. Dec 13 01:17:03.769958 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:17:03.822702 sshd[1583]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:03.833257 systemd[1]: sshd@3-10.0.0.147:22-10.0.0.1:59550.service: Deactivated successfully. Dec 13 01:17:03.835109 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:17:03.836586 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:17:03.849076 systemd[1]: Started sshd@4-10.0.0.147:22-10.0.0.1:59556.service - OpenSSH per-connection server daemon (10.0.0.1:59556). Dec 13 01:17:03.850162 systemd-logind[1438]: Removed session 4. Dec 13 01:17:03.877651 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 59556 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:17:03.879296 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:17:03.883836 systemd-logind[1438]: New session 5 of user core. Dec 13 01:17:03.901872 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:17:03.951183 sshd[1590]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:03.969483 systemd[1]: sshd@4-10.0.0.147:22-10.0.0.1:59556.service: Deactivated successfully. Dec 13 01:17:03.971185 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:17:03.972745 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:17:03.973990 systemd[1]: Started sshd@5-10.0.0.147:22-10.0.0.1:59560.service - OpenSSH per-connection server daemon (10.0.0.1:59560). Dec 13 01:17:03.974960 systemd-logind[1438]: Removed session 5. Dec 13 01:17:04.008761 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 59560 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:17:04.010344 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:17:04.014548 systemd-logind[1438]: New session 6 of user core. Dec 13 01:17:04.031850 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 13 01:17:04.086308 sshd[1597]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:04.097605 systemd[1]: sshd@5-10.0.0.147:22-10.0.0.1:59560.service: Deactivated successfully. Dec 13 01:17:04.099421 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:17:04.101036 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:17:04.110040 systemd[1]: Started sshd@6-10.0.0.147:22-10.0.0.1:59574.service - OpenSSH per-connection server daemon (10.0.0.1:59574). Dec 13 01:17:04.110985 systemd-logind[1438]: Removed session 6. Dec 13 01:17:04.137161 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 59574 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:17:04.138478 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:17:04.142331 systemd-logind[1438]: New session 7 of user core. Dec 13 01:17:04.156957 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:17:04.213898 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:17:04.214228 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:17:04.710941 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:17:04.711126 (dockerd)[1625]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:17:04.978752 dockerd[1625]: time="2024-12-13T01:17:04.978579955Z" level=info msg="Starting up" Dec 13 01:17:05.082060 dockerd[1625]: time="2024-12-13T01:17:05.082004383Z" level=info msg="Loading containers: start." Dec 13 01:17:05.201760 kernel: Initializing XFRM netlink socket Dec 13 01:17:05.231645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:17:05.238921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:05.282378 systemd-networkd[1381]: docker0: Link UP Dec 13 01:17:05.435623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:05.440256 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:17:05.532740 dockerd[1625]: time="2024-12-13T01:17:05.532470147Z" level=info msg="Loading containers: done." Dec 13 01:17:05.534831 kubelet[1735]: E1213 01:17:05.534776 1735 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:17:05.541712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:17:05.541933 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:17:05.546198 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck52273187-merged.mount: Deactivated successfully. 
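The kubelet failures above, and the restart job systemd schedules for them, trace back to the missing /var/lib/kubelet/config.yaml named in the error; that file is normally written later by kubeadm, so the unit is expected to crash-loop until then. A minimal sketch of the same check (hypothetical, not taken from the log):

# Illustrative sketch: mirror the file check behind the kubelet "exit status 1" above.
# The path is the one named in the logged error; everything else here is assumed.
from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")
if cfg.is_file():
    print("kubelet config present; the unit can start")
else:
    print("kubelet config missing; the unit exits with status 1 and systemd restarts it")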
Dec 13 01:17:05.562472 dockerd[1625]: time="2024-12-13T01:17:05.562421927Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:17:05.562580 dockerd[1625]: time="2024-12-13T01:17:05.562539678Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:17:05.562674 dockerd[1625]: time="2024-12-13T01:17:05.562653251Z" level=info msg="Daemon has completed initialization" Dec 13 01:17:05.597493 dockerd[1625]: time="2024-12-13T01:17:05.597436762Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:17:05.597981 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:17:06.312033 containerd[1454]: time="2024-12-13T01:17:06.311990285Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:17:06.863956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457683979.mount: Deactivated successfully. Dec 13 01:17:07.815303 containerd[1454]: time="2024-12-13T01:17:07.815250375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:07.816143 containerd[1454]: time="2024-12-13T01:17:07.816076103Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Dec 13 01:17:07.818766 containerd[1454]: time="2024-12-13T01:17:07.818170331Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:07.821258 containerd[1454]: time="2024-12-13T01:17:07.821226111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:07.822209 containerd[1454]: time="2024-12-13T01:17:07.822164902Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 1.510130374s" Dec 13 01:17:07.822254 containerd[1454]: time="2024-12-13T01:17:07.822214505Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 01:17:07.843714 containerd[1454]: time="2024-12-13T01:17:07.843661997Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:17:09.842642 containerd[1454]: time="2024-12-13T01:17:09.842569241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:09.843453 containerd[1454]: time="2024-12-13T01:17:09.843395481Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Dec 13 01:17:09.844757 containerd[1454]: time="2024-12-13T01:17:09.844708433Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:09.847817 containerd[1454]: time="2024-12-13T01:17:09.847779202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:09.849122 containerd[1454]: time="2024-12-13T01:17:09.849080873Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.005385443s" Dec 13 01:17:09.849122 containerd[1454]: time="2024-12-13T01:17:09.849112402Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 01:17:09.872627 containerd[1454]: time="2024-12-13T01:17:09.872567529Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:17:10.885921 containerd[1454]: time="2024-12-13T01:17:10.885861296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:10.887100 containerd[1454]: time="2024-12-13T01:17:10.886918409Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Dec 13 01:17:10.888948 containerd[1454]: time="2024-12-13T01:17:10.888860571Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:10.891685 containerd[1454]: time="2024-12-13T01:17:10.891653869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:10.892714 containerd[1454]: time="2024-12-13T01:17:10.892674383Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.020059595s" Dec 13 01:17:10.892775 containerd[1454]: time="2024-12-13T01:17:10.892718837Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 01:17:10.914481 containerd[1454]: time="2024-12-13T01:17:10.914433619Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:17:11.905401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1325481348.mount: Deactivated successfully. 
Dec 13 01:17:12.247559 containerd[1454]: time="2024-12-13T01:17:12.247437397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:12.248261 containerd[1454]: time="2024-12-13T01:17:12.248222329Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 01:17:12.251746 containerd[1454]: time="2024-12-13T01:17:12.251667781Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:12.252478 containerd[1454]: time="2024-12-13T01:17:12.252448194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:12.253204 containerd[1454]: time="2024-12-13T01:17:12.253164057Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.338685874s" Dec 13 01:17:12.253243 containerd[1454]: time="2024-12-13T01:17:12.253201577Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 01:17:12.273642 containerd[1454]: time="2024-12-13T01:17:12.273598809Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:17:12.857388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3718887508.mount: Deactivated successfully. 
Dec 13 01:17:13.792007 containerd[1454]: time="2024-12-13T01:17:13.791946253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:13.792899 containerd[1454]: time="2024-12-13T01:17:13.792853695Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:17:13.794192 containerd[1454]: time="2024-12-13T01:17:13.794154614Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:13.796991 containerd[1454]: time="2024-12-13T01:17:13.796946340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:13.797964 containerd[1454]: time="2024-12-13T01:17:13.797921268Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.524281602s" Dec 13 01:17:13.798013 containerd[1454]: time="2024-12-13T01:17:13.797961123Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:17:13.819309 containerd[1454]: time="2024-12-13T01:17:13.819261729Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:17:14.377904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3428685410.mount: Deactivated successfully. 
Dec 13 01:17:14.384821 containerd[1454]: time="2024-12-13T01:17:14.384779867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:14.385610 containerd[1454]: time="2024-12-13T01:17:14.385558908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:17:14.386804 containerd[1454]: time="2024-12-13T01:17:14.386757496Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:14.388930 containerd[1454]: time="2024-12-13T01:17:14.388901277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:14.389613 containerd[1454]: time="2024-12-13T01:17:14.389582314Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 570.278576ms" Dec 13 01:17:14.389646 containerd[1454]: time="2024-12-13T01:17:14.389611088Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:17:14.413887 containerd[1454]: time="2024-12-13T01:17:14.413823144Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 01:17:15.059510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount931129072.mount: Deactivated successfully. Dec 13 01:17:15.792167 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:17:15.798884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:15.940866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:15.943441 (kubelet)[2007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:17:15.991945 kubelet[2007]: E1213 01:17:15.991820 2007 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:17:15.995155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:17:15.995337 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:17:17.352218 containerd[1454]: time="2024-12-13T01:17:17.352148533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:17.353193 containerd[1454]: time="2024-12-13T01:17:17.353155181Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Dec 13 01:17:17.354551 containerd[1454]: time="2024-12-13T01:17:17.354519439Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:17.358217 containerd[1454]: time="2024-12-13T01:17:17.358182258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:17.359394 containerd[1454]: time="2024-12-13T01:17:17.359354657Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.945485827s" Dec 13 01:17:17.359457 containerd[1454]: time="2024-12-13T01:17:17.359392348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 01:17:19.504554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:19.514007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:19.530182 systemd[1]: Reloading requested from client PID 2098 ('systemctl') (unit session-7.scope)... Dec 13 01:17:19.530198 systemd[1]: Reloading... Dec 13 01:17:19.606752 zram_generator::config[2140]: No configuration found. Dec 13 01:17:19.850941 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:17:19.927237 systemd[1]: Reloading finished in 396 ms. Dec 13 01:17:19.977662 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:17:19.977767 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:17:19.978022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:19.980319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:20.131781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:20.136216 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:17:20.174588 kubelet[2186]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:20.174588 kubelet[2186]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:17:20.174588 kubelet[2186]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:20.175463 kubelet[2186]: I1213 01:17:20.175396 2186 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:17:20.427945 kubelet[2186]: I1213 01:17:20.427844 2186 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:17:20.427945 kubelet[2186]: I1213 01:17:20.427868 2186 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:17:20.428101 kubelet[2186]: I1213 01:17:20.428086 2186 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:17:20.446737 kubelet[2186]: I1213 01:17:20.446681 2186 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:17:20.447229 kubelet[2186]: E1213 01:17:20.447199 2186 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:20.456454 kubelet[2186]: I1213 01:17:20.456412 2186 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:17:20.458007 kubelet[2186]: I1213 01:17:20.457971 2186 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:17:20.458154 kubelet[2186]: I1213 01:17:20.457996 2186 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:17:20.458536 kubelet[2186]: I1213 01:17:20.458514 2186 topology_manager.go:138] "Creating topology manager 
with none policy" Dec 13 01:17:20.458536 kubelet[2186]: I1213 01:17:20.458530 2186 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:17:20.458676 kubelet[2186]: I1213 01:17:20.458657 2186 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:20.459232 kubelet[2186]: I1213 01:17:20.459208 2186 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:17:20.459232 kubelet[2186]: I1213 01:17:20.459225 2186 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:17:20.459281 kubelet[2186]: I1213 01:17:20.459245 2186 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:17:20.459281 kubelet[2186]: I1213 01:17:20.459263 2186 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:17:20.463646 kubelet[2186]: W1213 01:17:20.463543 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:20.463646 kubelet[2186]: E1213 01:17:20.463611 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:20.463810 kubelet[2186]: W1213 01:17:20.463651 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:20.463810 kubelet[2186]: E1213 01:17:20.463704 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:20.464244 kubelet[2186]: I1213 01:17:20.464000 2186 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:17:20.465305 kubelet[2186]: I1213 01:17:20.465279 2186 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:17:20.465352 kubelet[2186]: W1213 01:17:20.465339 2186 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 01:17:20.466039 kubelet[2186]: I1213 01:17:20.466017 2186 server.go:1264] "Started kubelet" Dec 13 01:17:20.468288 kubelet[2186]: I1213 01:17:20.467909 2186 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:17:20.469103 kubelet[2186]: I1213 01:17:20.469077 2186 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:17:20.472710 kubelet[2186]: I1213 01:17:20.472687 2186 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:17:20.473327 kubelet[2186]: I1213 01:17:20.472811 2186 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:17:20.473395 kubelet[2186]: I1213 01:17:20.473368 2186 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:17:20.474052 kubelet[2186]: W1213 01:17:20.473614 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:20.474052 kubelet[2186]: E1213 01:17:20.473651 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:20.474052 kubelet[2186]: E1213 01:17:20.473585 2186 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.147:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.147:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181097b068cdb94b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:17:20.465992011 +0000 UTC m=+0.326122237,LastTimestamp:2024-12-13 01:17:20.465992011 +0000 UTC m=+0.326122237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:17:20.474052 kubelet[2186]: E1213 01:17:20.473699 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="200ms" Dec 13 01:17:20.474052 kubelet[2186]: I1213 01:17:20.473883 2186 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:17:20.474286 kubelet[2186]: I1213 01:17:20.474151 2186 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:17:20.474627 kubelet[2186]: I1213 01:17:20.474603 2186 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:17:20.474627 kubelet[2186]: I1213 01:17:20.474621 2186 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:17:20.474704 kubelet[2186]: I1213 01:17:20.474696 2186 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:17:20.476170 kubelet[2186]: E1213 01:17:20.476049 2186 kubelet.go:1467] "Image garbage collection failed 
once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:17:20.476170 kubelet[2186]: I1213 01:17:20.476065 2186 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:17:20.488937 kubelet[2186]: I1213 01:17:20.488870 2186 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:17:20.490172 kubelet[2186]: I1213 01:17:20.490092 2186 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:17:20.490172 kubelet[2186]: I1213 01:17:20.490125 2186 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:17:20.490172 kubelet[2186]: I1213 01:17:20.490146 2186 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:17:20.490264 kubelet[2186]: E1213 01:17:20.490187 2186 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:17:20.493634 kubelet[2186]: W1213 01:17:20.493546 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:20.493634 kubelet[2186]: E1213 01:17:20.493582 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:20.494007 kubelet[2186]: I1213 01:17:20.493986 2186 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:17:20.494007 kubelet[2186]: I1213 01:17:20.493999 2186 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:17:20.494071 kubelet[2186]: I1213 01:17:20.494014 2186 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:20.574474 kubelet[2186]: I1213 01:17:20.574448 2186 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:20.574715 kubelet[2186]: E1213 01:17:20.574696 2186 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Dec 13 01:17:20.590933 kubelet[2186]: E1213 01:17:20.590898 2186 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:17:20.674978 kubelet[2186]: E1213 01:17:20.674935 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="400ms" Dec 13 01:17:20.776088 kubelet[2186]: I1213 01:17:20.775980 2186 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:20.776197 kubelet[2186]: E1213 01:17:20.776137 2186 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Dec 13 01:17:20.790032 kubelet[2186]: I1213 01:17:20.789991 2186 policy_none.go:49] "None policy: Start" Dec 13 01:17:20.790631 kubelet[2186]: I1213 01:17:20.790561 2186 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 
01:17:20.790631 kubelet[2186]: I1213 01:17:20.790594 2186 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:17:20.791709 kubelet[2186]: E1213 01:17:20.791676 2186 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:17:20.797150 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:17:20.807451 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:17:20.810206 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:17:20.830822 kubelet[2186]: I1213 01:17:20.830741 2186 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:17:20.831510 kubelet[2186]: I1213 01:17:20.831025 2186 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:17:20.831510 kubelet[2186]: I1213 01:17:20.831182 2186 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:17:20.832578 kubelet[2186]: E1213 01:17:20.832542 2186 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:17:21.075777 kubelet[2186]: E1213 01:17:21.075641 2186 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="800ms" Dec 13 01:17:21.177987 kubelet[2186]: I1213 01:17:21.177967 2186 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:21.178337 kubelet[2186]: E1213 01:17:21.178264 2186 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Dec 13 01:17:21.192472 kubelet[2186]: I1213 01:17:21.192394 2186 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:17:21.193291 kubelet[2186]: I1213 01:17:21.193251 2186 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:17:21.194014 kubelet[2186]: I1213 01:17:21.193984 2186 topology_manager.go:215] "Topology Admit Handler" podUID="8fb22e82fe5ddc47ed998825768fc344" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:17:21.200114 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. Dec 13 01:17:21.218171 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. Dec 13 01:17:21.234405 systemd[1]: Created slice kubepods-burstable-pod8fb22e82fe5ddc47ed998825768fc344.slice - libcontainer container kubepods-burstable-pod8fb22e82fe5ddc47ed998825768fc344.slice. 
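The "Failed to ensure lease exists, will retry" entries show the retry interval doubling: 200ms, then 400ms, then 800ms here, and 1.6s further down, because the node lease in kube-node-lease cannot be created while the apiserver is unreachable. A small sketch of that doubling backoff, illustrating only the intervals visible in the log (the cap is an assumption; the kubelet's real limit is not shown here):

    // Illustration of the doubling retry interval observed in the log
    // (200ms -> 400ms -> 800ms -> 1.6s); not the kubelet's actual code.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        const maxInterval = 7 * time.Second // assumed upper bound, not taken from the log
        for attempt := 1; attempt <= 4; attempt++ {
            fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
            if interval*2 <= maxInterval {
                interval *= 2
            }
        }
    }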
Dec 13 01:17:21.276254 kubelet[2186]: I1213 01:17:21.276225 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8fb22e82fe5ddc47ed998825768fc344-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8fb22e82fe5ddc47ed998825768fc344\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:21.276310 kubelet[2186]: I1213 01:17:21.276265 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8fb22e82fe5ddc47ed998825768fc344-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8fb22e82fe5ddc47ed998825768fc344\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:21.276310 kubelet[2186]: I1213 01:17:21.276287 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:21.276368 kubelet[2186]: I1213 01:17:21.276306 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:21.276368 kubelet[2186]: I1213 01:17:21.276327 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:21.276368 kubelet[2186]: I1213 01:17:21.276345 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:17:21.276368 kubelet[2186]: I1213 01:17:21.276364 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8fb22e82fe5ddc47ed998825768fc344-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8fb22e82fe5ddc47ed998825768fc344\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:21.276470 kubelet[2186]: I1213 01:17:21.276382 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:21.276470 kubelet[2186]: I1213 01:17:21.276438 2186 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:21.516127 kubelet[2186]: E1213 01:17:21.516058 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:21.516632 containerd[1454]: time="2024-12-13T01:17:21.516573196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:21.532182 kubelet[2186]: E1213 01:17:21.532152 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:21.532449 containerd[1454]: time="2024-12-13T01:17:21.532420886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:21.536697 kubelet[2186]: E1213 01:17:21.536676 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:21.537005 containerd[1454]: time="2024-12-13T01:17:21.536973864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8fb22e82fe5ddc47ed998825768fc344,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:21.633835 kubelet[2186]: W1213 01:17:21.633798 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:21.633835 kubelet[2186]: E1213 01:17:21.633835 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:21.736196 kubelet[2186]: W1213 01:17:21.736154 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:21.736196 kubelet[2186]: E1213 01:17:21.736192 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:21.810011 kubelet[2186]: W1213 01:17:21.809856 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:21.810011 kubelet[2186]: E1213 01:17:21.809926 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:21.876763 kubelet[2186]: E1213 01:17:21.876697 2186 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="1.6s" Dec 13 01:17:21.979315 kubelet[2186]: I1213 01:17:21.979276 2186 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:21.979670 kubelet[2186]: E1213 01:17:21.979627 2186 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Dec 13 01:17:22.000061 kubelet[2186]: W1213 01:17:22.000007 2186 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:22.000061 kubelet[2186]: E1213 01:17:22.000055 2186 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Dec 13 01:17:22.025653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1356833185.mount: Deactivated successfully. Dec 13 01:17:22.032963 containerd[1454]: time="2024-12-13T01:17:22.032918237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:22.035117 containerd[1454]: time="2024-12-13T01:17:22.035069862Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:17:22.036148 containerd[1454]: time="2024-12-13T01:17:22.036123348Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:22.037199 containerd[1454]: time="2024-12-13T01:17:22.037166143Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:22.038234 containerd[1454]: time="2024-12-13T01:17:22.038193590Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:22.039184 containerd[1454]: time="2024-12-13T01:17:22.039153160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:17:22.040249 containerd[1454]: time="2024-12-13T01:17:22.040206194Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:17:22.042479 containerd[1454]: time="2024-12-13T01:17:22.042451185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:22.044095 containerd[1454]: time="2024-12-13T01:17:22.044064500Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 527.410513ms" Dec 13 01:17:22.044761 containerd[1454]: time="2024-12-13T01:17:22.044723035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.256955ms" Dec 13 01:17:22.045366 containerd[1454]: time="2024-12-13T01:17:22.045340633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 508.303851ms" Dec 13 01:17:22.178855 containerd[1454]: time="2024-12-13T01:17:22.177922822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:22.178855 containerd[1454]: time="2024-12-13T01:17:22.178208408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:22.178855 containerd[1454]: time="2024-12-13T01:17:22.178226963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:22.178855 containerd[1454]: time="2024-12-13T01:17:22.178319366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:22.178855 containerd[1454]: time="2024-12-13T01:17:22.177668355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:22.179641 containerd[1454]: time="2024-12-13T01:17:22.179335421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:22.179641 containerd[1454]: time="2024-12-13T01:17:22.179419719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:22.179641 containerd[1454]: time="2024-12-13T01:17:22.179557328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:22.180995 containerd[1454]: time="2024-12-13T01:17:22.180153596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:22.180995 containerd[1454]: time="2024-12-13T01:17:22.180336579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:22.180995 containerd[1454]: time="2024-12-13T01:17:22.180351807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:22.181212 containerd[1454]: time="2024-12-13T01:17:22.181157098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:22.207850 systemd[1]: Started cri-containerd-528e365181b902e42f197bb5d117ebffc2a0e4ee113a4eb0b7465caf1b2041d8.scope - libcontainer container 528e365181b902e42f197bb5d117ebffc2a0e4ee113a4eb0b7465caf1b2041d8. Dec 13 01:17:22.209531 systemd[1]: Started cri-containerd-6af358fb8549dbc3eab77a6dcba001fe7496fe63b06f95cacfca3587d6b21c63.scope - libcontainer container 6af358fb8549dbc3eab77a6dcba001fe7496fe63b06f95cacfca3587d6b21c63. Dec 13 01:17:22.211094 systemd[1]: Started cri-containerd-e55bb80ff1c98495e0f52cbba698147781b1d094c2a01ed04e2128c5ad27714f.scope - libcontainer container e55bb80ff1c98495e0f52cbba698147781b1d094c2a01ed04e2128c5ad27714f. Dec 13 01:17:22.244117 containerd[1454]: time="2024-12-13T01:17:22.243995051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8fb22e82fe5ddc47ed998825768fc344,Namespace:kube-system,Attempt:0,} returns sandbox id \"528e365181b902e42f197bb5d117ebffc2a0e4ee113a4eb0b7465caf1b2041d8\"" Dec 13 01:17:22.245739 kubelet[2186]: E1213 01:17:22.245603 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:22.250422 containerd[1454]: time="2024-12-13T01:17:22.250205598Z" level=info msg="CreateContainer within sandbox \"528e365181b902e42f197bb5d117ebffc2a0e4ee113a4eb0b7465caf1b2041d8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:17:22.253922 containerd[1454]: time="2024-12-13T01:17:22.253879819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"6af358fb8549dbc3eab77a6dcba001fe7496fe63b06f95cacfca3587d6b21c63\"" Dec 13 01:17:22.255295 kubelet[2186]: E1213 01:17:22.255213 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:22.257029 containerd[1454]: time="2024-12-13T01:17:22.256978029Z" level=info msg="CreateContainer within sandbox \"6af358fb8549dbc3eab77a6dcba001fe7496fe63b06f95cacfca3587d6b21c63\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:17:22.257887 containerd[1454]: time="2024-12-13T01:17:22.257862247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e55bb80ff1c98495e0f52cbba698147781b1d094c2a01ed04e2128c5ad27714f\"" Dec 13 01:17:22.258386 kubelet[2186]: E1213 01:17:22.258266 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:22.260892 containerd[1454]: time="2024-12-13T01:17:22.260855150Z" level=info msg="CreateContainer within sandbox \"e55bb80ff1c98495e0f52cbba698147781b1d094c2a01ed04e2128c5ad27714f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:17:22.285928 containerd[1454]: time="2024-12-13T01:17:22.285891041Z" level=info msg="CreateContainer within sandbox \"6af358fb8549dbc3eab77a6dcba001fe7496fe63b06f95cacfca3587d6b21c63\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2057e494db82bccc7d46aeffb2bfb576823de0624db0b05cac36e905a69190ca\"" Dec 13 
01:17:22.286674 containerd[1454]: time="2024-12-13T01:17:22.286636339Z" level=info msg="StartContainer for \"2057e494db82bccc7d46aeffb2bfb576823de0624db0b05cac36e905a69190ca\"" Dec 13 01:17:22.287570 containerd[1454]: time="2024-12-13T01:17:22.287526007Z" level=info msg="CreateContainer within sandbox \"528e365181b902e42f197bb5d117ebffc2a0e4ee113a4eb0b7465caf1b2041d8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"22d6b7fdff9c897e50cdad8e05ca53ef18cfc7dbe642d0f9f91ec5d1d652d277\"" Dec 13 01:17:22.287996 containerd[1454]: time="2024-12-13T01:17:22.287921699Z" level=info msg="StartContainer for \"22d6b7fdff9c897e50cdad8e05ca53ef18cfc7dbe642d0f9f91ec5d1d652d277\"" Dec 13 01:17:22.291269 containerd[1454]: time="2024-12-13T01:17:22.291233941Z" level=info msg="CreateContainer within sandbox \"e55bb80ff1c98495e0f52cbba698147781b1d094c2a01ed04e2128c5ad27714f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e526371c8fa30ae418f4616d57304607abdbd8f1cf86822e650363b3ae6acfb2\"" Dec 13 01:17:22.293082 containerd[1454]: time="2024-12-13T01:17:22.291607331Z" level=info msg="StartContainer for \"e526371c8fa30ae418f4616d57304607abdbd8f1cf86822e650363b3ae6acfb2\"" Dec 13 01:17:22.319858 systemd[1]: Started cri-containerd-2057e494db82bccc7d46aeffb2bfb576823de0624db0b05cac36e905a69190ca.scope - libcontainer container 2057e494db82bccc7d46aeffb2bfb576823de0624db0b05cac36e905a69190ca. Dec 13 01:17:22.321060 systemd[1]: Started cri-containerd-22d6b7fdff9c897e50cdad8e05ca53ef18cfc7dbe642d0f9f91ec5d1d652d277.scope - libcontainer container 22d6b7fdff9c897e50cdad8e05ca53ef18cfc7dbe642d0f9f91ec5d1d652d277. Dec 13 01:17:22.324099 systemd[1]: Started cri-containerd-e526371c8fa30ae418f4616d57304607abdbd8f1cf86822e650363b3ae6acfb2.scope - libcontainer container e526371c8fa30ae418f4616d57304607abdbd8f1cf86822e650363b3ae6acfb2. 
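The recurring dns.go:153 "Nameserver limits exceeded" errors around these entries mean the host's resolv.conf lists more than three nameservers, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied to pod DNS. A rough sketch of that check, assuming the conventional /etc/resolv.conf location and the three-entry limit; it is an illustration, not the kubelet's own implementation:

    // Counts nameserver lines in resolv.conf and reports which entries would be
    // dropped when more than three are present (the situation the log warns about).
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/resolv.conf") // assumed path
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        const limit = 3
        if len(servers) > limit {
            fmt.Printf("applying %v, omitting %v\n", servers[:limit], servers[limit:])
        }
    }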
Dec 13 01:17:22.443086 containerd[1454]: time="2024-12-13T01:17:22.442089810Z" level=info msg="StartContainer for \"e526371c8fa30ae418f4616d57304607abdbd8f1cf86822e650363b3ae6acfb2\" returns successfully" Dec 13 01:17:22.443086 containerd[1454]: time="2024-12-13T01:17:22.442226496Z" level=info msg="StartContainer for \"22d6b7fdff9c897e50cdad8e05ca53ef18cfc7dbe642d0f9f91ec5d1d652d277\" returns successfully" Dec 13 01:17:22.443086 containerd[1454]: time="2024-12-13T01:17:22.442247545Z" level=info msg="StartContainer for \"2057e494db82bccc7d46aeffb2bfb576823de0624db0b05cac36e905a69190ca\" returns successfully" Dec 13 01:17:22.501567 kubelet[2186]: E1213 01:17:22.501250 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:22.502855 kubelet[2186]: E1213 01:17:22.502701 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:22.505000 kubelet[2186]: E1213 01:17:22.504968 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:23.494262 kubelet[2186]: E1213 01:17:23.494214 2186 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:17:23.506984 kubelet[2186]: E1213 01:17:23.506951 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:23.582103 kubelet[2186]: I1213 01:17:23.581907 2186 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:23.588883 kubelet[2186]: I1213 01:17:23.588859 2186 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:17:23.598115 kubelet[2186]: E1213 01:17:23.597692 2186 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:23.698242 kubelet[2186]: E1213 01:17:23.698195 2186 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:23.798894 kubelet[2186]: E1213 01:17:23.798778 2186 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:23.899138 kubelet[2186]: E1213 01:17:23.899106 2186 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:23.999954 kubelet[2186]: E1213 01:17:23.999938 2186 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:24.100446 kubelet[2186]: E1213 01:17:24.100376 2186 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:24.200934 kubelet[2186]: E1213 01:17:24.200899 2186 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:24.466514 kubelet[2186]: I1213 01:17:24.466408 2186 apiserver.go:52] "Watching apiserver" Dec 13 01:17:24.474166 kubelet[2186]: I1213 01:17:24.474143 2186 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:17:24.514878 
kubelet[2186]: E1213 01:17:24.514844 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:25.508031 kubelet[2186]: E1213 01:17:25.507991 2186 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:25.570252 systemd[1]: Reloading requested from client PID 2463 ('systemctl') (unit session-7.scope)... Dec 13 01:17:25.570268 systemd[1]: Reloading... Dec 13 01:17:25.662755 zram_generator::config[2502]: No configuration found. Dec 13 01:17:25.789074 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:17:25.876879 systemd[1]: Reloading finished in 306 ms. Dec 13 01:17:25.921400 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:25.945317 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:17:25.945635 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:25.957149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:26.112878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:26.118415 (kubelet)[2547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:17:26.161204 kubelet[2547]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:26.161204 kubelet[2547]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:17:26.161204 kubelet[2547]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:26.161637 kubelet[2547]: I1213 01:17:26.161248 2547 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:17:26.165975 kubelet[2547]: I1213 01:17:26.165923 2547 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:17:26.165975 kubelet[2547]: I1213 01:17:26.165961 2547 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:17:26.166198 kubelet[2547]: I1213 01:17:26.166173 2547 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:17:26.167498 kubelet[2547]: I1213 01:17:26.167476 2547 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:17:26.168590 kubelet[2547]: I1213 01:17:26.168544 2547 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:17:26.177291 kubelet[2547]: I1213 01:17:26.177258 2547 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:17:26.177522 kubelet[2547]: I1213 01:17:26.177488 2547 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:17:26.177677 kubelet[2547]: I1213 01:17:26.177515 2547 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:17:26.177774 kubelet[2547]: I1213 01:17:26.177694 2547 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:17:26.177774 kubelet[2547]: I1213 01:17:26.177704 2547 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:17:26.177774 kubelet[2547]: I1213 01:17:26.177769 2547 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:26.177891 kubelet[2547]: I1213 01:17:26.177878 2547 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:17:26.177920 kubelet[2547]: I1213 01:17:26.177894 2547 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:17:26.177920 kubelet[2547]: I1213 01:17:26.177915 2547 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:17:26.177988 kubelet[2547]: I1213 01:17:26.177934 2547 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:17:26.178846 kubelet[2547]: I1213 01:17:26.178652 2547 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:17:26.178969 kubelet[2547]: I1213 01:17:26.178866 2547 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:17:26.180863 kubelet[2547]: I1213 01:17:26.179388 2547 server.go:1264] "Started kubelet" Dec 13 01:17:26.180863 kubelet[2547]: I1213 01:17:26.179902 2547 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:17:26.180863 kubelet[2547]: I1213 01:17:26.180152 2547 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 
01:17:26.180863 kubelet[2547]: I1213 01:17:26.180183 2547 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:17:26.180863 kubelet[2547]: I1213 01:17:26.180531 2547 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:17:26.182859 kubelet[2547]: I1213 01:17:26.182834 2547 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:17:26.185761 kubelet[2547]: I1213 01:17:26.184523 2547 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:17:26.185761 kubelet[2547]: I1213 01:17:26.184623 2547 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:17:26.185761 kubelet[2547]: I1213 01:17:26.184773 2547 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:17:26.191051 kubelet[2547]: I1213 01:17:26.190536 2547 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:17:26.191051 kubelet[2547]: I1213 01:17:26.190624 2547 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:17:26.191611 kubelet[2547]: E1213 01:17:26.191583 2547 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:17:26.192056 kubelet[2547]: I1213 01:17:26.192031 2547 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:17:26.194446 kubelet[2547]: I1213 01:17:26.194401 2547 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:17:26.196650 kubelet[2547]: I1213 01:17:26.196618 2547 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:17:26.196702 kubelet[2547]: I1213 01:17:26.196657 2547 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:17:26.196702 kubelet[2547]: I1213 01:17:26.196683 2547 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:17:26.196824 kubelet[2547]: E1213 01:17:26.196748 2547 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:17:26.222770 kubelet[2547]: I1213 01:17:26.222714 2547 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:17:26.222770 kubelet[2547]: I1213 01:17:26.222745 2547 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:17:26.222770 kubelet[2547]: I1213 01:17:26.222762 2547 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:26.223126 kubelet[2547]: I1213 01:17:26.222877 2547 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:17:26.223126 kubelet[2547]: I1213 01:17:26.222887 2547 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:17:26.223126 kubelet[2547]: I1213 01:17:26.222904 2547 policy_none.go:49] "None policy: Start" Dec 13 01:17:26.223446 kubelet[2547]: I1213 01:17:26.223406 2547 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:17:26.223446 kubelet[2547]: I1213 01:17:26.223426 2547 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:17:26.223565 kubelet[2547]: I1213 01:17:26.223548 2547 state_mem.go:75] "Updated machine memory state" Dec 13 01:17:26.228023 kubelet[2547]: I1213 01:17:26.227986 2547 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:17:26.228258 
kubelet[2547]: I1213 01:17:26.228215 2547 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:17:26.228327 kubelet[2547]: I1213 01:17:26.228309 2547 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:17:26.288615 kubelet[2547]: I1213 01:17:26.288584 2547 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:26.294471 kubelet[2547]: I1213 01:17:26.294443 2547 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:17:26.294545 kubelet[2547]: I1213 01:17:26.294509 2547 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:17:26.297486 kubelet[2547]: I1213 01:17:26.297043 2547 topology_manager.go:215] "Topology Admit Handler" podUID="8fb22e82fe5ddc47ed998825768fc344" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:17:26.297486 kubelet[2547]: I1213 01:17:26.297182 2547 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:17:26.297486 kubelet[2547]: I1213 01:17:26.297230 2547 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:17:26.304623 kubelet[2547]: E1213 01:17:26.304591 2547 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:26.385911 kubelet[2547]: I1213 01:17:26.385819 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8fb22e82fe5ddc47ed998825768fc344-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8fb22e82fe5ddc47ed998825768fc344\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:26.385911 kubelet[2547]: I1213 01:17:26.385848 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:26.385911 kubelet[2547]: I1213 01:17:26.385875 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:26.385911 kubelet[2547]: I1213 01:17:26.385903 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:26.386060 kubelet[2547]: I1213 01:17:26.385964 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:26.386060 kubelet[2547]: I1213 01:17:26.386011 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:17:26.386060 kubelet[2547]: I1213 01:17:26.386029 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8fb22e82fe5ddc47ed998825768fc344-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8fb22e82fe5ddc47ed998825768fc344\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:26.386060 kubelet[2547]: I1213 01:17:26.386046 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8fb22e82fe5ddc47ed998825768fc344-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8fb22e82fe5ddc47ed998825768fc344\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:26.386186 kubelet[2547]: I1213 01:17:26.386102 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:26.605606 kubelet[2547]: E1213 01:17:26.605478 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:26.605606 kubelet[2547]: E1213 01:17:26.605486 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:26.605923 kubelet[2547]: E1213 01:17:26.605754 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:27.178788 kubelet[2547]: I1213 01:17:27.178738 2547 apiserver.go:52] "Watching apiserver" Dec 13 01:17:27.185412 kubelet[2547]: I1213 01:17:27.185341 2547 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:17:27.209055 kubelet[2547]: E1213 01:17:27.208856 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:27.209055 kubelet[2547]: E1213 01:17:27.208986 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:27.214473 kubelet[2547]: E1213 01:17:27.214445 2547 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:27.215072 kubelet[2547]: E1213 01:17:27.214939 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 
01:17:27.215257 sudo[1607]: pam_unix(sudo:session): session closed for user root Dec 13 01:17:27.217568 sshd[1604]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:27.223529 systemd[1]: sshd@6-10.0.0.147:22-10.0.0.1:59574.service: Deactivated successfully. Dec 13 01:17:27.225510 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:17:27.225707 systemd[1]: session-7.scope: Consumed 3.593s CPU time, 195.0M memory peak, 0B memory swap peak. Dec 13 01:17:27.226390 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:17:27.227404 systemd-logind[1438]: Removed session 7. Dec 13 01:17:27.238782 kubelet[2547]: I1213 01:17:27.238717 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.238696349 podStartE2EDuration="3.238696349s" podCreationTimestamp="2024-12-13 01:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:27.232393915 +0000 UTC m=+1.110200591" watchObservedRunningTime="2024-12-13 01:17:27.238696349 +0000 UTC m=+1.116503015" Dec 13 01:17:27.238980 kubelet[2547]: I1213 01:17:27.238953 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.238946775 podStartE2EDuration="1.238946775s" podCreationTimestamp="2024-12-13 01:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:27.238931427 +0000 UTC m=+1.116738093" watchObservedRunningTime="2024-12-13 01:17:27.238946775 +0000 UTC m=+1.116753442" Dec 13 01:17:27.245621 kubelet[2547]: I1213 01:17:27.245582 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.245566213 podStartE2EDuration="1.245566213s" podCreationTimestamp="2024-12-13 01:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:27.24536991 +0000 UTC m=+1.123176576" watchObservedRunningTime="2024-12-13 01:17:27.245566213 +0000 UTC m=+1.123372879" Dec 13 01:17:28.210306 kubelet[2547]: E1213 01:17:28.210263 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:28.402941 kubelet[2547]: E1213 01:17:28.402896 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:32.873286 kubelet[2547]: E1213 01:17:32.873255 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:33.215140 kubelet[2547]: E1213 01:17:33.215002 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:36.133466 kubelet[2547]: E1213 01:17:36.133432 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:36.218827 kubelet[2547]: E1213 
01:17:36.218770 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:38.406823 kubelet[2547]: E1213 01:17:38.406795 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:38.471955 update_engine[1444]: I20241213 01:17:38.471892 1444 update_attempter.cc:509] Updating boot flags... Dec 13 01:17:38.497792 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2622) Dec 13 01:17:38.545908 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2624) Dec 13 01:17:38.590606 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2624) Dec 13 01:17:40.686803 kubelet[2547]: I1213 01:17:40.686768 2547 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:17:40.687237 containerd[1454]: time="2024-12-13T01:17:40.687152017Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:17:40.687560 kubelet[2547]: I1213 01:17:40.687310 2547 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:17:41.297545 kubelet[2547]: I1213 01:17:41.297332 2547 topology_manager.go:215] "Topology Admit Handler" podUID="6dcfddba-268c-4a6b-8578-88a3570632ae" podNamespace="kube-system" podName="kube-proxy-7c5kr" Dec 13 01:17:41.302904 kubelet[2547]: I1213 01:17:41.302740 2547 topology_manager.go:215] "Topology Admit Handler" podUID="3da21b09-6208-469e-9b80-e4a8cb1a9f5e" podNamespace="kube-flannel" podName="kube-flannel-ds-z9nz9" Dec 13 01:17:41.313147 systemd[1]: Created slice kubepods-burstable-pod3da21b09_6208_469e_9b80_e4a8cb1a9f5e.slice - libcontainer container kubepods-burstable-pod3da21b09_6208_469e_9b80_e4a8cb1a9f5e.slice. Dec 13 01:17:41.318174 systemd[1]: Created slice kubepods-besteffort-pod6dcfddba_268c_4a6b_8578_88a3570632ae.slice - libcontainer container kubepods-besteffort-pod6dcfddba_268c_4a6b_8578_88a3570632ae.slice. 
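The kubepods slice names created just above follow directly from the admitted pod UIDs: with the systemd cgroup driver, dashes in the UID become underscores and the pod slice is nested under its QoS class, giving names like kubepods-burstable-pod3da21b09_6208_469e_9b80_e4a8cb1a9f5e.slice. A small sketch that reproduces the names seen in the log (illustrative only, not the kubelet's implementation):

    // Derives the systemd slice name for a pod from its QoS class and UID,
    // matching the slice names visible in the log above.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qosClass, podUID string) string {
        escaped := strings.ReplaceAll(podUID, "-", "_") // systemd escaping of '-'
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
    }

    func main() {
        fmt.Println(podSlice("burstable", "3da21b09-6208-469e-9b80-e4a8cb1a9f5e"))  // kube-flannel-ds-z9nz9
        fmt.Println(podSlice("besteffort", "6dcfddba-268c-4a6b-8578-88a3570632ae")) // kube-proxy-7c5kr
    }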
Dec 13 01:17:41.478180 kubelet[2547]: I1213 01:17:41.478126 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/3da21b09-6208-469e-9b80-e4a8cb1a9f5e-cni-plugin\") pod \"kube-flannel-ds-z9nz9\" (UID: \"3da21b09-6208-469e-9b80-e4a8cb1a9f5e\") " pod="kube-flannel/kube-flannel-ds-z9nz9" Dec 13 01:17:41.478180 kubelet[2547]: I1213 01:17:41.478171 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/3da21b09-6208-469e-9b80-e4a8cb1a9f5e-flannel-cfg\") pod \"kube-flannel-ds-z9nz9\" (UID: \"3da21b09-6208-469e-9b80-e4a8cb1a9f5e\") " pod="kube-flannel/kube-flannel-ds-z9nz9" Dec 13 01:17:41.478180 kubelet[2547]: I1213 01:17:41.478186 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjw9m\" (UniqueName: \"kubernetes.io/projected/3da21b09-6208-469e-9b80-e4a8cb1a9f5e-kube-api-access-hjw9m\") pod \"kube-flannel-ds-z9nz9\" (UID: \"3da21b09-6208-469e-9b80-e4a8cb1a9f5e\") " pod="kube-flannel/kube-flannel-ds-z9nz9" Dec 13 01:17:41.478180 kubelet[2547]: I1213 01:17:41.478205 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7shc\" (UniqueName: \"kubernetes.io/projected/6dcfddba-268c-4a6b-8578-88a3570632ae-kube-api-access-h7shc\") pod \"kube-proxy-7c5kr\" (UID: \"6dcfddba-268c-4a6b-8578-88a3570632ae\") " pod="kube-system/kube-proxy-7c5kr" Dec 13 01:17:41.478475 kubelet[2547]: I1213 01:17:41.478219 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3da21b09-6208-469e-9b80-e4a8cb1a9f5e-run\") pod \"kube-flannel-ds-z9nz9\" (UID: \"3da21b09-6208-469e-9b80-e4a8cb1a9f5e\") " pod="kube-flannel/kube-flannel-ds-z9nz9" Dec 13 01:17:41.478475 kubelet[2547]: I1213 01:17:41.478233 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3da21b09-6208-469e-9b80-e4a8cb1a9f5e-xtables-lock\") pod \"kube-flannel-ds-z9nz9\" (UID: \"3da21b09-6208-469e-9b80-e4a8cb1a9f5e\") " pod="kube-flannel/kube-flannel-ds-z9nz9" Dec 13 01:17:41.478475 kubelet[2547]: I1213 01:17:41.478248 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6dcfddba-268c-4a6b-8578-88a3570632ae-xtables-lock\") pod \"kube-proxy-7c5kr\" (UID: \"6dcfddba-268c-4a6b-8578-88a3570632ae\") " pod="kube-system/kube-proxy-7c5kr" Dec 13 01:17:41.478475 kubelet[2547]: I1213 01:17:41.478261 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6dcfddba-268c-4a6b-8578-88a3570632ae-lib-modules\") pod \"kube-proxy-7c5kr\" (UID: \"6dcfddba-268c-4a6b-8578-88a3570632ae\") " pod="kube-system/kube-proxy-7c5kr" Dec 13 01:17:41.478475 kubelet[2547]: I1213 01:17:41.478328 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6dcfddba-268c-4a6b-8578-88a3570632ae-kube-proxy\") pod \"kube-proxy-7c5kr\" (UID: \"6dcfddba-268c-4a6b-8578-88a3570632ae\") " pod="kube-system/kube-proxy-7c5kr" Dec 13 01:17:41.478475 kubelet[2547]: I1213 01:17:41.478371 2547 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/3da21b09-6208-469e-9b80-e4a8cb1a9f5e-cni\") pod \"kube-flannel-ds-z9nz9\" (UID: \"3da21b09-6208-469e-9b80-e4a8cb1a9f5e\") " pod="kube-flannel/kube-flannel-ds-z9nz9" Dec 13 01:17:41.750054 kubelet[2547]: E1213 01:17:41.749882 2547 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:17:41.750054 kubelet[2547]: E1213 01:17:41.749912 2547 projected.go:200] Error preparing data for projected volume kube-api-access-h7shc for pod kube-system/kube-proxy-7c5kr: configmap "kube-root-ca.crt" not found Dec 13 01:17:41.750054 kubelet[2547]: E1213 01:17:41.749978 2547 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dcfddba-268c-4a6b-8578-88a3570632ae-kube-api-access-h7shc podName:6dcfddba-268c-4a6b-8578-88a3570632ae nodeName:}" failed. No retries permitted until 2024-12-13 01:17:42.24995895 +0000 UTC m=+16.127765616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h7shc" (UniqueName: "kubernetes.io/projected/6dcfddba-268c-4a6b-8578-88a3570632ae-kube-api-access-h7shc") pod "kube-proxy-7c5kr" (UID: "6dcfddba-268c-4a6b-8578-88a3570632ae") : configmap "kube-root-ca.crt" not found Dec 13 01:17:41.916651 kubelet[2547]: E1213 01:17:41.916593 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:41.917369 containerd[1454]: time="2024-12-13T01:17:41.917317745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-z9nz9,Uid:3da21b09-6208-469e-9b80-e4a8cb1a9f5e,Namespace:kube-flannel,Attempt:0,}" Dec 13 01:17:42.359087 containerd[1454]: time="2024-12-13T01:17:42.358699640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:42.359087 containerd[1454]: time="2024-12-13T01:17:42.358825007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:42.359087 containerd[1454]: time="2024-12-13T01:17:42.358843261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:42.359087 containerd[1454]: time="2024-12-13T01:17:42.358911420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:42.379846 systemd[1]: Started cri-containerd-ddfdaf986eee219993ef86c8815059cc0f790e4889faf2be728f3c320c75b4c3.scope - libcontainer container ddfdaf986eee219993ef86c8815059cc0f790e4889faf2be728f3c320c75b4c3. 
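Note on the MountVolume.SetUp failure above: it is transient. kube-api-access-h7shc is a projected volume whose sources include the kube-root-ca.crt ConfigMap, and that ConfigMap had not yet been published into the kube-system namespace, so the kubelet parks the operation and schedules a retry after the durationBeforeRetry of 500ms. The sketch below shows that general style of exponential backoff; the 500 ms initial delay is taken from the entry above, while the doubling factor, the cap and the attempt limit are assumptions of this sketch, not the kubelet's exact parameters.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the wait after each failure.
// The 500ms starting delay matches the durationBeforeRetry in the log;
// the doubling, the 2-minute cap and the attempt limit are assumptions.
func retryWithBackoff(op func() error) error {
	delay := 500 * time.Millisecond
	const maxDelay = 2 * time.Minute
	for attempt := 1; attempt <= 10; attempt++ {
		err := op()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; no retries permitted for %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return errors.New("giving up")
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 3 {
			// Stand-in for: configmap "kube-root-ca.crt" not found.
			return errors.New(`configmap "kube-root-ca.crt" not found`)
		}
		return nil
	})
	fmt.Println("mounted:", err == nil, "after", calls, "attempts")
}
```

Once kube-root-ca.crt appears the retried mount succeeds, which is why the kube-proxy sandbox comes up normally about a second later.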
Dec 13 01:17:42.411999 containerd[1454]: time="2024-12-13T01:17:42.411964681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-z9nz9,Uid:3da21b09-6208-469e-9b80-e4a8cb1a9f5e,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"ddfdaf986eee219993ef86c8815059cc0f790e4889faf2be728f3c320c75b4c3\"" Dec 13 01:17:42.412645 kubelet[2547]: E1213 01:17:42.412625 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:42.413646 containerd[1454]: time="2024-12-13T01:17:42.413609092Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 01:17:42.532552 kubelet[2547]: E1213 01:17:42.532495 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:42.533087 containerd[1454]: time="2024-12-13T01:17:42.533045767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7c5kr,Uid:6dcfddba-268c-4a6b-8578-88a3570632ae,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:42.740435 containerd[1454]: time="2024-12-13T01:17:42.740236835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:42.740789 containerd[1454]: time="2024-12-13T01:17:42.740316976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:42.740789 containerd[1454]: time="2024-12-13T01:17:42.740593939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:42.740882 containerd[1454]: time="2024-12-13T01:17:42.740798244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:42.769932 systemd[1]: Started cri-containerd-284ee5778d1757f3df4f0f3f7882e9c2a4accc753034bcaa21374f7a7f97b95d.scope - libcontainer container 284ee5778d1757f3df4f0f3f7882e9c2a4accc753034bcaa21374f7a7f97b95d. Dec 13 01:17:42.792477 containerd[1454]: time="2024-12-13T01:17:42.792432770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7c5kr,Uid:6dcfddba-268c-4a6b-8578-88a3570632ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"284ee5778d1757f3df4f0f3f7882e9c2a4accc753034bcaa21374f7a7f97b95d\"" Dec 13 01:17:42.793172 kubelet[2547]: E1213 01:17:42.793144 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:42.798758 containerd[1454]: time="2024-12-13T01:17:42.795685723Z" level=info msg="CreateContainer within sandbox \"284ee5778d1757f3df4f0f3f7882e9c2a4accc753034bcaa21374f7a7f97b95d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:17:42.812860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3230948342.mount: Deactivated successfully. 
Dec 13 01:17:42.814137 containerd[1454]: time="2024-12-13T01:17:42.814087596Z" level=info msg="CreateContainer within sandbox \"284ee5778d1757f3df4f0f3f7882e9c2a4accc753034bcaa21374f7a7f97b95d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"972de6e6cf528f951fc098f4dcc189a1baedcf4ec04056e828ea068447c9218c\"" Dec 13 01:17:42.814832 containerd[1454]: time="2024-12-13T01:17:42.814773079Z" level=info msg="StartContainer for \"972de6e6cf528f951fc098f4dcc189a1baedcf4ec04056e828ea068447c9218c\"" Dec 13 01:17:42.852955 systemd[1]: Started cri-containerd-972de6e6cf528f951fc098f4dcc189a1baedcf4ec04056e828ea068447c9218c.scope - libcontainer container 972de6e6cf528f951fc098f4dcc189a1baedcf4ec04056e828ea068447c9218c. Dec 13 01:17:42.883263 containerd[1454]: time="2024-12-13T01:17:42.883216225Z" level=info msg="StartContainer for \"972de6e6cf528f951fc098f4dcc189a1baedcf4ec04056e828ea068447c9218c\" returns successfully" Dec 13 01:17:43.230013 kubelet[2547]: E1213 01:17:43.229483 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:43.237405 kubelet[2547]: I1213 01:17:43.237329 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7c5kr" podStartSLOduration=2.237307098 podStartE2EDuration="2.237307098s" podCreationTimestamp="2024-12-13 01:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:43.236688933 +0000 UTC m=+17.114495629" watchObservedRunningTime="2024-12-13 01:17:43.237307098 +0000 UTC m=+17.115113764" Dec 13 01:17:44.673847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1553168787.mount: Deactivated successfully. 
Dec 13 01:17:44.844853 containerd[1454]: time="2024-12-13T01:17:44.844799732Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:44.890102 containerd[1454]: time="2024-12-13T01:17:44.890038998Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Dec 13 01:17:44.935410 containerd[1454]: time="2024-12-13T01:17:44.935240312Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:44.946458 containerd[1454]: time="2024-12-13T01:17:44.946387214Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:44.947174 containerd[1454]: time="2024-12-13T01:17:44.947116859Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.533473423s" Dec 13 01:17:44.947174 containerd[1454]: time="2024-12-13T01:17:44.947165981Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 13 01:17:44.949668 containerd[1454]: time="2024-12-13T01:17:44.949604226Z" level=info msg="CreateContainer within sandbox \"ddfdaf986eee219993ef86c8815059cc0f790e4889faf2be728f3c320c75b4c3\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 01:17:44.982479 containerd[1454]: time="2024-12-13T01:17:44.982422658Z" level=info msg="CreateContainer within sandbox \"ddfdaf986eee219993ef86c8815059cc0f790e4889faf2be728f3c320c75b4c3\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"958c111c25256fae46fe8bc284e4900fe833d7372745f79d26a68626c619c886\"" Dec 13 01:17:44.983249 containerd[1454]: time="2024-12-13T01:17:44.983210553Z" level=info msg="StartContainer for \"958c111c25256fae46fe8bc284e4900fe833d7372745f79d26a68626c619c886\"" Dec 13 01:17:45.011858 systemd[1]: Started cri-containerd-958c111c25256fae46fe8bc284e4900fe833d7372745f79d26a68626c619c886.scope - libcontainer container 958c111c25256fae46fe8bc284e4900fe833d7372745f79d26a68626c619c886. Dec 13 01:17:45.038573 systemd[1]: cri-containerd-958c111c25256fae46fe8bc284e4900fe833d7372745f79d26a68626c619c886.scope: Deactivated successfully. Dec 13 01:17:45.040815 containerd[1454]: time="2024-12-13T01:17:45.040772223Z" level=info msg="StartContainer for \"958c111c25256fae46fe8bc284e4900fe833d7372745f79d26a68626c619c886\" returns successfully" Dec 13 01:17:45.059203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-958c111c25256fae46fe8bc284e4900fe833d7372745f79d26a68626c619c886-rootfs.mount: Deactivated successfully. 
Dec 13 01:17:45.102717 containerd[1454]: time="2024-12-13T01:17:45.102661073Z" level=info msg="shim disconnected" id=958c111c25256fae46fe8bc284e4900fe833d7372745f79d26a68626c619c886 namespace=k8s.io Dec 13 01:17:45.102717 containerd[1454]: time="2024-12-13T01:17:45.102709585Z" level=warning msg="cleaning up after shim disconnected" id=958c111c25256fae46fe8bc284e4900fe833d7372745f79d26a68626c619c886 namespace=k8s.io Dec 13 01:17:45.102717 containerd[1454]: time="2024-12-13T01:17:45.102718131Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:17:45.234988 kubelet[2547]: E1213 01:17:45.234874 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:45.235471 containerd[1454]: time="2024-12-13T01:17:45.235426826Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 01:17:46.923869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount879720855.mount: Deactivated successfully. Dec 13 01:17:47.387709 containerd[1454]: time="2024-12-13T01:17:47.387659448Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:47.388606 containerd[1454]: time="2024-12-13T01:17:47.388539886Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Dec 13 01:17:47.389842 containerd[1454]: time="2024-12-13T01:17:47.389815677Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:47.392607 containerd[1454]: time="2024-12-13T01:17:47.392581554Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:47.393432 containerd[1454]: time="2024-12-13T01:17:47.393406898Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.157943383s" Dec 13 01:17:47.393477 containerd[1454]: time="2024-12-13T01:17:47.393433107Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 13 01:17:47.395875 containerd[1454]: time="2024-12-13T01:17:47.395849988Z" level=info msg="CreateContainer within sandbox \"ddfdaf986eee219993ef86c8815059cc0f790e4889faf2be728f3c320c75b4c3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:17:47.410854 containerd[1454]: time="2024-12-13T01:17:47.410800743Z" level=info msg="CreateContainer within sandbox \"ddfdaf986eee219993ef86c8815059cc0f790e4889faf2be728f3c320c75b4c3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7e91523d2513845bc0e2b5f5085adbe92b5b522ca5b447e607f17fc350c284d5\"" Dec 13 01:17:47.410869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4229424811.mount: Deactivated successfully. 
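Note on the image-pull timings: the "in ..." figures containerd reports are wall-clock time from the PullImage request to completion, not a transfer rate. The flannel-cni-plugin:v1.1.2 pull was requested at 01:17:42.413609092 and reported done in 2.533473423s, i.e. at 01:17:44.947082515, a few tens of microseconds before the "Pulled image" entry was written at 01:17:44.947116859; likewise the flannel:v0.22.0 pull requested at 01:17:45.235426826 plus 2.157943383s lands at 01:17:47.393370209, just ahead of its 01:17:47.393406898 log line.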
Dec 13 01:17:47.411391 containerd[1454]: time="2024-12-13T01:17:47.411324739Z" level=info msg="StartContainer for \"7e91523d2513845bc0e2b5f5085adbe92b5b522ca5b447e607f17fc350c284d5\"" Dec 13 01:17:47.440061 systemd[1]: Started cri-containerd-7e91523d2513845bc0e2b5f5085adbe92b5b522ca5b447e607f17fc350c284d5.scope - libcontainer container 7e91523d2513845bc0e2b5f5085adbe92b5b522ca5b447e607f17fc350c284d5. Dec 13 01:17:47.470300 systemd[1]: cri-containerd-7e91523d2513845bc0e2b5f5085adbe92b5b522ca5b447e607f17fc350c284d5.scope: Deactivated successfully. Dec 13 01:17:47.507357 containerd[1454]: time="2024-12-13T01:17:47.507281033Z" level=info msg="StartContainer for \"7e91523d2513845bc0e2b5f5085adbe92b5b522ca5b447e607f17fc350c284d5\" returns successfully" Dec 13 01:17:47.544678 kubelet[2547]: I1213 01:17:47.544620 2547 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:17:47.560582 kubelet[2547]: I1213 01:17:47.560536 2547 topology_manager.go:215] "Topology Admit Handler" podUID="1ad11c91-3640-4584-9598-75845fd56d55" podNamespace="kube-system" podName="coredns-7db6d8ff4d-t2vvk" Dec 13 01:17:47.567823 systemd[1]: Created slice kubepods-burstable-pod1ad11c91_3640_4584_9598_75845fd56d55.slice - libcontainer container kubepods-burstable-pod1ad11c91_3640_4584_9598_75845fd56d55.slice. Dec 13 01:17:47.684458 kubelet[2547]: I1213 01:17:47.684214 2547 topology_manager.go:215] "Topology Admit Handler" podUID="eb5d9428-ec39-4450-a565-1f90a406fbcd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xpwx5" Dec 13 01:17:47.693058 systemd[1]: Created slice kubepods-burstable-podeb5d9428_ec39_4450_a565_1f90a406fbcd.slice - libcontainer container kubepods-burstable-podeb5d9428_ec39_4450_a565_1f90a406fbcd.slice. Dec 13 01:17:47.720873 kubelet[2547]: I1213 01:17:47.720841 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqf7s\" (UniqueName: \"kubernetes.io/projected/1ad11c91-3640-4584-9598-75845fd56d55-kube-api-access-jqf7s\") pod \"coredns-7db6d8ff4d-t2vvk\" (UID: \"1ad11c91-3640-4584-9598-75845fd56d55\") " pod="kube-system/coredns-7db6d8ff4d-t2vvk" Dec 13 01:17:47.720947 kubelet[2547]: I1213 01:17:47.720876 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ad11c91-3640-4584-9598-75845fd56d55-config-volume\") pod \"coredns-7db6d8ff4d-t2vvk\" (UID: \"1ad11c91-3640-4584-9598-75845fd56d55\") " pod="kube-system/coredns-7db6d8ff4d-t2vvk" Dec 13 01:17:47.731031 containerd[1454]: time="2024-12-13T01:17:47.730939985Z" level=info msg="shim disconnected" id=7e91523d2513845bc0e2b5f5085adbe92b5b522ca5b447e607f17fc350c284d5 namespace=k8s.io Dec 13 01:17:47.731031 containerd[1454]: time="2024-12-13T01:17:47.730997403Z" level=warning msg="cleaning up after shim disconnected" id=7e91523d2513845bc0e2b5f5085adbe92b5b522ca5b447e607f17fc350c284d5 namespace=k8s.io Dec 13 01:17:47.731031 containerd[1454]: time="2024-12-13T01:17:47.731007211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:17:47.821519 kubelet[2547]: I1213 01:17:47.821469 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb5d9428-ec39-4450-a565-1f90a406fbcd-config-volume\") pod \"coredns-7db6d8ff4d-xpwx5\" (UID: \"eb5d9428-ec39-4450-a565-1f90a406fbcd\") " pod="kube-system/coredns-7db6d8ff4d-xpwx5" Dec 13 01:17:47.821519 
kubelet[2547]: I1213 01:17:47.821518 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgmcp\" (UniqueName: \"kubernetes.io/projected/eb5d9428-ec39-4450-a565-1f90a406fbcd-kube-api-access-mgmcp\") pod \"coredns-7db6d8ff4d-xpwx5\" (UID: \"eb5d9428-ec39-4450-a565-1f90a406fbcd\") " pod="kube-system/coredns-7db6d8ff4d-xpwx5" Dec 13 01:17:47.846933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e91523d2513845bc0e2b5f5085adbe92b5b522ca5b447e607f17fc350c284d5-rootfs.mount: Deactivated successfully. Dec 13 01:17:47.870577 kubelet[2547]: E1213 01:17:47.870553 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:47.871140 containerd[1454]: time="2024-12-13T01:17:47.871095357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t2vvk,Uid:1ad11c91-3640-4584-9598-75845fd56d55,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:47.898918 systemd[1]: run-netns-cni\x2d991258d4\x2dde74\x2d41b9\x2d44df\x2d7806a567d78e.mount: Deactivated successfully. Dec 13 01:17:47.899015 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1baf9b32989e05469e87423cfe34c46b302b365637795ef993881ee1ea77a2b-shm.mount: Deactivated successfully. Dec 13 01:17:47.899869 containerd[1454]: time="2024-12-13T01:17:47.899823178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t2vvk,Uid:1ad11c91-3640-4584-9598-75845fd56d55,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1baf9b32989e05469e87423cfe34c46b302b365637795ef993881ee1ea77a2b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:17:47.900092 kubelet[2547]: E1213 01:17:47.900056 2547 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1baf9b32989e05469e87423cfe34c46b302b365637795ef993881ee1ea77a2b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:17:47.900152 kubelet[2547]: E1213 01:17:47.900122 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1baf9b32989e05469e87423cfe34c46b302b365637795ef993881ee1ea77a2b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-t2vvk" Dec 13 01:17:47.900152 kubelet[2547]: E1213 01:17:47.900142 2547 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1baf9b32989e05469e87423cfe34c46b302b365637795ef993881ee1ea77a2b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-t2vvk" Dec 13 01:17:47.900207 kubelet[2547]: E1213 01:17:47.900188 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-t2vvk_kube-system(1ad11c91-3640-4584-9598-75845fd56d55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-t2vvk_kube-system(1ad11c91-3640-4584-9598-75845fd56d55)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"c1baf9b32989e05469e87423cfe34c46b302b365637795ef993881ee1ea77a2b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-t2vvk" podUID="1ad11c91-3640-4584-9598-75845fd56d55" Dec 13 01:17:47.995762 kubelet[2547]: E1213 01:17:47.995609 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:47.996164 containerd[1454]: time="2024-12-13T01:17:47.996120945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xpwx5,Uid:eb5d9428-ec39-4450-a565-1f90a406fbcd,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:48.016623 containerd[1454]: time="2024-12-13T01:17:48.016580715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xpwx5,Uid:eb5d9428-ec39-4450-a565-1f90a406fbcd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff7ed3b3fe0aa6529df67ad68f1687bc443820207bd8e993a4323c1ddbef4fe7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:17:48.016872 kubelet[2547]: E1213 01:17:48.016810 2547 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff7ed3b3fe0aa6529df67ad68f1687bc443820207bd8e993a4323c1ddbef4fe7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:17:48.016872 kubelet[2547]: E1213 01:17:48.016876 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff7ed3b3fe0aa6529df67ad68f1687bc443820207bd8e993a4323c1ddbef4fe7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-xpwx5" Dec 13 01:17:48.017036 kubelet[2547]: E1213 01:17:48.016896 2547 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff7ed3b3fe0aa6529df67ad68f1687bc443820207bd8e993a4323c1ddbef4fe7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-xpwx5" Dec 13 01:17:48.017036 kubelet[2547]: E1213 01:17:48.016951 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xpwx5_kube-system(eb5d9428-ec39-4450-a565-1f90a406fbcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xpwx5_kube-system(eb5d9428-ec39-4450-a565-1f90a406fbcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff7ed3b3fe0aa6529df67ad68f1687bc443820207bd8e993a4323c1ddbef4fe7\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-xpwx5" podUID="eb5d9428-ec39-4450-a565-1f90a406fbcd" Dec 13 01:17:48.240078 kubelet[2547]: E1213 01:17:48.240051 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Dec 13 01:17:48.241489 containerd[1454]: time="2024-12-13T01:17:48.241452789Z" level=info msg="CreateContainer within sandbox \"ddfdaf986eee219993ef86c8815059cc0f790e4889faf2be728f3c320c75b4c3\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 01:17:48.253799 containerd[1454]: time="2024-12-13T01:17:48.253675573Z" level=info msg="CreateContainer within sandbox \"ddfdaf986eee219993ef86c8815059cc0f790e4889faf2be728f3c320c75b4c3\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"87725f2cbfa9a20ec65eff70eb27416a75b878c56462618b0a030614481b34d7\"" Dec 13 01:17:48.254129 containerd[1454]: time="2024-12-13T01:17:48.254076488Z" level=info msg="StartContainer for \"87725f2cbfa9a20ec65eff70eb27416a75b878c56462618b0a030614481b34d7\"" Dec 13 01:17:48.286846 systemd[1]: Started cri-containerd-87725f2cbfa9a20ec65eff70eb27416a75b878c56462618b0a030614481b34d7.scope - libcontainer container 87725f2cbfa9a20ec65eff70eb27416a75b878c56462618b0a030614481b34d7. Dec 13 01:17:48.310159 containerd[1454]: time="2024-12-13T01:17:48.310111873Z" level=info msg="StartContainer for \"87725f2cbfa9a20ec65eff70eb27416a75b878c56462618b0a030614481b34d7\" returns successfully" Dec 13 01:17:49.243704 kubelet[2547]: E1213 01:17:49.243678 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:49.252500 kubelet[2547]: I1213 01:17:49.252449 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-z9nz9" podStartSLOduration=3.271225008 podStartE2EDuration="8.252432019s" podCreationTimestamp="2024-12-13 01:17:41 +0000 UTC" firstStartedPulling="2024-12-13 01:17:42.41324233 +0000 UTC m=+16.291048986" lastFinishedPulling="2024-12-13 01:17:47.394449331 +0000 UTC m=+21.272255997" observedRunningTime="2024-12-13 01:17:49.250405065 +0000 UTC m=+23.128211741" watchObservedRunningTime="2024-12-13 01:17:49.252432019 +0000 UTC m=+23.130238695" Dec 13 01:17:49.355419 systemd-networkd[1381]: flannel.1: Link UP Dec 13 01:17:49.355427 systemd-networkd[1381]: flannel.1: Gained carrier Dec 13 01:17:50.244810 kubelet[2547]: E1213 01:17:50.244785 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:50.500537 systemd[1]: Started sshd@7-10.0.0.147:22-10.0.0.1:40890.service - OpenSSH per-connection server daemon (10.0.0.1:40890). Dec 13 01:17:50.532488 sshd[3195]: Accepted publickey for core from 10.0.0.1 port 40890 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:17:50.534166 sshd[3195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:17:50.537848 systemd-logind[1438]: New session 8 of user core. Dec 13 01:17:50.549850 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:17:50.656989 sshd[3195]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:50.661648 systemd[1]: sshd@7-10.0.0.147:22-10.0.0.1:40890.service: Deactivated successfully. Dec 13 01:17:50.664448 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:17:50.665203 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:17:50.666265 systemd-logind[1438]: Removed session 8. 
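Note on the failed CoreDNS sandboxes above: the flannel CNI plugin reads /run/flannel/subnet.env, and that file only exists once the kube-flannel container started in the preceding entry has obtained its lease and brought flannel.1 up. Below is a minimal sketch of parsing that file, assuming the usual KEY=VALUE layout flannel writes (FLANNEL_NETWORK, FLANNEL_SUBNET, FLANNEL_MTU, FLANNEL_IPMASQ); it is an illustration, not the plugin's own loadFlannelSubnetEnv, and the example values in the comments are assumptions based on this node's pod CIDR.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// subnetEnv holds the values the flannel CNI plugin needs before it can
// delegate to the bridge plugin; until kube-flannel writes the file,
// sandbox creation fails exactly as in the entries above.
type subnetEnv struct {
	Network string // e.g. FLANNEL_NETWORK=192.168.0.0/17 (assumed)
	Subnet  string // e.g. FLANNEL_SUBNET=192.168.0.1/24  (assumed)
	MTU     string // e.g. FLANNEL_MTU=1450
	IPMasq  string // e.g. FLANNEL_IPMASQ=true
}

func loadSubnetEnv(path string) (*subnetEnv, error) {
	f, err := os.Open(path)
	if err != nil {
		// The failure mode in the log: the file does not exist yet.
		return nil, fmt.Errorf("loadFlannelSubnetEnv failed: %w", err)
	}
	defer f.Close()

	env := &subnetEnv{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		switch k {
		case "FLANNEL_NETWORK":
			env.Network = v
		case "FLANNEL_SUBNET":
			env.Subnet = v
		case "FLANNEL_MTU":
			env.MTU = v
		case "FLANNEL_IPMASQ":
			env.IPMasq = v
		}
	}
	return env, sc.Err()
}

func main() {
	env, err := loadSubnetEnv("/run/flannel/subnet.env")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%+v\n", *env)
}
```

Once the file exists, the same RunPodSandbox calls are retried successfully, which is what happens at 01:17:59 and 01:18:01 below.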
Dec 13 01:17:51.148892 systemd-networkd[1381]: flannel.1: Gained IPv6LL Dec 13 01:17:55.668593 systemd[1]: Started sshd@8-10.0.0.147:22-10.0.0.1:40904.service - OpenSSH per-connection server daemon (10.0.0.1:40904). Dec 13 01:17:55.700898 sshd[3237]: Accepted publickey for core from 10.0.0.1 port 40904 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:17:55.702416 sshd[3237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:17:55.705904 systemd-logind[1438]: New session 9 of user core. Dec 13 01:17:55.716849 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:17:55.821587 sshd[3237]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:55.825663 systemd[1]: sshd@8-10.0.0.147:22-10.0.0.1:40904.service: Deactivated successfully. Dec 13 01:17:55.827669 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:17:55.828455 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:17:55.829283 systemd-logind[1438]: Removed session 9. Dec 13 01:17:59.198151 kubelet[2547]: E1213 01:17:59.198100 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:59.198779 containerd[1454]: time="2024-12-13T01:17:59.198545371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xpwx5,Uid:eb5d9428-ec39-4450-a565-1f90a406fbcd,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:59.223382 systemd-networkd[1381]: cni0: Link UP Dec 13 01:17:59.223394 systemd-networkd[1381]: cni0: Gained carrier Dec 13 01:17:59.226918 systemd-networkd[1381]: cni0: Lost carrier Dec 13 01:17:59.231110 systemd-networkd[1381]: veth85c69b6c: Link UP Dec 13 01:17:59.233295 kernel: cni0: port 1(veth85c69b6c) entered blocking state Dec 13 01:17:59.233359 kernel: cni0: port 1(veth85c69b6c) entered disabled state Dec 13 01:17:59.233373 kernel: veth85c69b6c: entered allmulticast mode Dec 13 01:17:59.234315 kernel: veth85c69b6c: entered promiscuous mode Dec 13 01:17:59.235843 kernel: cni0: port 1(veth85c69b6c) entered blocking state Dec 13 01:17:59.235879 kernel: cni0: port 1(veth85c69b6c) entered forwarding state Dec 13 01:17:59.237783 kernel: cni0: port 1(veth85c69b6c) entered disabled state Dec 13 01:17:59.245567 systemd-networkd[1381]: veth85c69b6c: Gained carrier Dec 13 01:17:59.246023 kernel: cni0: port 1(veth85c69b6c) entered blocking state Dec 13 01:17:59.246074 kernel: cni0: port 1(veth85c69b6c) entered forwarding state Dec 13 01:17:59.246246 systemd-networkd[1381]: cni0: Gained carrier Dec 13 01:17:59.247845 containerd[1454]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000ae8e8), "name":"cbr0", "type":"bridge"} Dec 13 01:17:59.247845 containerd[1454]: delegateAdd: netconf sent to delegate plugin: Dec 13 01:17:59.268524 containerd[1454]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:17:59.268426505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:59.268785 containerd[1454]: time="2024-12-13T01:17:59.268492058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:59.268785 containerd[1454]: time="2024-12-13T01:17:59.268505483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:59.268785 containerd[1454]: time="2024-12-13T01:17:59.268613226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:59.295901 systemd[1]: Started cri-containerd-04e1ae0f7d245fe1c0203cf0fd824f2ec38c8ea18b0fbfe0fbaf8665f9d4d623.scope - libcontainer container 04e1ae0f7d245fe1c0203cf0fd824f2ec38c8ea18b0fbfe0fbaf8665f9d4d623. Dec 13 01:17:59.307674 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:17:59.332281 containerd[1454]: time="2024-12-13T01:17:59.332218219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xpwx5,Uid:eb5d9428-ec39-4450-a565-1f90a406fbcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"04e1ae0f7d245fe1c0203cf0fd824f2ec38c8ea18b0fbfe0fbaf8665f9d4d623\"" Dec 13 01:17:59.333128 kubelet[2547]: E1213 01:17:59.333095 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:59.335494 containerd[1454]: time="2024-12-13T01:17:59.335407871Z" level=info msg="CreateContainer within sandbox \"04e1ae0f7d245fe1c0203cf0fd824f2ec38c8ea18b0fbfe0fbaf8665f9d4d623\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:17:59.353355 containerd[1454]: time="2024-12-13T01:17:59.353281741Z" level=info msg="CreateContainer within sandbox \"04e1ae0f7d245fe1c0203cf0fd824f2ec38c8ea18b0fbfe0fbaf8665f9d4d623\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4eb7bdd95181d412dbdd153a3893e38e57710d9432d39bcaa5fe53e12f0cb2db\"" Dec 13 01:17:59.353903 containerd[1454]: time="2024-12-13T01:17:59.353879484Z" level=info msg="StartContainer for \"4eb7bdd95181d412dbdd153a3893e38e57710d9432d39bcaa5fe53e12f0cb2db\"" Dec 13 01:17:59.381861 systemd[1]: Started cri-containerd-4eb7bdd95181d412dbdd153a3893e38e57710d9432d39bcaa5fe53e12f0cb2db.scope - libcontainer container 4eb7bdd95181d412dbdd153a3893e38e57710d9432d39bcaa5fe53e12f0cb2db. 
Dec 13 01:17:59.415463 containerd[1454]: time="2024-12-13T01:17:59.415418336Z" level=info msg="StartContainer for \"4eb7bdd95181d412dbdd153a3893e38e57710d9432d39bcaa5fe53e12f0cb2db\" returns successfully" Dec 13 01:18:00.261239 kubelet[2547]: E1213 01:18:00.261116 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:00.270593 kubelet[2547]: I1213 01:18:00.270513 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xpwx5" podStartSLOduration=19.270494042 podStartE2EDuration="19.270494042s" podCreationTimestamp="2024-12-13 01:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:00.269915205 +0000 UTC m=+34.147721861" watchObservedRunningTime="2024-12-13 01:18:00.270494042 +0000 UTC m=+34.148300738" Dec 13 01:18:00.556930 systemd-networkd[1381]: veth85c69b6c: Gained IPv6LL Dec 13 01:18:00.835932 systemd[1]: Started sshd@9-10.0.0.147:22-10.0.0.1:56190.service - OpenSSH per-connection server daemon (10.0.0.1:56190). Dec 13 01:18:00.868009 sshd[3394]: Accepted publickey for core from 10.0.0.1 port 56190 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:00.869643 sshd[3394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:00.873486 systemd-logind[1438]: New session 10 of user core. Dec 13 01:18:00.879860 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:18:00.985503 sshd[3394]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:01.003363 systemd[1]: sshd@9-10.0.0.147:22-10.0.0.1:56190.service: Deactivated successfully. Dec 13 01:18:01.005013 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:18:01.006527 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:18:01.016279 systemd[1]: Started sshd@10-10.0.0.147:22-10.0.0.1:56192.service - OpenSSH per-connection server daemon (10.0.0.1:56192). Dec 13 01:18:01.017649 systemd-logind[1438]: Removed session 10. Dec 13 01:18:01.045472 sshd[3409]: Accepted publickey for core from 10.0.0.1 port 56192 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:01.047328 sshd[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:01.051505 systemd-logind[1438]: New session 11 of user core. Dec 13 01:18:01.066855 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:18:01.132869 systemd-networkd[1381]: cni0: Gained IPv6LL Dec 13 01:18:01.197577 kubelet[2547]: E1213 01:18:01.197542 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:01.197955 containerd[1454]: time="2024-12-13T01:18:01.197921699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t2vvk,Uid:1ad11c91-3640-4584-9598-75845fd56d55,Namespace:kube-system,Attempt:0,}" Dec 13 01:18:01.209479 sshd[3409]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:01.225747 systemd[1]: sshd@10-10.0.0.147:22-10.0.0.1:56192.service: Deactivated successfully. Dec 13 01:18:01.228279 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:18:01.231566 systemd-logind[1438]: Session 11 logged out. 
Waiting for processes to exit. Dec 13 01:18:01.235897 systemd-networkd[1381]: veth7c565da8: Link UP Dec 13 01:18:01.237874 kernel: cni0: port 2(veth7c565da8) entered blocking state Dec 13 01:18:01.237933 kernel: cni0: port 2(veth7c565da8) entered disabled state Dec 13 01:18:01.238855 kernel: veth7c565da8: entered allmulticast mode Dec 13 01:18:01.238941 kernel: veth7c565da8: entered promiscuous mode Dec 13 01:18:01.240024 systemd[1]: Started sshd@11-10.0.0.147:22-10.0.0.1:56194.service - OpenSSH per-connection server daemon (10.0.0.1:56194). Dec 13 01:18:01.241546 systemd-logind[1438]: Removed session 11. Dec 13 01:18:01.247446 kernel: cni0: port 2(veth7c565da8) entered blocking state Dec 13 01:18:01.247545 kernel: cni0: port 2(veth7c565da8) entered forwarding state Dec 13 01:18:01.247224 systemd-networkd[1381]: veth7c565da8: Gained carrier Dec 13 01:18:01.249960 containerd[1454]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001a938), "name":"cbr0", "type":"bridge"} Dec 13 01:18:01.249960 containerd[1454]: delegateAdd: netconf sent to delegate plugin: Dec 13 01:18:01.262527 kubelet[2547]: E1213 01:18:01.262491 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:01.273003 containerd[1454]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:18:01.272867599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:18:01.273158 sshd[3439]: Accepted publickey for core from 10.0.0.1 port 56194 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:01.273540 containerd[1454]: time="2024-12-13T01:18:01.272969590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:18:01.273540 containerd[1454]: time="2024-12-13T01:18:01.272985009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:01.273540 containerd[1454]: time="2024-12-13T01:18:01.273106818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:01.275027 sshd[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:01.283067 systemd-logind[1438]: New session 12 of user core. Dec 13 01:18:01.302937 systemd[1]: Started cri-containerd-6cfdef40f17d2a4e48bff83f852e8b9fff450e97e4c0e0bc6532529d107bdfde.scope - libcontainer container 6cfdef40f17d2a4e48bff83f852e8b9fff450e97e4c0e0bc6532529d107bdfde. Dec 13 01:18:01.304266 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 13 01:18:01.315333 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:18:01.338385 containerd[1454]: time="2024-12-13T01:18:01.338336167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t2vvk,Uid:1ad11c91-3640-4584-9598-75845fd56d55,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cfdef40f17d2a4e48bff83f852e8b9fff450e97e4c0e0bc6532529d107bdfde\"" Dec 13 01:18:01.339076 kubelet[2547]: E1213 01:18:01.339055 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:01.340937 containerd[1454]: time="2024-12-13T01:18:01.340896025Z" level=info msg="CreateContainer within sandbox \"6cfdef40f17d2a4e48bff83f852e8b9fff450e97e4c0e0bc6532529d107bdfde\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:18:01.356130 containerd[1454]: time="2024-12-13T01:18:01.355906749Z" level=info msg="CreateContainer within sandbox \"6cfdef40f17d2a4e48bff83f852e8b9fff450e97e4c0e0bc6532529d107bdfde\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9c973e7567167195dd9de587da08fd552284a32c154401e1de51494d2f89fc2\"" Dec 13 01:18:01.357761 containerd[1454]: time="2024-12-13T01:18:01.357709075Z" level=info msg="StartContainer for \"f9c973e7567167195dd9de587da08fd552284a32c154401e1de51494d2f89fc2\"" Dec 13 01:18:01.387872 systemd[1]: Started cri-containerd-f9c973e7567167195dd9de587da08fd552284a32c154401e1de51494d2f89fc2.scope - libcontainer container f9c973e7567167195dd9de587da08fd552284a32c154401e1de51494d2f89fc2. Dec 13 01:18:01.419534 sshd[3439]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:01.425327 systemd[1]: sshd@11-10.0.0.147:22-10.0.0.1:56194.service: Deactivated successfully. Dec 13 01:18:01.426688 containerd[1454]: time="2024-12-13T01:18:01.426655995Z" level=info msg="StartContainer for \"f9c973e7567167195dd9de587da08fd552284a32c154401e1de51494d2f89fc2\" returns successfully" Dec 13 01:18:01.428280 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:18:01.429079 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:18:01.430174 systemd-logind[1438]: Removed session 12. 
Dec 13 01:18:02.265367 kubelet[2547]: E1213 01:18:02.265338 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:02.265778 kubelet[2547]: E1213 01:18:02.265421 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:02.275247 kubelet[2547]: I1213 01:18:02.275193 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-t2vvk" podStartSLOduration=21.275179157 podStartE2EDuration="21.275179157s" podCreationTimestamp="2024-12-13 01:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:02.275167665 +0000 UTC m=+36.152974331" watchObservedRunningTime="2024-12-13 01:18:02.275179157 +0000 UTC m=+36.152985823" Dec 13 01:18:02.348875 systemd-networkd[1381]: veth7c565da8: Gained IPv6LL Dec 13 01:18:06.435795 systemd[1]: Started sshd@12-10.0.0.147:22-10.0.0.1:56198.service - OpenSSH per-connection server daemon (10.0.0.1:56198). Dec 13 01:18:06.465951 sshd[3569]: Accepted publickey for core from 10.0.0.1 port 56198 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:06.467447 sshd[3569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:06.471311 systemd-logind[1438]: New session 13 of user core. Dec 13 01:18:06.481873 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:18:06.585544 sshd[3569]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:06.592325 systemd[1]: sshd@12-10.0.0.147:22-10.0.0.1:56198.service: Deactivated successfully. Dec 13 01:18:06.593851 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:18:06.595226 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:18:06.603949 systemd[1]: Started sshd@13-10.0.0.147:22-10.0.0.1:56200.service - OpenSSH per-connection server daemon (10.0.0.1:56200). Dec 13 01:18:06.604694 systemd-logind[1438]: Removed session 13. Dec 13 01:18:06.630538 sshd[3583]: Accepted publickey for core from 10.0.0.1 port 56200 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:06.631985 sshd[3583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:06.635607 systemd-logind[1438]: New session 14 of user core. Dec 13 01:18:06.644831 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:18:06.801936 sshd[3583]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:06.814539 systemd[1]: sshd@13-10.0.0.147:22-10.0.0.1:56200.service: Deactivated successfully. Dec 13 01:18:06.816247 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:18:06.817662 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:18:06.822977 systemd[1]: Started sshd@14-10.0.0.147:22-10.0.0.1:56202.service - OpenSSH per-connection server daemon (10.0.0.1:56202). Dec 13 01:18:06.823977 systemd-logind[1438]: Removed session 14. 
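Note on the pod_startup_latency_tracker entries: the two reported durations are internally consistent with the timestamps in the log. For coredns-7db6d8ff4d-t2vvk just above, the pod was created at 01:17:41 and seen running 21.275179157 s later (01:18:02.275), and with both pull timestamps at the zero time (no image pull was needed) podStartSLOduration equals podStartE2EDuration. For kube-flannel-ds-z9nz9 earlier, E2E was 8.252432019 s, the image pulls ran from 01:17:42.41324233 to 01:17:47.394449331 (4.981207001 s), and 8.252432019 − 4.981207001 = 3.271225018 s ≈ the reported podStartSLOduration of 3.271225008 s, i.e. the SLO figure excludes time spent pulling images.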
Dec 13 01:18:06.849269 sshd[3595]: Accepted publickey for core from 10.0.0.1 port 56202 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:06.850741 sshd[3595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:06.854196 systemd-logind[1438]: New session 15 of user core. Dec 13 01:18:06.864862 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:18:07.877707 kubelet[2547]: E1213 01:18:07.877575 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:08.009589 sshd[3595]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:08.018651 systemd[1]: sshd@14-10.0.0.147:22-10.0.0.1:56202.service: Deactivated successfully. Dec 13 01:18:08.021156 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:18:08.022754 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:18:08.029992 systemd[1]: Started sshd@15-10.0.0.147:22-10.0.0.1:46602.service - OpenSSH per-connection server daemon (10.0.0.1:46602). Dec 13 01:18:08.030820 systemd-logind[1438]: Removed session 15. Dec 13 01:18:08.057855 sshd[3617]: Accepted publickey for core from 10.0.0.1 port 46602 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:08.059424 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:08.063320 systemd-logind[1438]: New session 16 of user core. Dec 13 01:18:08.077840 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:18:08.281024 sshd[3617]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:08.289894 systemd[1]: sshd@15-10.0.0.147:22-10.0.0.1:46602.service: Deactivated successfully. Dec 13 01:18:08.291846 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:18:08.293490 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:18:08.301189 systemd[1]: Started sshd@16-10.0.0.147:22-10.0.0.1:46618.service - OpenSSH per-connection server daemon (10.0.0.1:46618). Dec 13 01:18:08.302117 systemd-logind[1438]: Removed session 16. Dec 13 01:18:08.327451 sshd[3630]: Accepted publickey for core from 10.0.0.1 port 46618 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:08.329228 sshd[3630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:08.333075 systemd-logind[1438]: New session 17 of user core. Dec 13 01:18:08.348846 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:18:08.455082 sshd[3630]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:08.459471 systemd[1]: sshd@16-10.0.0.147:22-10.0.0.1:46618.service: Deactivated successfully. Dec 13 01:18:08.461648 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:18:08.462319 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:18:08.463283 systemd-logind[1438]: Removed session 17. Dec 13 01:18:13.466557 systemd[1]: Started sshd@17-10.0.0.147:22-10.0.0.1:46624.service - OpenSSH per-connection server daemon (10.0.0.1:46624). 
Dec 13 01:18:13.497636 sshd[3673]: Accepted publickey for core from 10.0.0.1 port 46624 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:13.499181 sshd[3673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:13.503331 systemd-logind[1438]: New session 18 of user core. Dec 13 01:18:13.512857 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:18:13.617530 sshd[3673]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:13.621617 systemd[1]: sshd@17-10.0.0.147:22-10.0.0.1:46624.service: Deactivated successfully. Dec 13 01:18:13.623886 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:18:13.624654 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:18:13.625545 systemd-logind[1438]: Removed session 18. Dec 13 01:18:18.633178 systemd[1]: Started sshd@18-10.0.0.147:22-10.0.0.1:47802.service - OpenSSH per-connection server daemon (10.0.0.1:47802). Dec 13 01:18:18.664327 sshd[3708]: Accepted publickey for core from 10.0.0.1 port 47802 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:18.665818 sshd[3708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:18.669859 systemd-logind[1438]: New session 19 of user core. Dec 13 01:18:18.685885 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:18:18.785411 sshd[3708]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:18.788931 systemd[1]: sshd@18-10.0.0.147:22-10.0.0.1:47802.service: Deactivated successfully. Dec 13 01:18:18.790591 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:18:18.791200 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:18:18.792003 systemd-logind[1438]: Removed session 19. Dec 13 01:18:23.802901 systemd[1]: Started sshd@19-10.0.0.147:22-10.0.0.1:47808.service - OpenSSH per-connection server daemon (10.0.0.1:47808). Dec 13 01:18:23.834051 sshd[3743]: Accepted publickey for core from 10.0.0.1 port 47808 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:23.835553 sshd[3743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:23.839465 systemd-logind[1438]: New session 20 of user core. Dec 13 01:18:23.845857 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:18:23.948202 sshd[3743]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:23.952311 systemd[1]: sshd@19-10.0.0.147:22-10.0.0.1:47808.service: Deactivated successfully. Dec 13 01:18:23.954507 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:18:23.955174 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:18:23.956027 systemd-logind[1438]: Removed session 20. Dec 13 01:18:28.959055 systemd[1]: Started sshd@20-10.0.0.147:22-10.0.0.1:60876.service - OpenSSH per-connection server daemon (10.0.0.1:60876). Dec 13 01:18:28.990940 sshd[3780]: Accepted publickey for core from 10.0.0.1 port 60876 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:28.992605 sshd[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:28.996532 systemd-logind[1438]: New session 21 of user core. Dec 13 01:18:29.004853 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 13 01:18:29.109378 sshd[3780]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:29.113659 systemd[1]: sshd@20-10.0.0.147:22-10.0.0.1:60876.service: Deactivated successfully. Dec 13 01:18:29.115565 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:18:29.116207 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:18:29.117165 systemd-logind[1438]: Removed session 21.