Dec 13 05:28:49.041980 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 05:28:49.042023 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 05:28:49.042038 kernel: BIOS-provided physical RAM map: Dec 13 05:28:49.042055 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 05:28:49.042065 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 05:28:49.042075 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 05:28:49.042087 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Dec 13 05:28:49.042097 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Dec 13 05:28:49.042108 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 05:28:49.042118 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 05:28:49.042129 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 05:28:49.042139 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 05:28:49.042155 kernel: NX (Execute Disable) protection: active Dec 13 05:28:49.042166 kernel: APIC: Static calls initialized Dec 13 05:28:49.042178 kernel: SMBIOS 2.8 present. Dec 13 05:28:49.042190 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Dec 13 05:28:49.042202 kernel: Hypervisor detected: KVM Dec 13 05:28:49.042218 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 05:28:49.042230 kernel: kvm-clock: using sched offset of 4790060703 cycles Dec 13 05:28:49.042242 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 05:28:49.042253 kernel: tsc: Detected 2499.998 MHz processor Dec 13 05:28:49.042265 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 05:28:49.042277 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 05:28:49.042288 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 13 05:28:49.042300 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 05:28:49.042311 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 05:28:49.042327 kernel: Using GB pages for direct mapping Dec 13 05:28:49.042339 kernel: ACPI: Early table checksum verification disabled Dec 13 05:28:49.042350 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Dec 13 05:28:49.042362 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 05:28:49.042374 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 05:28:49.042385 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 05:28:49.042397 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Dec 13 05:28:49.042409 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 05:28:49.042420 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Dec 13 05:28:49.042436 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 05:28:49.042448 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 05:28:49.042459 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Dec 13 05:28:49.042471 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Dec 13 05:28:49.042483 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Dec 13 05:28:49.042501 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Dec 13 05:28:49.042513 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Dec 13 05:28:49.042530 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Dec 13 05:28:49.042542 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Dec 13 05:28:49.042554 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 05:28:49.042566 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 05:28:49.042578 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Dec 13 05:28:49.042590 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Dec 13 05:28:49.042602 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Dec 13 05:28:49.042614 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Dec 13 05:28:49.042631 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Dec 13 05:28:49.042643 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Dec 13 05:28:49.042655 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Dec 13 05:28:49.042666 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Dec 13 05:28:49.042678 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Dec 13 05:28:49.042690 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Dec 13 05:28:49.042702 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Dec 13 05:28:49.042714 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Dec 13 05:28:49.042726 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Dec 13 05:28:49.042743 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Dec 13 05:28:49.042755 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 05:28:49.042767 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 13 05:28:49.042779 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Dec 13 05:28:49.042806 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Dec 13 05:28:49.042821 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Dec 13 05:28:49.042834 kernel: Zone ranges: Dec 13 05:28:49.042846 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 05:28:49.042858 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Dec 13 05:28:49.042889 kernel: Normal empty Dec 13 05:28:49.042902 kernel: Movable zone start for each node Dec 13 05:28:49.042914 kernel: Early memory node ranges Dec 13 05:28:49.042926 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 05:28:49.042938 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Dec 13 05:28:49.042950 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Dec 13 05:28:49.042962 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 05:28:49.042974 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 05:28:49.042986 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Dec 13 05:28:49.042998 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 05:28:49.043015 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 05:28:49.043028 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Dec 13 05:28:49.043040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 05:28:49.043052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 05:28:49.043064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 05:28:49.043076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 05:28:49.043087 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 05:28:49.043100 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 05:28:49.043111 kernel: TSC deadline timer available Dec 13 05:28:49.043128 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Dec 13 05:28:49.043141 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 05:28:49.043153 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 05:28:49.043164 kernel: Booting paravirtualized kernel on KVM Dec 13 05:28:49.043177 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 05:28:49.043189 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Dec 13 05:28:49.043201 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Dec 13 05:28:49.043213 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Dec 13 05:28:49.043225 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Dec 13 05:28:49.043242 kernel: kvm-guest: PV spinlocks enabled Dec 13 05:28:49.043254 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 05:28:49.043267 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 05:28:49.043280 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 05:28:49.043292 kernel: random: crng init done Dec 13 05:28:49.043304 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 05:28:49.043316 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 05:28:49.043329 kernel: Fallback order for Node 0: 0 Dec 13 05:28:49.043346 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Dec 13 05:28:49.043358 kernel: Policy zone: DMA32 Dec 13 05:28:49.043370 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 05:28:49.043382 kernel: software IO TLB: area num 16. Dec 13 05:28:49.043395 kernel: Memory: 1901540K/2096616K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 194820K reserved, 0K cma-reserved) Dec 13 05:28:49.043407 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Dec 13 05:28:49.043419 kernel: Kernel/User page tables isolation: enabled Dec 13 05:28:49.043431 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 05:28:49.043443 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 05:28:49.043460 kernel: Dynamic Preempt: voluntary Dec 13 05:28:49.043472 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 05:28:49.043486 kernel: rcu: RCU event tracing is enabled. 
Dec 13 05:28:49.043498 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Dec 13 05:28:49.043510 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 05:28:49.043536 kernel: Rude variant of Tasks RCU enabled. Dec 13 05:28:49.043563 kernel: Tracing variant of Tasks RCU enabled. Dec 13 05:28:49.043576 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 05:28:49.043589 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Dec 13 05:28:49.043602 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Dec 13 05:28:49.043623 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 05:28:49.043636 kernel: Console: colour VGA+ 80x25 Dec 13 05:28:49.043653 kernel: printk: console [tty0] enabled Dec 13 05:28:49.043666 kernel: printk: console [ttyS0] enabled Dec 13 05:28:49.043679 kernel: ACPI: Core revision 20230628 Dec 13 05:28:49.043692 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 05:28:49.043704 kernel: x2apic enabled Dec 13 05:28:49.043722 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 05:28:49.043735 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 05:28:49.043748 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Dec 13 05:28:49.043760 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 05:28:49.043773 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 05:28:49.043786 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 05:28:49.044602 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 05:28:49.044618 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 05:28:49.044631 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 05:28:49.044645 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 05:28:49.044666 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 13 05:28:49.044679 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 05:28:49.044692 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 05:28:49.044705 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 05:28:49.044717 kernel: MMIO Stale Data: Unknown: No mitigations Dec 13 05:28:49.044730 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 13 05:28:49.044742 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 05:28:49.044755 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 05:28:49.044768 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 05:28:49.044781 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 05:28:49.044827 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 05:28:49.044841 kernel: Freeing SMP alternatives memory: 32K Dec 13 05:28:49.044854 kernel: pid_max: default: 32768 minimum: 301 Dec 13 05:28:49.044878 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 05:28:49.044891 kernel: landlock: Up and running. Dec 13 05:28:49.044904 kernel: SELinux: Initializing. 
Dec 13 05:28:49.044916 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 05:28:49.044929 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 05:28:49.044942 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Dec 13 05:28:49.044955 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 13 05:28:49.044968 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 13 05:28:49.044988 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 13 05:28:49.045001 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Dec 13 05:28:49.045014 kernel: signal: max sigframe size: 1776 Dec 13 05:28:49.045027 kernel: rcu: Hierarchical SRCU implementation. Dec 13 05:28:49.045041 kernel: rcu: Max phase no-delay instances is 400. Dec 13 05:28:49.045054 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 05:28:49.045067 kernel: smp: Bringing up secondary CPUs ... Dec 13 05:28:49.045080 kernel: smpboot: x86: Booting SMP configuration: Dec 13 05:28:49.045092 kernel: .... node #0, CPUs: #1 Dec 13 05:28:49.045110 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Dec 13 05:28:49.045123 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 05:28:49.045136 kernel: smpboot: Max logical packages: 16 Dec 13 05:28:49.045149 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Dec 13 05:28:49.045162 kernel: devtmpfs: initialized Dec 13 05:28:49.045175 kernel: x86/mm: Memory block size: 128MB Dec 13 05:28:49.045188 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 05:28:49.045201 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Dec 13 05:28:49.045214 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 05:28:49.045231 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 05:28:49.045244 kernel: audit: initializing netlink subsys (disabled) Dec 13 05:28:49.045257 kernel: audit: type=2000 audit(1734067727.818:1): state=initialized audit_enabled=0 res=1 Dec 13 05:28:49.045270 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 05:28:49.045283 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 05:28:49.045295 kernel: cpuidle: using governor menu Dec 13 05:28:49.045308 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 05:28:49.045321 kernel: dca service started, version 1.12.1 Dec 13 05:28:49.045334 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 05:28:49.045352 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 13 05:28:49.045365 kernel: PCI: Using configuration type 1 for base access Dec 13 05:28:49.045378 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 05:28:49.045391 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 05:28:49.045404 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 05:28:49.045417 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 05:28:49.045430 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 05:28:49.045443 kernel: ACPI: Added _OSI(Module Device) Dec 13 05:28:49.045456 kernel: ACPI: Added _OSI(Processor Device) Dec 13 05:28:49.045474 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 05:28:49.045487 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 05:28:49.045500 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 05:28:49.045512 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 05:28:49.045525 kernel: ACPI: Interpreter enabled Dec 13 05:28:49.045538 kernel: ACPI: PM: (supports S0 S5) Dec 13 05:28:49.045551 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 05:28:49.045564 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 05:28:49.045577 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 05:28:49.045594 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 05:28:49.045607 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 05:28:49.045932 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 05:28:49.046117 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 05:28:49.046287 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 05:28:49.046306 kernel: PCI host bridge to bus 0000:00 Dec 13 05:28:49.046505 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 05:28:49.046670 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 05:28:49.047208 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 05:28:49.047373 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Dec 13 05:28:49.047526 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 05:28:49.047678 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Dec 13 05:28:49.047854 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 05:28:49.048080 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 05:28:49.048278 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Dec 13 05:28:49.048448 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Dec 13 05:28:49.048613 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Dec 13 05:28:49.048777 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Dec 13 05:28:49.048997 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 05:28:49.049187 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Dec 13 05:28:49.049371 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Dec 13 05:28:49.049559 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Dec 13 05:28:49.049768 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Dec 13 05:28:49.049999 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Dec 13 05:28:49.050174 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Dec 13 05:28:49.050356 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Dec 13 
05:28:49.050524 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Dec 13 05:28:49.050717 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Dec 13 05:28:49.050919 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Dec 13 05:28:49.051115 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Dec 13 05:28:49.051287 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Dec 13 05:28:49.051478 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Dec 13 05:28:49.051685 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Dec 13 05:28:49.051997 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Dec 13 05:28:49.052170 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Dec 13 05:28:49.052365 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Dec 13 05:28:49.052535 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 05:28:49.052715 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Dec 13 05:28:49.052942 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Dec 13 05:28:49.053131 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Dec 13 05:28:49.053332 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Dec 13 05:28:49.053502 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 05:28:49.053701 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Dec 13 05:28:49.055366 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Dec 13 05:28:49.055561 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 05:28:49.055728 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 05:28:49.056068 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 05:28:49.056235 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Dec 13 05:28:49.056396 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Dec 13 05:28:49.056572 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 05:28:49.056737 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 05:28:49.056969 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Dec 13 05:28:49.057141 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Dec 13 05:28:49.057320 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 13 05:28:49.059906 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 13 05:28:49.060127 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 05:28:49.060323 kernel: pci_bus 0000:02: extended config space not accessible Dec 13 05:28:49.060521 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Dec 13 05:28:49.060721 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Dec 13 05:28:49.062999 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 13 05:28:49.063195 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 13 05:28:49.063403 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Dec 13 05:28:49.063579 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Dec 13 05:28:49.063929 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 13 05:28:49.064101 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 13 05:28:49.064267 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 05:28:49.064468 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Dec 13 
05:28:49.064641 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Dec 13 05:28:49.064845 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 13 05:28:49.065025 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 13 05:28:49.065186 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 05:28:49.065354 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 13 05:28:49.065514 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 13 05:28:49.065688 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 05:28:49.067006 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 13 05:28:49.067175 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 13 05:28:49.067358 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 05:28:49.067537 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 13 05:28:49.067699 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 13 05:28:49.068875 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 05:28:49.069063 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 13 05:28:49.069238 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 13 05:28:49.069402 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 05:28:49.070114 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 13 05:28:49.070290 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 13 05:28:49.070466 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 05:28:49.070485 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 05:28:49.070499 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 05:28:49.070511 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 05:28:49.070537 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 05:28:49.070559 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 05:28:49.070572 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 05:28:49.070585 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 05:28:49.070598 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 05:28:49.070611 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 05:28:49.070624 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 05:28:49.070637 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 05:28:49.070650 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 05:28:49.070663 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 05:28:49.070681 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 05:28:49.070695 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 05:28:49.070708 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 05:28:49.070721 kernel: iommu: Default domain type: Translated Dec 13 05:28:49.070734 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 05:28:49.070747 kernel: PCI: Using ACPI for IRQ routing Dec 13 05:28:49.070760 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 05:28:49.070773 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 05:28:49.070786 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Dec 13 05:28:49.072990 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Dec 13 05:28:49.073171 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 05:28:49.073343 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 05:28:49.073363 kernel: vgaarb: loaded Dec 13 05:28:49.073377 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 05:28:49.073391 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 05:28:49.073404 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 05:28:49.073417 kernel: pnp: PnP ACPI init Dec 13 05:28:49.073607 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 05:28:49.073629 kernel: pnp: PnP ACPI: found 5 devices Dec 13 05:28:49.073643 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 05:28:49.073656 kernel: NET: Registered PF_INET protocol family Dec 13 05:28:49.073670 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 05:28:49.073683 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 05:28:49.073697 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 05:28:49.073710 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 05:28:49.073731 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 05:28:49.073744 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 05:28:49.073757 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 05:28:49.073770 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 05:28:49.073783 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 05:28:49.075835 kernel: NET: Registered PF_XDP protocol family Dec 13 05:28:49.076040 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Dec 13 05:28:49.076215 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 05:28:49.076393 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 05:28:49.076561 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 13 05:28:49.076727 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 13 05:28:49.076943 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 05:28:49.077112 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 05:28:49.077276 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 05:28:49.077447 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Dec 13 05:28:49.077607 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Dec 13 05:28:49.077769 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Dec 13 05:28:49.079987 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Dec 13 05:28:49.080184 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Dec 13 05:28:49.080354 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Dec 13 05:28:49.080522 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Dec 13 05:28:49.080686 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Dec 13 05:28:49.082979 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 13 05:28:49.083174 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 13 
05:28:49.083353 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 13 05:28:49.083519 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Dec 13 05:28:49.083684 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 13 05:28:49.085956 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 05:28:49.086140 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 13 05:28:49.086307 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Dec 13 05:28:49.086484 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 13 05:28:49.086651 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 05:28:49.086853 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 13 05:28:49.087042 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Dec 13 05:28:49.087207 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 13 05:28:49.087384 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 05:28:49.087587 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 13 05:28:49.087753 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Dec 13 05:28:49.090054 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 13 05:28:49.090244 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 05:28:49.090414 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 13 05:28:49.090578 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Dec 13 05:28:49.090741 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 13 05:28:49.092981 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 05:28:49.093171 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 13 05:28:49.093359 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Dec 13 05:28:49.093533 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 13 05:28:49.093701 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 05:28:49.096003 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 13 05:28:49.096268 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Dec 13 05:28:49.096449 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 13 05:28:49.098962 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 05:28:49.099144 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 13 05:28:49.099310 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Dec 13 05:28:49.099474 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 13 05:28:49.099635 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 05:28:49.099820 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 05:28:49.099998 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 05:28:49.100153 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 05:28:49.100317 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Dec 13 05:28:49.100476 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 05:28:49.100627 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Dec 13 05:28:49.100909 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Dec 13 05:28:49.101072 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Dec 13 05:28:49.101226 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Dec 13 05:28:49.101394 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 13 05:28:49.101585 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Dec 13 05:28:49.101740 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 13 05:28:49.101924 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 05:28:49.102093 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Dec 13 05:28:49.102248 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 13 05:28:49.102401 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 05:28:49.102576 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Dec 13 05:28:49.102733 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 13 05:28:49.104953 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 05:28:49.105137 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Dec 13 05:28:49.105293 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 13 05:28:49.105446 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 05:28:49.105612 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Dec 13 05:28:49.105775 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 13 05:28:49.107996 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 05:28:49.108172 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Dec 13 05:28:49.108327 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 13 05:28:49.108482 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 05:28:49.108649 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Dec 13 05:28:49.108825 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 13 05:28:49.109008 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 05:28:49.109030 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 05:28:49.109044 kernel: PCI: CLS 0 bytes, default 64 Dec 13 05:28:49.109058 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 05:28:49.109087 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 05:28:49.109109 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 05:28:49.109132 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 05:28:49.109147 kernel: Initialise system trusted keyrings Dec 13 05:28:49.109167 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 05:28:49.109181 kernel: Key type asymmetric registered Dec 13 05:28:49.109230 kernel: Asymmetric key parser 'x509' registered Dec 13 05:28:49.109244 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 05:28:49.109259 kernel: io scheduler mq-deadline registered Dec 13 05:28:49.109272 kernel: io scheduler kyber registered Dec 13 05:28:49.109286 kernel: io scheduler bfq registered Dec 13 05:28:49.109492 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 05:28:49.109662 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 05:28:49.111886 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:28:49.112069 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 05:28:49.112234 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Dec 13 05:28:49.112400 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:28:49.112571 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 05:28:49.112735 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 05:28:49.112940 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:28:49.113110 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 05:28:49.113274 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 05:28:49.113444 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:28:49.113615 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 05:28:49.113779 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 05:28:49.116008 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:28:49.116191 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 05:28:49.116359 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 05:28:49.116526 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:28:49.116698 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 05:28:49.116898 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 05:28:49.117078 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:28:49.117249 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 05:28:49.117422 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 05:28:49.117590 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:28:49.117612 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 05:28:49.117628 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 05:28:49.117649 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 05:28:49.117663 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 05:28:49.117677 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 05:28:49.117691 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 05:28:49.117705 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 05:28:49.117720 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 05:28:49.117734 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 05:28:49.119976 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 05:28:49.120140 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 05:28:49.120305 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T05:28:48 UTC (1734067728) Dec 13 05:28:49.120463 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 05:28:49.120483 kernel: intel_pstate: CPU model not supported Dec 13 05:28:49.120497 kernel: NET: Registered PF_INET6 protocol family Dec 13 05:28:49.120511 kernel: Segment Routing with IPv6 Dec 13 05:28:49.120525 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 
05:28:49.120540 kernel: NET: Registered PF_PACKET protocol family Dec 13 05:28:49.120553 kernel: Key type dns_resolver registered Dec 13 05:28:49.120574 kernel: IPI shorthand broadcast: enabled Dec 13 05:28:49.120588 kernel: sched_clock: Marking stable (1335004333, 234790950)->(1714291011, -144495728) Dec 13 05:28:49.120602 kernel: registered taskstats version 1 Dec 13 05:28:49.120616 kernel: Loading compiled-in X.509 certificates Dec 13 05:28:49.120630 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 05:28:49.120643 kernel: Key type .fscrypt registered Dec 13 05:28:49.120662 kernel: Key type fscrypt-provisioning registered Dec 13 05:28:49.120676 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 05:28:49.120689 kernel: ima: Allocated hash algorithm: sha1 Dec 13 05:28:49.120708 kernel: ima: No architecture policies found Dec 13 05:28:49.120722 kernel: clk: Disabling unused clocks Dec 13 05:28:49.120736 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 05:28:49.120750 kernel: Write protecting the kernel read-only data: 36864k Dec 13 05:28:49.120763 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 05:28:49.120777 kernel: Run /init as init process Dec 13 05:28:49.120804 kernel: with arguments: Dec 13 05:28:49.120835 kernel: /init Dec 13 05:28:49.120849 kernel: with environment: Dec 13 05:28:49.120880 kernel: HOME=/ Dec 13 05:28:49.120895 kernel: TERM=linux Dec 13 05:28:49.120908 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 05:28:49.120925 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 05:28:49.120943 systemd[1]: Detected virtualization kvm. Dec 13 05:28:49.120958 systemd[1]: Detected architecture x86-64. Dec 13 05:28:49.120972 systemd[1]: Running in initrd. Dec 13 05:28:49.120987 systemd[1]: No hostname configured, using default hostname. Dec 13 05:28:49.121008 systemd[1]: Hostname set to . Dec 13 05:28:49.121023 systemd[1]: Initializing machine ID from VM UUID. Dec 13 05:28:49.121037 systemd[1]: Queued start job for default target initrd.target. Dec 13 05:28:49.121052 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 05:28:49.121066 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 05:28:49.121082 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 05:28:49.121096 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 05:28:49.121110 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 05:28:49.121131 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 05:28:49.121149 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 05:28:49.121163 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 05:28:49.121178 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 13 05:28:49.121193 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 05:28:49.121207 systemd[1]: Reached target paths.target - Path Units. Dec 13 05:28:49.121227 systemd[1]: Reached target slices.target - Slice Units. Dec 13 05:28:49.121242 systemd[1]: Reached target swap.target - Swaps. Dec 13 05:28:49.121256 systemd[1]: Reached target timers.target - Timer Units. Dec 13 05:28:49.121271 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 05:28:49.121286 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 05:28:49.121301 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 05:28:49.121315 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 05:28:49.121330 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 05:28:49.121345 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 05:28:49.121364 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 05:28:49.121379 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 05:28:49.121394 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 05:28:49.121409 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 05:28:49.121423 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 05:28:49.121437 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 05:28:49.121452 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 05:28:49.121467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 05:28:49.121481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 05:28:49.121501 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 05:28:49.121572 systemd-journald[201]: Collecting audit messages is disabled. Dec 13 05:28:49.121607 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 05:28:49.121622 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 05:28:49.121644 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 05:28:49.121658 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 05:28:49.121672 kernel: Bridge firewalling registered Dec 13 05:28:49.121687 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 05:28:49.121708 systemd-journald[201]: Journal started Dec 13 05:28:49.121737 systemd-journald[201]: Runtime Journal (/run/log/journal/9ba954ff02a9430e92cad752253ff53b) is 4.7M, max 38.0M, 33.2M free. Dec 13 05:28:49.052030 systemd-modules-load[202]: Inserted module 'overlay' Dec 13 05:28:49.104056 systemd-modules-load[202]: Inserted module 'br_netfilter' Dec 13 05:28:49.185826 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 05:28:49.186497 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:28:49.190558 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 05:28:49.204087 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 05:28:49.205997 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 13 05:28:49.209994 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 05:28:49.215694 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 05:28:49.238556 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 05:28:49.244455 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 05:28:49.245613 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 05:28:49.247453 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 05:28:49.255090 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 05:28:49.259166 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 05:28:49.274603 dracut-cmdline[236]: dracut-dracut-053 Dec 13 05:28:49.281622 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 05:28:49.312536 systemd-resolved[238]: Positive Trust Anchors: Dec 13 05:28:49.312562 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 05:28:49.312607 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 05:28:49.317114 systemd-resolved[238]: Defaulting to hostname 'linux'. Dec 13 05:28:49.318873 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 05:28:49.324952 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 05:28:49.400917 kernel: SCSI subsystem initialized Dec 13 05:28:49.412831 kernel: Loading iSCSI transport class v2.0-870. Dec 13 05:28:49.427934 kernel: iscsi: registered transport (tcp) Dec 13 05:28:49.456186 kernel: iscsi: registered transport (qla4xxx) Dec 13 05:28:49.456309 kernel: QLogic iSCSI HBA Driver Dec 13 05:28:49.516945 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 05:28:49.524173 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 05:28:49.570532 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 05:28:49.570687 kernel: device-mapper: uevent: version 1.0.3 Dec 13 05:28:49.573836 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 05:28:49.623847 kernel: raid6: sse2x4 gen() 7765 MB/s Dec 13 05:28:49.641908 kernel: raid6: sse2x2 gen() 5436 MB/s Dec 13 05:28:49.669027 kernel: raid6: sse2x1 gen() 5386 MB/s Dec 13 05:28:49.670868 kernel: raid6: using algorithm sse2x4 gen() 7765 MB/s Dec 13 05:28:49.687563 kernel: raid6: .... xor() 7481 MB/s, rmw enabled Dec 13 05:28:49.687687 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 05:28:49.717868 kernel: xor: automatically using best checksumming function avx Dec 13 05:28:49.911857 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 05:28:49.927781 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 05:28:49.936085 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 05:28:49.966434 systemd-udevd[421]: Using default interface naming scheme 'v255'. Dec 13 05:28:49.974022 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 05:28:49.984041 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 05:28:50.008537 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation Dec 13 05:28:50.049183 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 05:28:50.056086 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 05:28:50.169673 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 05:28:50.179132 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 05:28:50.212735 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 05:28:50.215232 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 05:28:50.218061 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 05:28:50.219150 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 05:28:50.229153 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 05:28:50.258435 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 05:28:50.304819 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 13 05:28:50.380822 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 05:28:50.380873 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 05:28:50.381102 kernel: AVX version of gcm_enc/dec engaged. Dec 13 05:28:50.381124 kernel: AES CTR mode by8 optimization enabled Dec 13 05:28:50.381142 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 05:28:50.381160 kernel: GPT:17805311 != 125829119 Dec 13 05:28:50.381177 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 05:28:50.381194 kernel: GPT:17805311 != 125829119 Dec 13 05:28:50.381219 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 05:28:50.381237 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:28:50.380341 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 05:28:50.380551 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 05:28:50.381822 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Dec 13 05:28:50.382545 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 05:28:50.382720 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:28:50.383590 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 05:28:50.394162 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 05:28:50.416817 kernel: libata version 3.00 loaded. Dec 13 05:28:50.445767 kernel: ACPI: bus type USB registered Dec 13 05:28:50.445895 kernel: usbcore: registered new interface driver usbfs Dec 13 05:28:50.447821 kernel: usbcore: registered new interface driver hub Dec 13 05:28:50.447872 kernel: usbcore: registered new device driver usb Dec 13 05:28:50.468035 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (470) Dec 13 05:28:50.474873 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 05:28:50.478448 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 05:28:50.478677 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 05:28:50.478925 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 05:28:50.479133 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 05:28:50.479349 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 05:28:50.479552 kernel: hub 1-0:1.0: USB hub found Dec 13 05:28:50.479787 kernel: hub 1-0:1.0: 4 ports detected Dec 13 05:28:50.480083 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 05:28:50.480300 kernel: hub 2-0:1.0: USB hub found Dec 13 05:28:50.480506 kernel: hub 2-0:1.0: 4 ports detected Dec 13 05:28:50.480708 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (477) Dec 13 05:28:50.507863 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 05:28:50.532157 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 05:28:50.532195 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 05:28:50.532431 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 05:28:50.532665 kernel: scsi host0: ahci Dec 13 05:28:50.532939 kernel: scsi host1: ahci Dec 13 05:28:50.533173 kernel: scsi host2: ahci Dec 13 05:28:50.533371 kernel: scsi host3: ahci Dec 13 05:28:50.533564 kernel: scsi host4: ahci Dec 13 05:28:50.533755 kernel: scsi host5: ahci Dec 13 05:28:50.534630 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Dec 13 05:28:50.534653 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Dec 13 05:28:50.534671 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Dec 13 05:28:50.534698 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Dec 13 05:28:50.534716 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Dec 13 05:28:50.534734 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Dec 13 05:28:50.512209 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 05:28:50.609232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:28:50.617360 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 05:28:50.624736 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Dec 13 05:28:50.630861 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 05:28:50.631734 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 05:28:50.640129 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 05:28:50.644001 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 05:28:50.650612 disk-uuid[563]: Primary Header is updated. Dec 13 05:28:50.650612 disk-uuid[563]: Secondary Entries is updated. Dec 13 05:28:50.650612 disk-uuid[563]: Secondary Header is updated. Dec 13 05:28:50.658384 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:28:50.660852 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:28:50.692880 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 05:28:50.724850 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 05:28:50.846203 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 05:28:50.846286 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 05:28:50.851193 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 05:28:50.851258 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 05:28:50.851278 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 05:28:50.854047 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 05:28:50.876823 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 05:28:50.883876 kernel: usbcore: registered new interface driver usbhid Dec 13 05:28:50.883959 kernel: usbhid: USB HID core driver Dec 13 05:28:50.892408 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 05:28:50.892507 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 05:28:51.672104 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:28:51.673881 disk-uuid[564]: The operation has completed successfully. Dec 13 05:28:51.729178 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 05:28:51.729350 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 05:28:51.749090 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 05:28:51.754782 sh[586]: Success Dec 13 05:28:51.772852 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 05:28:51.853506 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 05:28:51.855999 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 05:28:51.858437 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 05:28:51.896058 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 05:28:51.896149 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 05:28:51.898177 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 05:28:51.901603 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 05:28:51.901644 kernel: BTRFS info (device dm-0): using free space tree Dec 13 05:28:51.911657 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Dec 13 05:28:51.913258 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 05:28:51.919038 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 05:28:51.924317 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 05:28:51.942887 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:28:51.942950 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 05:28:51.942971 kernel: BTRFS info (device vda6): using free space tree Dec 13 05:28:51.948854 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 05:28:51.960907 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 05:28:51.963553 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:28:51.970443 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 05:28:51.979051 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 05:28:52.063464 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 05:28:52.074108 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 05:28:52.118760 systemd-networkd[768]: lo: Link UP Dec 13 05:28:52.118780 systemd-networkd[768]: lo: Gained carrier Dec 13 05:28:52.121361 systemd-networkd[768]: Enumeration completed Dec 13 05:28:52.121974 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 05:28:52.121981 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 05:28:52.124347 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 05:28:52.124958 systemd-networkd[768]: eth0: Link UP Dec 13 05:28:52.124965 systemd-networkd[768]: eth0: Gained carrier Dec 13 05:28:52.124982 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 05:28:52.126389 systemd[1]: Reached target network.target - Network. Dec 13 05:28:52.146670 ignition[677]: Ignition 2.19.0 Dec 13 05:28:52.146698 ignition[677]: Stage: fetch-offline Dec 13 05:28:52.146806 ignition[677]: no configs at "/usr/lib/ignition/base.d" Dec 13 05:28:52.148732 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 05:28:52.146868 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:28:52.150891 systemd-networkd[768]: eth0: DHCPv4 address 10.244.18.182/30, gateway 10.244.18.181 acquired from 10.244.18.181 Dec 13 05:28:52.147071 ignition[677]: parsed url from cmdline: "" Dec 13 05:28:52.147078 ignition[677]: no config URL provided Dec 13 05:28:52.147088 ignition[677]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 05:28:52.147104 ignition[677]: no config at "/usr/lib/ignition/user.ign" Dec 13 05:28:52.158102 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
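The DHCPv4 lease above is a /30, which leaves exactly two usable addresses: the gateway (10.244.18.181) and this instance (10.244.18.182). A quick Python check of that arithmetic with the stdlib ipaddress module:

    import ipaddress

    # The address and prefix come from the DHCPv4 line in the log above.
    iface = ipaddress.ip_interface("10.244.18.182/30")
    net = iface.network
    print(net)                                # 10.244.18.180/30
    print([str(h) for h in net.hosts()])      # ['10.244.18.181', '10.244.18.182']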
Dec 13 05:28:52.147113 ignition[677]: failed to fetch config: resource requires networking Dec 13 05:28:52.147410 ignition[677]: Ignition finished successfully Dec 13 05:28:52.195330 ignition[777]: Ignition 2.19.0 Dec 13 05:28:52.195355 ignition[777]: Stage: fetch Dec 13 05:28:52.195655 ignition[777]: no configs at "/usr/lib/ignition/base.d" Dec 13 05:28:52.195676 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:28:52.195842 ignition[777]: parsed url from cmdline: "" Dec 13 05:28:52.195849 ignition[777]: no config URL provided Dec 13 05:28:52.195860 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 05:28:52.195877 ignition[777]: no config at "/usr/lib/ignition/user.ign" Dec 13 05:28:52.196100 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 05:28:52.196137 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 05:28:52.196159 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 05:28:52.215318 ignition[777]: GET result: OK Dec 13 05:28:52.216235 ignition[777]: parsing config with SHA512: 163e02d58b6c1f93d66cd33ca460e9c24c493e6f83c601349f52a7dbcf432bd657aca24bbb8f9817da7eafe815208a960d613da9938a9db95568916cecb06be7 Dec 13 05:28:52.222600 unknown[777]: fetched base config from "system" Dec 13 05:28:52.222619 unknown[777]: fetched base config from "system" Dec 13 05:28:52.223384 ignition[777]: fetch: fetch complete Dec 13 05:28:52.222629 unknown[777]: fetched user config from "openstack" Dec 13 05:28:52.223394 ignition[777]: fetch: fetch passed Dec 13 05:28:52.225504 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 05:28:52.223472 ignition[777]: Ignition finished successfully Dec 13 05:28:52.234117 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 05:28:52.256001 ignition[783]: Ignition 2.19.0 Dec 13 05:28:52.256024 ignition[783]: Stage: kargs Dec 13 05:28:52.256275 ignition[783]: no configs at "/usr/lib/ignition/base.d" Dec 13 05:28:52.256294 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:28:52.259183 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 05:28:52.257876 ignition[783]: kargs: kargs passed Dec 13 05:28:52.257958 ignition[783]: Ignition finished successfully Dec 13 05:28:52.267061 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 05:28:52.287638 ignition[789]: Ignition 2.19.0 Dec 13 05:28:52.287662 ignition[789]: Stage: disks Dec 13 05:28:52.287970 ignition[789]: no configs at "/usr/lib/ignition/base.d" Dec 13 05:28:52.291430 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 05:28:52.287991 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:28:52.289043 ignition[789]: disks: disks passed Dec 13 05:28:52.293104 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 05:28:52.289122 ignition[789]: Ignition finished successfully Dec 13 05:28:52.294682 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 05:28:52.296156 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 05:28:52.297757 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 05:28:52.299180 systemd[1]: Reached target basic.target - Basic System. 
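With no config drive attached, Ignition falls back to the OpenStack metadata service and then logs the SHA-512 of the user data it fetched. A minimal Python sketch of that fetch-and-digest step, using the endpoint shown in the log (it only resolves from inside the instance):

    import hashlib
    import urllib.request

    # Fetch user_data from the OpenStack metadata service and print its SHA-512,
    # mirroring what ignition[777] reports above. URL taken from the log.
    URL = "http://169.254.169.254/openstack/latest/user_data"

    with urllib.request.urlopen(URL, timeout=10) as resp:
        user_data = resp.read()

    digest = hashlib.sha512(user_data).hexdigest()
    print(f"parsing config with SHA512: {digest}")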
Dec 13 05:28:52.314094 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 05:28:52.334135 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 05:28:52.338083 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 05:28:52.344924 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 05:28:52.464834 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 05:28:52.465896 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 05:28:52.468183 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 05:28:52.477021 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 05:28:52.481939 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 05:28:52.483652 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 05:28:52.487059 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 13 05:28:52.491165 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 05:28:52.491208 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 05:28:52.504854 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (805) Dec 13 05:28:52.496647 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 05:28:52.521557 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:28:52.521613 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 05:28:52.521659 kernel: BTRFS info (device vda6): using free space tree Dec 13 05:28:52.521680 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 05:28:52.504012 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 05:28:52.520005 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 05:28:52.605272 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 05:28:52.613750 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Dec 13 05:28:52.621844 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 05:28:52.631581 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 05:28:52.743920 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 05:28:52.749974 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 05:28:52.752023 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 05:28:52.768899 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:28:52.793285 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
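A quick sanity check of the fsck summary above, using the figures it printed: the ROOT filesystem is still mostly empty at this point.

    # Figures from the systemd-fsck output above (files and blocks, used/total).
    files_used, files_total = 14, 1_628_000
    blocks_used, blocks_total = 120_691, 1_617_920
    print(f"inodes in use: {files_used / files_total:.4%}")
    print(f"blocks in use: {blocks_used / blocks_total:.2%}")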
Dec 13 05:28:52.808100 ignition[923]: INFO : Ignition 2.19.0 Dec 13 05:28:52.808100 ignition[923]: INFO : Stage: mount Dec 13 05:28:52.810148 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 05:28:52.810148 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:28:52.812025 ignition[923]: INFO : mount: mount passed Dec 13 05:28:52.812025 ignition[923]: INFO : Ignition finished successfully Dec 13 05:28:52.811933 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 05:28:52.894638 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 05:28:53.429085 systemd-networkd[768]: eth0: Gained IPv6LL Dec 13 05:28:54.938396 systemd-networkd[768]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:4ad:24:19ff:fef4:12b6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:4ad:24:19ff:fef4:12b6/64 assigned by NDisc. Dec 13 05:28:54.938416 systemd-networkd[768]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 05:28:59.675033 coreos-metadata[807]: Dec 13 05:28:59.674 WARN failed to locate config-drive, using the metadata service API instead Dec 13 05:28:59.700165 coreos-metadata[807]: Dec 13 05:28:59.700 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 05:28:59.717649 coreos-metadata[807]: Dec 13 05:28:59.717 INFO Fetch successful Dec 13 05:28:59.719101 coreos-metadata[807]: Dec 13 05:28:59.718 INFO wrote hostname srv-zt5iu.gb1.brightbox.com to /sysroot/etc/hostname Dec 13 05:28:59.720651 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 05:28:59.720870 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 13 05:28:59.727926 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 05:28:59.760425 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 05:28:59.789844 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (940) Dec 13 05:28:59.796815 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:28:59.796915 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 05:28:59.796946 kernel: BTRFS info (device vda6): using free space tree Dec 13 05:28:59.801845 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 05:28:59.806148 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
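The hostname agent above does much the same against the EC2-compatible endpoint: fetch the hostname and write it into the target root. A minimal Python sketch, with the URL and destination path taken from the log and error handling omitted:

    import urllib.request

    # Fetch the instance hostname from the metadata service and write it into
    # the target root, as flatcar-openstack-hostname reports above.
    URL = "http://169.254.169.254/latest/meta-data/hostname"

    with urllib.request.urlopen(URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to /sysroot/etc/hostname")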
Dec 13 05:28:59.847862 ignition[958]: INFO : Ignition 2.19.0 Dec 13 05:28:59.847862 ignition[958]: INFO : Stage: files Dec 13 05:28:59.847862 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 05:28:59.850618 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:28:59.850618 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Dec 13 05:28:59.852626 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 05:28:59.852626 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 05:28:59.854805 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 05:28:59.855828 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 05:28:59.855828 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 05:28:59.855586 unknown[958]: wrote ssh authorized keys file for user: core Dec 13 05:28:59.859394 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 05:28:59.859394 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 05:29:00.043453 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 05:29:00.356075 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 05:29:00.356075 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 05:29:00.359066 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 05:29:00.359066 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 05:29:00.359066 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 05:29:00.359066 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 05:29:00.359066 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 05:29:00.359066 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 05:29:00.359066 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 05:29:00.359066 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 05:29:00.359066 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 05:29:00.359066 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 05:29:00.359066 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 05:29:00.359066 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 05:29:00.359066 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 05:29:00.928853 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 05:29:02.608269 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 05:29:02.608269 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 05:29:02.616374 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 05:29:02.622077 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 05:29:02.622077 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 05:29:02.622077 ignition[958]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 05:29:02.622077 ignition[958]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 05:29:02.622077 ignition[958]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 05:29:02.622077 ignition[958]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 05:29:02.622077 ignition[958]: INFO : files: files passed Dec 13 05:29:02.622077 ignition[958]: INFO : Ignition finished successfully Dec 13 05:29:02.622994 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 05:29:02.637200 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 05:29:02.643137 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 05:29:02.661042 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 05:29:02.661271 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 05:29:02.677354 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 05:29:02.679033 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 05:29:02.680663 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 05:29:02.682107 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 05:29:02.683962 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 05:29:02.694119 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 05:29:02.743605 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 05:29:02.743813 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 05:29:02.745939 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Dec 13 05:29:02.747317 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 05:29:02.749032 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 05:29:02.757164 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 05:29:02.777139 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 05:29:02.786098 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 05:29:02.799830 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 05:29:02.801748 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 05:29:02.803766 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 05:29:02.804551 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 05:29:02.804767 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 05:29:02.806958 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 05:29:02.808011 systemd[1]: Stopped target basic.target - Basic System. Dec 13 05:29:02.809522 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 05:29:02.810933 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 05:29:02.812436 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 05:29:02.814087 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 05:29:02.815700 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 05:29:02.817346 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 05:29:02.818906 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 05:29:02.820539 systemd[1]: Stopped target swap.target - Swaps. Dec 13 05:29:02.821926 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 05:29:02.822134 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 05:29:02.823978 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 05:29:02.825000 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 05:29:02.826432 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 05:29:02.826625 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 05:29:02.828077 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 05:29:02.828255 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 05:29:02.829880 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 05:29:02.830048 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 05:29:02.830982 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 05:29:02.831140 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 05:29:02.840132 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 05:29:02.844094 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 05:29:02.845587 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 05:29:02.846755 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 05:29:02.850398 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 13 05:29:02.852624 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 05:29:02.863204 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 05:29:02.864208 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 05:29:02.868827 ignition[1011]: INFO : Ignition 2.19.0 Dec 13 05:29:02.868827 ignition[1011]: INFO : Stage: umount Dec 13 05:29:02.868827 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 05:29:02.868827 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:29:02.868827 ignition[1011]: INFO : umount: umount passed Dec 13 05:29:02.868827 ignition[1011]: INFO : Ignition finished successfully Dec 13 05:29:02.875842 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 05:29:02.876043 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 05:29:02.877551 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 05:29:02.877705 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 05:29:02.879969 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 05:29:02.880050 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 05:29:02.881955 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 05:29:02.882021 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 05:29:02.882770 systemd[1]: Stopped target network.target - Network. Dec 13 05:29:02.883423 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 05:29:02.883492 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 05:29:02.885051 systemd[1]: Stopped target paths.target - Path Units. Dec 13 05:29:02.886299 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 05:29:02.889918 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 05:29:02.891022 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 05:29:02.892399 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 05:29:02.893975 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 05:29:02.894047 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 05:29:02.895408 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 05:29:02.895474 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 05:29:02.896785 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 05:29:02.896883 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 05:29:02.898366 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 05:29:02.898433 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 05:29:02.900310 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 05:29:02.903265 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 05:29:02.906694 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 05:29:02.907049 systemd-networkd[768]: eth0: DHCPv6 lease lost Dec 13 05:29:02.909404 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 05:29:02.909543 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 05:29:02.912957 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 05:29:02.913119 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Dec 13 05:29:02.915715 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 05:29:02.915924 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 05:29:02.921445 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 05:29:02.922015 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 05:29:02.923239 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 05:29:02.923319 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 05:29:02.929981 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 05:29:02.932224 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 05:29:02.932320 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 05:29:02.934252 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 05:29:02.934327 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 05:29:02.935972 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 05:29:02.936047 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 05:29:02.937380 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 05:29:02.937450 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 05:29:02.942056 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 05:29:02.951268 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 05:29:02.951491 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 05:29:02.955944 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 05:29:02.956125 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 05:29:02.957561 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 05:29:02.957656 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 05:29:02.958885 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 05:29:02.958945 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 05:29:02.960231 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 05:29:02.960301 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 05:29:02.962485 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 05:29:02.962551 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 05:29:02.964052 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 05:29:02.964124 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 05:29:02.977027 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 05:29:02.980199 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 05:29:02.980301 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 05:29:02.984014 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 05:29:02.984116 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 05:29:02.984919 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Dec 13 05:29:02.984990 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 05:29:02.985781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 05:29:02.985871 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:29:02.988413 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 05:29:02.988563 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 05:29:02.991602 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 05:29:02.999208 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 05:29:03.014314 systemd[1]: Switching root. Dec 13 05:29:03.048582 systemd-journald[201]: Journal stopped Dec 13 05:29:04.542358 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Dec 13 05:29:04.542451 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 05:29:04.542476 kernel: SELinux: policy capability open_perms=1 Dec 13 05:29:04.542511 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 05:29:04.542537 kernel: SELinux: policy capability always_check_network=0 Dec 13 05:29:04.542556 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 05:29:04.542580 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 05:29:04.542606 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 05:29:04.542638 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 05:29:04.542657 kernel: audit: type=1403 audit(1734067743.280:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 05:29:04.542688 systemd[1]: Successfully loaded SELinux policy in 54.993ms. Dec 13 05:29:04.542717 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.220ms. Dec 13 05:29:04.542752 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 05:29:04.542774 systemd[1]: Detected virtualization kvm. Dec 13 05:29:04.544869 systemd[1]: Detected architecture x86-64. Dec 13 05:29:04.544906 systemd[1]: Detected first boot. Dec 13 05:29:04.544928 systemd[1]: Hostname set to . Dec 13 05:29:04.544948 systemd[1]: Initializing machine ID from VM UUID. Dec 13 05:29:04.544968 zram_generator::config[1054]: No configuration found. Dec 13 05:29:04.544998 systemd[1]: Populated /etc with preset unit settings. Dec 13 05:29:04.545039 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 05:29:04.545060 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 05:29:04.545081 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 05:29:04.545102 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 05:29:04.545122 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 05:29:04.545142 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 05:29:04.545161 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 05:29:04.545181 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 05:29:04.545213 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
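"Initializing machine ID from VM UUID" refers to deriving the machine ID from the SMBIOS product UUID that KVM exposes. A simplified Python sketch of that idea (systemd itself handles byte-order quirks and several fallbacks that are skipped here):

    import pathlib

    # On KVM the SMBIOS product UUID is visible via sysfs; a machine-id-shaped
    # string is that UUID lowercased with the dashes dropped. Simplified sketch.
    product_uuid = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id = product_uuid.replace("-", "").lower()
    print(machine_id)   # 32 hex chars, the shape of /etc/machine-id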
Dec 13 05:29:04.545235 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 05:29:04.545255 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 05:29:04.545275 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 05:29:04.545295 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 05:29:04.545315 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 05:29:04.545335 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 05:29:04.545355 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 05:29:04.545375 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 05:29:04.545407 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 05:29:04.545428 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 05:29:04.545448 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 05:29:04.545469 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 05:29:04.545488 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 05:29:04.545508 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 05:29:04.545541 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 05:29:04.545563 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 05:29:04.545583 systemd[1]: Reached target slices.target - Slice Units. Dec 13 05:29:04.545604 systemd[1]: Reached target swap.target - Swaps. Dec 13 05:29:04.545635 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 05:29:04.545656 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 05:29:04.545677 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 05:29:04.545711 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 05:29:04.545751 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 05:29:04.545773 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 05:29:04.547853 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 05:29:04.547916 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 05:29:04.547939 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 05:29:04.547959 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:29:04.547980 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 05:29:04.548016 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 05:29:04.548038 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 05:29:04.548060 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 05:29:04.548080 systemd[1]: Reached target machines.target - Containers. 
Dec 13 05:29:04.548101 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 05:29:04.548121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 05:29:04.548141 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 05:29:04.548162 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 05:29:04.548194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 05:29:04.548228 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 05:29:04.548250 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 05:29:04.548270 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 05:29:04.548291 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 05:29:04.548311 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 05:29:04.548331 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 05:29:04.548351 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 05:29:04.548371 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 05:29:04.548403 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 05:29:04.548424 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 05:29:04.548444 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 05:29:04.548464 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 05:29:04.548485 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 05:29:04.548504 kernel: fuse: init (API version 7.39) Dec 13 05:29:04.548525 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 05:29:04.548546 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 05:29:04.548567 systemd[1]: Stopped verity-setup.service. Dec 13 05:29:04.548600 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:29:04.548633 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 05:29:04.548655 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 05:29:04.548676 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 05:29:04.548696 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 05:29:04.548730 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 05:29:04.548752 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 05:29:04.548773 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 05:29:04.552896 kernel: loop: module loaded Dec 13 05:29:04.552993 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 05:29:04.553040 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 05:29:04.553074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 05:29:04.553098 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 13 05:29:04.553139 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 05:29:04.553162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 05:29:04.553183 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 05:29:04.553204 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 05:29:04.553225 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 05:29:04.553245 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 05:29:04.553278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 05:29:04.553302 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 05:29:04.553385 systemd-journald[1147]: Collecting audit messages is disabled. Dec 13 05:29:04.553434 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 05:29:04.553459 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 05:29:04.553487 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 05:29:04.553521 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 05:29:04.553544 systemd-journald[1147]: Journal started Dec 13 05:29:04.553578 systemd-journald[1147]: Runtime Journal (/run/log/journal/9ba954ff02a9430e92cad752253ff53b) is 4.7M, max 38.0M, 33.2M free. Dec 13 05:29:04.088332 systemd[1]: Queued start job for default target multi-user.target. Dec 13 05:29:04.112189 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 05:29:04.113034 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 05:29:04.566818 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 05:29:04.566901 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 05:29:04.571652 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 05:29:04.590465 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 05:29:04.590555 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 05:29:04.593813 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 05:29:04.593887 kernel: ACPI: bus type drm_connector registered Dec 13 05:29:04.597816 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 05:29:04.616856 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 05:29:04.621819 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 05:29:04.634872 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 05:29:04.637832 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 05:29:04.651581 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 05:29:04.663052 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 05:29:04.675991 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Dec 13 05:29:04.684853 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 05:29:04.695058 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 05:29:04.695311 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 05:29:04.696491 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 05:29:04.698016 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 05:29:04.702242 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 05:29:04.711173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 05:29:04.736881 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 05:29:04.763298 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 05:29:04.769910 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 05:29:04.779140 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 05:29:04.785049 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 05:29:04.810723 systemd-journald[1147]: Time spent on flushing to /var/log/journal/9ba954ff02a9430e92cad752253ff53b is 127.543ms for 1145 entries. Dec 13 05:29:04.810723 systemd-journald[1147]: System Journal (/var/log/journal/9ba954ff02a9430e92cad752253ff53b) is 8.0M, max 584.8M, 576.8M free. Dec 13 05:29:04.987085 systemd-journald[1147]: Received client request to flush runtime journal. Dec 13 05:29:04.987185 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 05:29:04.987230 kernel: loop1: detected capacity change from 0 to 8 Dec 13 05:29:04.987274 kernel: loop2: detected capacity change from 0 to 142488 Dec 13 05:29:04.863425 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 05:29:04.870107 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 05:29:04.899084 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Dec 13 05:29:04.899105 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Dec 13 05:29:04.934653 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 05:29:04.937048 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 05:29:04.955849 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 05:29:04.971335 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 05:29:04.975303 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 05:29:04.990413 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 05:29:05.008828 kernel: loop3: detected capacity change from 0 to 140768 Dec 13 05:29:05.078834 kernel: loop4: detected capacity change from 0 to 211296 Dec 13 05:29:05.106657 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 05:29:05.114829 kernel: loop5: detected capacity change from 0 to 8 Dec 13 05:29:05.123832 kernel: loop6: detected capacity change from 0 to 142488 Dec 13 05:29:05.121923 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 13 05:29:05.155847 kernel: loop7: detected capacity change from 0 to 140768 Dec 13 05:29:05.176459 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Dec 13 05:29:05.177085 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Dec 13 05:29:05.194300 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 05:29:05.197549 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Dec 13 05:29:05.201023 (sd-merge)[1211]: Merged extensions into '/usr'. Dec 13 05:29:05.211872 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 05:29:05.211904 systemd[1]: Reloading... Dec 13 05:29:05.354122 zram_generator::config[1241]: No configuration found. Dec 13 05:29:05.611866 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 05:29:05.616659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 05:29:05.683571 systemd[1]: Reloading finished in 470 ms. Dec 13 05:29:05.744294 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 05:29:05.747111 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 05:29:05.759029 systemd[1]: Starting ensure-sysext.service... Dec 13 05:29:05.776946 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 05:29:05.794146 systemd[1]: Reloading requested from client PID 1297 ('systemctl') (unit ensure-sysext.service)... Dec 13 05:29:05.794185 systemd[1]: Reloading... Dec 13 05:29:05.860734 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 05:29:05.864400 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 05:29:05.868634 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 05:29:05.871288 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Dec 13 05:29:05.871561 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Dec 13 05:29:05.887070 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 05:29:05.887092 systemd-tmpfiles[1298]: Skipping /boot Dec 13 05:29:05.913872 zram_generator::config[1324]: No configuration found. Dec 13 05:29:05.926512 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 05:29:05.926743 systemd-tmpfiles[1298]: Skipping /boot Dec 13 05:29:06.105684 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 05:29:06.174821 systemd[1]: Reloading finished in 379 ms. Dec 13 05:29:06.198103 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 05:29:06.207696 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 05:29:06.221082 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 05:29:06.226074 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
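The (sd-merge) lines show systemd-sysext overlaying the extension images, including the kubernetes.raw link Ignition created earlier, onto /usr. A rough Python sketch that just enumerates the images such a merge would consider; the search directories are the usual sysext locations, and precedence and release-file checks are omitted:

    import pathlib

    # List candidate sysext images, roughly as the "(sd-merge)" step does above.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in map(pathlib.Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for image in sorted(d.glob("*.raw")):
            print(f"would merge {image.name} from {d}")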
Dec 13 05:29:06.235119 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 05:29:06.244012 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 05:29:06.249527 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 05:29:06.264508 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 05:29:06.279057 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:29:06.279376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 05:29:06.288297 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 05:29:06.297242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 05:29:06.306414 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 05:29:06.309038 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 05:29:06.309261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:29:06.315596 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:29:06.315938 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 05:29:06.316205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 05:29:06.329263 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 05:29:06.330767 systemd-udevd[1391]: Using default interface naming scheme 'v255'. Dec 13 05:29:06.330786 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:29:06.339248 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 05:29:06.341642 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 05:29:06.348676 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:29:06.349997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 05:29:06.364234 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 05:29:06.381866 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 05:29:06.385030 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 05:29:06.385255 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:29:06.390029 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 05:29:06.393863 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 05:29:06.399993 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 13 05:29:06.402681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 05:29:06.402966 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 05:29:06.405309 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 05:29:06.407498 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 05:29:06.411260 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 05:29:06.411502 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 05:29:06.414658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 05:29:06.415133 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 05:29:06.423714 systemd[1]: Finished ensure-sysext.service. Dec 13 05:29:06.460363 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 05:29:06.461267 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 05:29:06.461403 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 05:29:06.472114 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 05:29:06.484432 augenrules[1433]: No rules Dec 13 05:29:06.490085 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 05:29:06.490980 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 05:29:06.491682 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 05:29:06.497626 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 05:29:06.542056 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 05:29:06.550420 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 05:29:06.558831 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 05:29:06.679863 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1413) Dec 13 05:29:06.685875 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1413) Dec 13 05:29:06.696832 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1415) Dec 13 05:29:06.698768 systemd-networkd[1429]: lo: Link UP Dec 13 05:29:06.699318 systemd-networkd[1429]: lo: Gained carrier Dec 13 05:29:06.702522 systemd-networkd[1429]: Enumeration completed Dec 13 05:29:06.702890 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 05:29:06.704933 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 05:29:06.705068 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 05:29:06.706826 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 05:29:06.707012 systemd-networkd[1429]: eth0: Link UP Dec 13 05:29:06.707119 systemd-networkd[1429]: eth0: Gained carrier Dec 13 05:29:06.707235 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 05:29:06.713078 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 05:29:06.725979 systemd-networkd[1429]: eth0: DHCPv4 address 10.244.18.182/30, gateway 10.244.18.181 acquired from 10.244.18.181 Dec 13 05:29:06.766036 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 05:29:06.768324 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 05:29:06.779023 systemd-resolved[1388]: Positive Trust Anchors: Dec 13 05:29:06.779615 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 05:29:06.779667 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 05:29:06.804223 systemd-resolved[1388]: Using system hostname 'srv-zt5iu.gb1.brightbox.com'. Dec 13 05:29:06.808609 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 05:29:06.810189 systemd[1]: Reached target network.target - Network. Dec 13 05:29:06.820129 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 05:29:06.825353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 05:29:06.850324 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 05:29:07.796177 systemd-resolved[1388]: Clock change detected. Flushing caches. Dec 13 05:29:07.796396 systemd-timesyncd[1432]: Contacted time server 212.69.41.125:123 (0.flatcar.pool.ntp.org). Dec 13 05:29:07.796811 systemd-timesyncd[1432]: Initial clock synchronization to Fri 2024-12-13 05:29:07.796055 UTC. Dec 13 05:29:07.799568 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 05:29:07.830087 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 05:29:07.843677 kernel: ACPI: button: Power Button [PWRF] Dec 13 05:29:07.848658 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 05:29:07.881668 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 05:29:07.890933 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 05:29:07.891226 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 05:29:07.926660 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 05:29:07.959077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 05:29:08.152682 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:29:08.192675 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Dec 13 05:29:08.198927 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 05:29:08.221274 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 05:29:08.258848 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 05:29:08.260146 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 05:29:08.260955 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 05:29:08.262123 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 05:29:08.262997 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 05:29:08.264205 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 05:29:08.265107 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 05:29:08.265931 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 05:29:08.266743 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 05:29:08.266796 systemd[1]: Reached target paths.target - Path Units. Dec 13 05:29:08.267457 systemd[1]: Reached target timers.target - Timer Units. Dec 13 05:29:08.270790 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 05:29:08.274085 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 05:29:08.282108 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 05:29:08.285546 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 05:29:08.287244 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 05:29:08.288151 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 05:29:08.288894 systemd[1]: Reached target basic.target - Basic System. Dec 13 05:29:08.289668 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 05:29:08.289724 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 05:29:08.296970 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 05:29:08.303921 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 05:29:08.308333 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 05:29:08.313998 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 05:29:08.327489 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 05:29:08.335877 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 05:29:08.336777 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 05:29:08.345825 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 05:29:08.358881 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 05:29:08.365649 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 05:29:08.374848 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Dec 13 05:29:08.377560 jq[1480]: false Dec 13 05:29:08.383899 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 05:29:08.385601 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 05:29:08.386455 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 05:29:08.396077 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 05:29:08.403805 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 05:29:08.413368 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 05:29:08.431474 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 05:29:08.432868 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 05:29:08.433485 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 05:29:08.433783 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 05:29:08.442300 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 05:29:08.442593 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 05:29:08.446858 extend-filesystems[1482]: Found loop4 Dec 13 05:29:08.446858 extend-filesystems[1482]: Found loop5 Dec 13 05:29:08.446858 extend-filesystems[1482]: Found loop6 Dec 13 05:29:08.446858 extend-filesystems[1482]: Found loop7 Dec 13 05:29:08.446858 extend-filesystems[1482]: Found vda Dec 13 05:29:08.446858 extend-filesystems[1482]: Found vda1 Dec 13 05:29:08.446858 extend-filesystems[1482]: Found vda2 Dec 13 05:29:08.446858 extend-filesystems[1482]: Found vda3 Dec 13 05:29:08.446858 extend-filesystems[1482]: Found usr Dec 13 05:29:08.446858 extend-filesystems[1482]: Found vda4 Dec 13 05:29:08.446858 extend-filesystems[1482]: Found vda6 Dec 13 05:29:08.446858 extend-filesystems[1482]: Found vda7 Dec 13 05:29:08.446858 extend-filesystems[1482]: Found vda9 Dec 13 05:29:08.446858 extend-filesystems[1482]: Checking size of /dev/vda9 Dec 13 05:29:08.543730 jq[1496]: true Dec 13 05:29:08.543920 update_engine[1491]: I20241213 05:29:08.484383 1491 main.cc:92] Flatcar Update Engine starting Dec 13 05:29:08.543920 update_engine[1491]: I20241213 05:29:08.494578 1491 update_check_scheduler.cc:74] Next update check in 3m46s Dec 13 05:29:08.488975 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 05:29:08.486701 dbus-daemon[1479]: [system] SELinux support is enabled Dec 13 05:29:08.550651 extend-filesystems[1482]: Resized partition /dev/vda9 Dec 13 05:29:08.524155 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 05:29:08.492806 dbus-daemon[1479]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1429 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 05:29:08.529228 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Dec 13 05:29:08.527000 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 05:29:08.555521 jq[1509]: true Dec 13 05:29:08.529288 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 05:29:08.535219 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 05:29:08.535266 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 05:29:08.550328 systemd[1]: Started update-engine.service - Update Engine. Dec 13 05:29:08.556955 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 05:29:08.561649 extend-filesystems[1519]: resize2fs 1.47.1 (20-May-2024) Dec 13 05:29:08.575922 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 05:29:08.589657 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 13 05:29:08.589933 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 05:29:08.593984 tar[1499]: linux-amd64/helm Dec 13 05:29:08.652663 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1414) Dec 13 05:29:08.703216 systemd-logind[1489]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 05:29:08.703275 systemd-logind[1489]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 05:29:08.706146 systemd-logind[1489]: New seat seat0. Dec 13 05:29:08.708782 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 05:29:08.816746 bash[1537]: Updated "/home/core/.ssh/authorized_keys" Dec 13 05:29:08.815851 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 05:29:08.831208 systemd[1]: Starting sshkeys.service... Dec 13 05:29:08.885592 sshd_keygen[1514]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 05:29:08.915420 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 05:29:08.931851 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 05:29:08.995008 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 05:29:09.009515 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 05:29:09.022245 extend-filesystems[1519]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 05:29:09.022245 extend-filesystems[1519]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 05:29:09.022245 extend-filesystems[1519]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 05:29:09.031011 extend-filesystems[1482]: Resized filesystem in /dev/vda9 Dec 13 05:29:09.023481 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 05:29:09.023798 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 05:29:09.056865 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 05:29:09.058638 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
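The resize2fs messages above report the root filesystem on /dev/vda9 growing online from 1617920 to 15121403 blocks of 4 KiB. As a quick sanity check of those figures (a minimal sketch, not part of the captured log; the block counts and block size are taken from the extend-filesystems output above):

```python
# Sanity-check of the online resize reported by extend-filesystems/resize2fs above.
# Block counts and the 4 KiB block size come from the logged messages; the helper
# name is illustrative only.
BLOCK_SIZE = 4096          # bytes, "(4k) blocks" per the log
OLD_BLOCKS = 1_617_920     # blocks before the online resize
NEW_BLOCKS = 15_121_403    # blocks after the online resize

def blocks_to_gib(blocks: int) -> float:
    """Convert an ext4 block count to GiB."""
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {blocks_to_gib(OLD_BLOCKS):.2f} GiB")   # roughly 6.2 GiB
print(f"after:  {blocks_to_gib(NEW_BLOCKS):.2f} GiB")   # roughly 57.7 GiB
```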
Dec 13 05:29:09.059005 dbus-daemon[1479]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1522 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 05:29:09.062645 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 05:29:09.074338 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 05:29:09.081125 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 05:29:09.094463 systemd[1]: Started sshd@0-10.244.18.182:22-147.75.109.163:47930.service - OpenSSH per-connection server daemon (147.75.109.163:47930). Dec 13 05:29:09.115057 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 05:29:09.115955 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 05:29:09.137982 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 05:29:09.173686 polkitd[1564]: Started polkitd version 121 Dec 13 05:29:09.191993 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 05:29:09.196241 systemd-networkd[1429]: eth0: Gained IPv6LL Dec 13 05:29:09.208303 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 05:29:09.215436 polkitd[1564]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 05:29:09.215555 polkitd[1564]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 05:29:09.218199 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 05:29:09.219588 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 05:29:09.221522 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 05:29:09.226998 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 05:29:09.233559 polkitd[1564]: Finished loading, compiling and executing 2 rules Dec 13 05:29:09.234516 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 05:29:09.235809 polkitd[1564]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 05:29:09.240210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:29:09.249248 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 05:29:09.251491 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 05:29:09.277472 systemd-hostnamed[1522]: Hostname set to (static) Dec 13 05:29:09.296614 containerd[1511]: time="2024-12-13T05:29:09.290494420Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 05:29:09.361660 containerd[1511]: time="2024-12-13T05:29:09.357474699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 05:29:09.368460 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 05:29:09.369680 containerd[1511]: time="2024-12-13T05:29:09.369207764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:29:09.369680 containerd[1511]: time="2024-12-13T05:29:09.369267478Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Dec 13 05:29:09.369680 containerd[1511]: time="2024-12-13T05:29:09.369299193Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 05:29:09.369680 containerd[1511]: time="2024-12-13T05:29:09.369583784Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 05:29:09.370332 containerd[1511]: time="2024-12-13T05:29:09.369620927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 05:29:09.370332 containerd[1511]: time="2024-12-13T05:29:09.370237327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:29:09.370332 containerd[1511]: time="2024-12-13T05:29:09.370261682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 05:29:09.374056 containerd[1511]: time="2024-12-13T05:29:09.373873654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:29:09.374056 containerd[1511]: time="2024-12-13T05:29:09.373913004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 05:29:09.374056 containerd[1511]: time="2024-12-13T05:29:09.373939901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:29:09.374056 containerd[1511]: time="2024-12-13T05:29:09.373956540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 05:29:09.374617 containerd[1511]: time="2024-12-13T05:29:09.374110180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 05:29:09.374617 containerd[1511]: time="2024-12-13T05:29:09.374533692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 05:29:09.375241 containerd[1511]: time="2024-12-13T05:29:09.375139422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:29:09.375241 containerd[1511]: time="2024-12-13T05:29:09.375173864Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 05:29:09.375578 containerd[1511]: time="2024-12-13T05:29:09.375301764Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 05:29:09.376497 containerd[1511]: time="2024-12-13T05:29:09.376190808Z" level=info msg="metadata content store policy set" policy=shared Dec 13 05:29:09.387582 containerd[1511]: time="2024-12-13T05:29:09.387519043Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 05:29:09.387740 containerd[1511]: time="2024-12-13T05:29:09.387646701Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Dec 13 05:29:09.387740 containerd[1511]: time="2024-12-13T05:29:09.387680153Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 05:29:09.387740 containerd[1511]: time="2024-12-13T05:29:09.387705162Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 05:29:09.387906 containerd[1511]: time="2024-12-13T05:29:09.387740506Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 05:29:09.389445 containerd[1511]: time="2024-12-13T05:29:09.388961923Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 05:29:09.389445 containerd[1511]: time="2024-12-13T05:29:09.389403704Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 05:29:09.389761 containerd[1511]: time="2024-12-13T05:29:09.389592547Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 05:29:09.389761 containerd[1511]: time="2024-12-13T05:29:09.389619443Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 05:29:09.389761 containerd[1511]: time="2024-12-13T05:29:09.389667423Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 05:29:09.389761 containerd[1511]: time="2024-12-13T05:29:09.389690504Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 05:29:09.389761 containerd[1511]: time="2024-12-13T05:29:09.389720900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 05:29:09.389761 containerd[1511]: time="2024-12-13T05:29:09.389746779Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 05:29:09.391079 containerd[1511]: time="2024-12-13T05:29:09.389769744Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 05:29:09.391079 containerd[1511]: time="2024-12-13T05:29:09.389792589Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 05:29:09.391079 containerd[1511]: time="2024-12-13T05:29:09.389813535Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 05:29:09.391079 containerd[1511]: time="2024-12-13T05:29:09.389832747Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 05:29:09.391079 containerd[1511]: time="2024-12-13T05:29:09.389860590Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 05:29:09.391079 containerd[1511]: time="2024-12-13T05:29:09.389893817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.391079 containerd[1511]: time="2024-12-13T05:29:09.389915608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.391079 containerd[1511]: time="2024-12-13T05:29:09.389951337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Dec 13 05:29:09.391079 containerd[1511]: time="2024-12-13T05:29:09.389976775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.391079 containerd[1511]: time="2024-12-13T05:29:09.389997028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.391079 containerd[1511]: time="2024-12-13T05:29:09.390017175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.391079 containerd[1511]: time="2024-12-13T05:29:09.390042131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.391079 containerd[1511]: time="2024-12-13T05:29:09.390064008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.391079 containerd[1511]: time="2024-12-13T05:29:09.390086528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.391610 containerd[1511]: time="2024-12-13T05:29:09.390110531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.391610 containerd[1511]: time="2024-12-13T05:29:09.390133096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.391610 containerd[1511]: time="2024-12-13T05:29:09.390153143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.391610 containerd[1511]: time="2024-12-13T05:29:09.390174140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.391610 containerd[1511]: time="2024-12-13T05:29:09.390199244Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 05:29:09.391610 containerd[1511]: time="2024-12-13T05:29:09.390240810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.391610 containerd[1511]: time="2024-12-13T05:29:09.390262935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.391610 containerd[1511]: time="2024-12-13T05:29:09.390280453Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 05:29:09.391610 containerd[1511]: time="2024-12-13T05:29:09.390363063Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 05:29:09.391610 containerd[1511]: time="2024-12-13T05:29:09.390411302Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 05:29:09.391610 containerd[1511]: time="2024-12-13T05:29:09.390432804Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 05:29:09.391610 containerd[1511]: time="2024-12-13T05:29:09.390451592Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 05:29:09.391610 containerd[1511]: time="2024-12-13T05:29:09.390467529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Dec 13 05:29:09.393496 containerd[1511]: time="2024-12-13T05:29:09.390522301Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 05:29:09.393496 containerd[1511]: time="2024-12-13T05:29:09.390551455Z" level=info msg="NRI interface is disabled by configuration." Dec 13 05:29:09.393496 containerd[1511]: time="2024-12-13T05:29:09.390582523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 05:29:09.395064 containerd[1511]: time="2024-12-13T05:29:09.394933941Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 05:29:09.395064 containerd[1511]: time="2024-12-13T05:29:09.395043894Z" level=info msg="Connect containerd service" Dec 13 05:29:09.395865 containerd[1511]: time="2024-12-13T05:29:09.395115786Z" level=info msg="using legacy CRI server" Dec 13 05:29:09.395865 containerd[1511]: time="2024-12-13T05:29:09.395132573Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 05:29:09.395865 containerd[1511]: 
time="2024-12-13T05:29:09.395348636Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 05:29:09.396675 containerd[1511]: time="2024-12-13T05:29:09.396394903Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 05:29:09.396926 containerd[1511]: time="2024-12-13T05:29:09.396858502Z" level=info msg="Start subscribing containerd event" Dec 13 05:29:09.397064 containerd[1511]: time="2024-12-13T05:29:09.397037890Z" level=info msg="Start recovering state" Dec 13 05:29:09.398117 containerd[1511]: time="2024-12-13T05:29:09.397274608Z" level=info msg="Start event monitor" Dec 13 05:29:09.398117 containerd[1511]: time="2024-12-13T05:29:09.397316846Z" level=info msg="Start snapshots syncer" Dec 13 05:29:09.398117 containerd[1511]: time="2024-12-13T05:29:09.397342517Z" level=info msg="Start cni network conf syncer for default" Dec 13 05:29:09.398117 containerd[1511]: time="2024-12-13T05:29:09.397357623Z" level=info msg="Start streaming server" Dec 13 05:29:09.399199 containerd[1511]: time="2024-12-13T05:29:09.399069158Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 05:29:09.399199 containerd[1511]: time="2024-12-13T05:29:09.399154661Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 05:29:09.400922 containerd[1511]: time="2024-12-13T05:29:09.399265158Z" level=info msg="containerd successfully booted in 0.110393s" Dec 13 05:29:09.399446 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 05:29:09.671673 tar[1499]: linux-amd64/LICENSE Dec 13 05:29:09.671673 tar[1499]: linux-amd64/README.md Dec 13 05:29:09.683460 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 05:29:10.032463 sshd[1565]: Accepted publickey for core from 147.75.109.163 port 47930 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:29:10.039768 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:29:10.063604 systemd-logind[1489]: New session 1 of user core. Dec 13 05:29:10.066395 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 05:29:10.078244 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 05:29:10.100246 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 05:29:10.113298 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 05:29:10.125509 (systemd)[1603]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 05:29:10.278949 systemd[1603]: Queued start job for default target default.target. Dec 13 05:29:10.290410 systemd[1603]: Created slice app.slice - User Application Slice. Dec 13 05:29:10.290724 systemd[1603]: Reached target paths.target - Paths. Dec 13 05:29:10.290900 systemd[1603]: Reached target timers.target - Timers. Dec 13 05:29:10.295897 systemd[1603]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 05:29:10.320815 systemd[1603]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 05:29:10.321030 systemd[1603]: Reached target sockets.target - Sockets. Dec 13 05:29:10.321058 systemd[1603]: Reached target basic.target - Basic System. 
Dec 13 05:29:10.321129 systemd[1603]: Reached target default.target - Main User Target. Dec 13 05:29:10.321198 systemd[1603]: Startup finished in 184ms. Dec 13 05:29:10.330860 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 05:29:10.344268 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 05:29:10.383434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:29:10.383658 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:29:10.651921 systemd-networkd[1429]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:4ad:24:19ff:fef4:12b6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:4ad:24:19ff:fef4:12b6/64 assigned by NDisc. Dec 13 05:29:10.651935 systemd-networkd[1429]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 05:29:10.985219 systemd[1]: Started sshd@1-10.244.18.182:22-147.75.109.163:43070.service - OpenSSH per-connection server daemon (147.75.109.163:43070). Dec 13 05:29:11.240787 kubelet[1617]: E1213 05:29:11.240545 1617 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:29:11.244724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:29:11.245004 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 05:29:11.246061 systemd[1]: kubelet.service: Consumed 1.102s CPU time. Dec 13 05:29:11.902693 sshd[1627]: Accepted publickey for core from 147.75.109.163 port 43070 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:29:11.904916 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:29:11.913718 systemd-logind[1489]: New session 2 of user core. Dec 13 05:29:11.928387 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 05:29:12.521451 sshd[1627]: pam_unix(sshd:session): session closed for user core Dec 13 05:29:12.525140 systemd[1]: sshd@1-10.244.18.182:22-147.75.109.163:43070.service: Deactivated successfully. Dec 13 05:29:12.528126 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 05:29:12.530307 systemd-logind[1489]: Session 2 logged out. Waiting for processes to exit. Dec 13 05:29:12.532132 systemd-logind[1489]: Removed session 2. Dec 13 05:29:12.680316 systemd[1]: Started sshd@2-10.244.18.182:22-147.75.109.163:43082.service - OpenSSH per-connection server daemon (147.75.109.163:43082). Dec 13 05:29:13.564154 sshd[1637]: Accepted publickey for core from 147.75.109.163 port 43082 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:29:13.566194 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:29:13.573083 systemd-logind[1489]: New session 3 of user core. Dec 13 05:29:13.580081 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 05:29:14.188535 sshd[1637]: pam_unix(sshd:session): session closed for user core Dec 13 05:29:14.192436 systemd[1]: sshd@2-10.244.18.182:22-147.75.109.163:43082.service: Deactivated successfully. Dec 13 05:29:14.194972 systemd[1]: session-3.scope: Deactivated successfully. 
Dec 13 05:29:14.197143 systemd-logind[1489]: Session 3 logged out. Waiting for processes to exit. Dec 13 05:29:14.198896 systemd-logind[1489]: Removed session 3. Dec 13 05:29:14.297469 login[1575]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 05:29:14.309105 login[1577]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 05:29:14.314466 systemd-logind[1489]: New session 4 of user core. Dec 13 05:29:14.322036 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 05:29:14.326989 systemd-logind[1489]: New session 5 of user core. Dec 13 05:29:14.331935 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 05:29:15.428256 coreos-metadata[1478]: Dec 13 05:29:15.428 WARN failed to locate config-drive, using the metadata service API instead Dec 13 05:29:15.457899 coreos-metadata[1478]: Dec 13 05:29:15.457 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 13 05:29:15.465656 coreos-metadata[1478]: Dec 13 05:29:15.465 INFO Fetch failed with 404: resource not found Dec 13 05:29:15.465904 coreos-metadata[1478]: Dec 13 05:29:15.465 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 05:29:15.466580 coreos-metadata[1478]: Dec 13 05:29:15.466 INFO Fetch successful Dec 13 05:29:15.466817 coreos-metadata[1478]: Dec 13 05:29:15.466 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 05:29:15.480838 coreos-metadata[1478]: Dec 13 05:29:15.480 INFO Fetch successful Dec 13 05:29:15.481067 coreos-metadata[1478]: Dec 13 05:29:15.481 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 05:29:15.495674 coreos-metadata[1478]: Dec 13 05:29:15.495 INFO Fetch successful Dec 13 05:29:15.495839 coreos-metadata[1478]: Dec 13 05:29:15.495 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 05:29:15.509750 coreos-metadata[1478]: Dec 13 05:29:15.509 INFO Fetch successful Dec 13 05:29:15.509929 coreos-metadata[1478]: Dec 13 05:29:15.509 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 05:29:15.529798 coreos-metadata[1478]: Dec 13 05:29:15.529 INFO Fetch successful Dec 13 05:29:15.564935 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 05:29:15.566994 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 05:29:16.098119 coreos-metadata[1545]: Dec 13 05:29:16.097 WARN failed to locate config-drive, using the metadata service API instead Dec 13 05:29:16.121247 coreos-metadata[1545]: Dec 13 05:29:16.120 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 05:29:16.156250 coreos-metadata[1545]: Dec 13 05:29:16.156 INFO Fetch successful Dec 13 05:29:16.156809 coreos-metadata[1545]: Dec 13 05:29:16.156 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 05:29:16.199794 coreos-metadata[1545]: Dec 13 05:29:16.199 INFO Fetch successful Dec 13 05:29:16.202366 unknown[1545]: wrote ssh authorized keys file for user: core Dec 13 05:29:16.223303 update-ssh-keys[1679]: Updated "/home/core/.ssh/authorized_keys" Dec 13 05:29:16.224154 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 05:29:16.227816 systemd[1]: Finished sshkeys.service. 
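The coreos-metadata entries above show the agent failing to find a config drive and falling back to the EC2-style metadata API, fetching hostname, instance-id, instance-type and addresses one attribute at a time. A rough Python sketch of the same kind of request, assuming only the endpoints that appear in the log (this is not the agent's own code; error handling and naming are illustrative):

```python
# Illustrative only: mirrors the metadata fetches coreos-metadata logs above.
# The base URL and attribute paths are the ones printed in the log.
import urllib.request

METADATA_BASE = "http://169.254.169.254/latest/meta-data"

def fetch(path: str) -> str:
    """Fetch one metadata attribute, e.g. 'hostname' or 'instance-id'."""
    with urllib.request.urlopen(f"{METADATA_BASE}/{path}", timeout=5) as resp:
        return resp.read().decode().strip()

for attr in ("hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"):
    print(attr, "=", fetch(attr))
```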
Dec 13 05:29:16.229131 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 05:29:16.229355 systemd[1]: Startup finished in 1.517s (kernel) + 14.520s (initrd) + 12.082s (userspace) = 28.120s. Dec 13 05:29:21.405525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 05:29:21.419040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:29:21.588871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:29:21.601355 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:29:21.690902 kubelet[1690]: E1213 05:29:21.690472 1690 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:29:21.696037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:29:21.696315 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 05:29:24.356180 systemd[1]: Started sshd@3-10.244.18.182:22-147.75.109.163:41634.service - OpenSSH per-connection server daemon (147.75.109.163:41634). Dec 13 05:29:25.243015 sshd[1699]: Accepted publickey for core from 147.75.109.163 port 41634 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:29:25.245318 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:29:25.253446 systemd-logind[1489]: New session 6 of user core. Dec 13 05:29:25.261072 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 05:29:25.863928 sshd[1699]: pam_unix(sshd:session): session closed for user core Dec 13 05:29:25.867998 systemd-logind[1489]: Session 6 logged out. Waiting for processes to exit. Dec 13 05:29:25.868594 systemd[1]: sshd@3-10.244.18.182:22-147.75.109.163:41634.service: Deactivated successfully. Dec 13 05:29:25.870964 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 05:29:25.873136 systemd-logind[1489]: Removed session 6. Dec 13 05:29:26.021127 systemd[1]: Started sshd@4-10.244.18.182:22-147.75.109.163:41648.service - OpenSSH per-connection server daemon (147.75.109.163:41648). Dec 13 05:29:26.915068 sshd[1706]: Accepted publickey for core from 147.75.109.163 port 41648 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:29:26.917326 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:29:26.924004 systemd-logind[1489]: New session 7 of user core. Dec 13 05:29:26.936041 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 05:29:27.531026 sshd[1706]: pam_unix(sshd:session): session closed for user core Dec 13 05:29:27.536774 systemd[1]: sshd@4-10.244.18.182:22-147.75.109.163:41648.service: Deactivated successfully. Dec 13 05:29:27.538858 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 05:29:27.539736 systemd-logind[1489]: Session 7 logged out. Waiting for processes to exit. Dec 13 05:29:27.541453 systemd-logind[1489]: Removed session 7. Dec 13 05:29:27.688150 systemd[1]: Started sshd@5-10.244.18.182:22-147.75.109.163:59516.service - OpenSSH per-connection server daemon (147.75.109.163:59516). 
Dec 13 05:29:28.590040 sshd[1713]: Accepted publickey for core from 147.75.109.163 port 59516 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:29:28.592132 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:29:28.599899 systemd-logind[1489]: New session 8 of user core. Dec 13 05:29:28.609972 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 05:29:29.213432 sshd[1713]: pam_unix(sshd:session): session closed for user core Dec 13 05:29:29.218423 systemd-logind[1489]: Session 8 logged out. Waiting for processes to exit. Dec 13 05:29:29.219746 systemd[1]: sshd@5-10.244.18.182:22-147.75.109.163:59516.service: Deactivated successfully. Dec 13 05:29:29.222451 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 05:29:29.224198 systemd-logind[1489]: Removed session 8. Dec 13 05:29:29.369079 systemd[1]: Started sshd@6-10.244.18.182:22-147.75.109.163:59526.service - OpenSSH per-connection server daemon (147.75.109.163:59526). Dec 13 05:29:30.260110 sshd[1720]: Accepted publickey for core from 147.75.109.163 port 59526 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:29:30.262324 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:29:30.268470 systemd-logind[1489]: New session 9 of user core. Dec 13 05:29:30.280897 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 05:29:30.747905 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 05:29:30.748423 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 05:29:31.290168 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 05:29:31.292143 (dockerd)[1738]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 05:29:31.744581 dockerd[1738]: time="2024-12-13T05:29:31.744355724Z" level=info msg="Starting up" Dec 13 05:29:31.763542 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 05:29:31.777562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:29:32.014932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:29:32.018376 (kubelet)[1767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:29:32.065937 dockerd[1738]: time="2024-12-13T05:29:32.065288119Z" level=info msg="Loading containers: start." Dec 13 05:29:32.129777 kubelet[1767]: E1213 05:29:32.129677 1767 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:29:32.134656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:29:32.135156 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 05:29:32.227873 kernel: Initializing XFRM netlink socket Dec 13 05:29:32.351842 systemd-networkd[1429]: docker0: Link UP Dec 13 05:29:32.377070 dockerd[1738]: time="2024-12-13T05:29:32.376862063Z" level=info msg="Loading containers: done." 
Dec 13 05:29:32.401250 dockerd[1738]: time="2024-12-13T05:29:32.400057861Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 05:29:32.401250 dockerd[1738]: time="2024-12-13T05:29:32.400249997Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 05:29:32.401250 dockerd[1738]: time="2024-12-13T05:29:32.400499729Z" level=info msg="Daemon has completed initialization" Dec 13 05:29:32.443368 dockerd[1738]: time="2024-12-13T05:29:32.442258957Z" level=info msg="API listen on /run/docker.sock" Dec 13 05:29:32.442706 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 05:29:33.861617 containerd[1511]: time="2024-12-13T05:29:33.860809570Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 05:29:34.714819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2243430325.mount: Deactivated successfully. Dec 13 05:29:37.049456 containerd[1511]: time="2024-12-13T05:29:37.049366114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:37.051056 containerd[1511]: time="2024-12-13T05:29:37.050648522Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262" Dec 13 05:29:37.051999 containerd[1511]: time="2024-12-13T05:29:37.051893113Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:37.057314 containerd[1511]: time="2024-12-13T05:29:37.057235783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:37.059160 containerd[1511]: time="2024-12-13T05:29:37.058812135Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.197883033s" Dec 13 05:29:37.059160 containerd[1511]: time="2024-12-13T05:29:37.058934476Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 05:29:37.088998 containerd[1511]: time="2024-12-13T05:29:37.088939860Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 05:29:40.106326 containerd[1511]: time="2024-12-13T05:29:40.106249711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:40.108018 containerd[1511]: time="2024-12-13T05:29:40.107886284Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740" Dec 13 05:29:40.108673 containerd[1511]: time="2024-12-13T05:29:40.108618834Z" level=info msg="ImageCreate event 
name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:40.115653 containerd[1511]: time="2024-12-13T05:29:40.113106750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:40.115653 containerd[1511]: time="2024-12-13T05:29:40.114606518Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 3.025609827s" Dec 13 05:29:40.115653 containerd[1511]: time="2024-12-13T05:29:40.115360583Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 05:29:40.148750 containerd[1511]: time="2024-12-13T05:29:40.148651517Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 05:29:40.682000 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 05:29:42.155712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 05:29:42.166590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:29:42.342437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:29:42.345298 (kubelet)[1988]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:29:42.427543 kubelet[1988]: E1213 05:29:42.426990 1988 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:29:42.431087 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:29:42.431332 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 05:29:42.482670 containerd[1511]: time="2024-12-13T05:29:42.482581739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:42.484272 containerd[1511]: time="2024-12-13T05:29:42.484200058Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830" Dec 13 05:29:42.484729 containerd[1511]: time="2024-12-13T05:29:42.484668008Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:42.489272 containerd[1511]: time="2024-12-13T05:29:42.488460992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:42.490216 containerd[1511]: time="2024-12-13T05:29:42.490172576Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 2.341055062s" Dec 13 05:29:42.490309 containerd[1511]: time="2024-12-13T05:29:42.490220437Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 05:29:42.519435 containerd[1511]: time="2024-12-13T05:29:42.519369331Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 05:29:44.105670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3372867283.mount: Deactivated successfully. 
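Each "Pulled image" record pairs a reported image size with the wall-clock pull time (for kube-scheduler above, 18777952 bytes in 2.341055062s). A small Go sketch that turns those two values, copied from the log, into a rough throughput figure:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values copied from the kube-scheduler pull record above.
    	const sizeBytes = 18777952
    	elapsed, err := time.ParseDuration("2.341055062s")
    	if err != nil {
    		panic(err)
    	}

    	mib := float64(sizeBytes) / (1024 * 1024)
    	fmt.Printf("pulled %.1f MiB in %v (~%.1f MiB/s)\n", mib, elapsed, mib/elapsed.Seconds())
    }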
Dec 13 05:29:44.759387 containerd[1511]: time="2024-12-13T05:29:44.758182152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:44.760931 containerd[1511]: time="2024-12-13T05:29:44.760863556Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Dec 13 05:29:44.762211 containerd[1511]: time="2024-12-13T05:29:44.762154535Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:44.767665 containerd[1511]: time="2024-12-13T05:29:44.766151299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:44.769877 containerd[1511]: time="2024-12-13T05:29:44.769834038Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.250129706s" Dec 13 05:29:44.770046 containerd[1511]: time="2024-12-13T05:29:44.770009959Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 05:29:44.806295 containerd[1511]: time="2024-12-13T05:29:44.806226598Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 05:29:45.656923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount887573403.mount: Deactivated successfully. 
Dec 13 05:29:47.026676 containerd[1511]: time="2024-12-13T05:29:47.025287544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:47.028369 containerd[1511]: time="2024-12-13T05:29:47.028313550Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Dec 13 05:29:47.029160 containerd[1511]: time="2024-12-13T05:29:47.029125413Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:47.034780 containerd[1511]: time="2024-12-13T05:29:47.034700567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:47.036540 containerd[1511]: time="2024-12-13T05:29:47.036494120Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.229918695s" Dec 13 05:29:47.036744 containerd[1511]: time="2024-12-13T05:29:47.036716250Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 05:29:47.072307 containerd[1511]: time="2024-12-13T05:29:47.072253080Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 05:29:47.676605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013927037.mount: Deactivated successfully. 
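The coredns pull above names the same image twice, once by tag (registry.k8s.io/coredns/coredns:v1.11.1) and once by repo digest (the @sha256:... form). A hedged Go sketch that splits either reference form into repository and tag/digest, fed with the two strings from the log:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // splitRef separates a container image reference into repository and
    // reference part, distinguishing "@sha256:..." digests from ":tag" suffixes.
    func splitRef(ref string) (repo, rest, kind string) {
    	if r, d, ok := strings.Cut(ref, "@"); ok {
    		return r, d, "digest"
    	}
    	// The last colon separates the tag; earlier colons (for example a
    	// registry port) belong to the repository.
    	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
    		return ref[:i], ref[i+1:], "tag"
    	}
    	return ref, "", "none"
    }

    func main() {
    	for _, ref := range []string{
    		"registry.k8s.io/coredns/coredns:v1.11.1",
    		"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
    	} {
    		repo, rest, kind := splitRef(ref)
    		fmt.Printf("%-6s repo=%s %s=%s\n", kind, repo, kind, rest)
    	}
    }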
Dec 13 05:29:47.682293 containerd[1511]: time="2024-12-13T05:29:47.682242038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:47.683408 containerd[1511]: time="2024-12-13T05:29:47.683361520Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Dec 13 05:29:47.684184 containerd[1511]: time="2024-12-13T05:29:47.683754483Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:47.686716 containerd[1511]: time="2024-12-13T05:29:47.686680198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:47.687933 containerd[1511]: time="2024-12-13T05:29:47.687893787Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 615.15676ms" Dec 13 05:29:47.688044 containerd[1511]: time="2024-12-13T05:29:47.687939281Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 05:29:47.719895 containerd[1511]: time="2024-12-13T05:29:47.719827788Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 05:29:48.404483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1113980330.mount: Deactivated successfully. Dec 13 05:29:52.655440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 05:29:52.661884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:29:52.928116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:29:52.935163 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:29:53.017493 kubelet[2117]: E1213 05:29:53.017394 2117 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:29:53.020020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:29:53.020384 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 05:29:54.110155 update_engine[1491]: I20241213 05:29:54.109956 1491 update_attempter.cc:509] Updating boot flags... 
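The kubelet errors above use klog's single-line header: a severity letter, MMDD date, wall-clock time with microseconds, the PID, then source file and line, e.g. "E1213 05:29:53.017394 2117 run.go:74]". A small Go sketch, assuming exactly that layout, that pulls the header fields apart:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogHeader matches klog's "Lmmdd hh:mm:ss.uuuuuu PID file:line] msg" prefix.
    var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
    	line := `E1213 05:29:53.017394 2117 run.go:74] "command failed" err="failed to load kubelet config file..."`

    	m := klogHeader.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("not a klog line")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s\n", m[1], m[2], m[3], m[4], m[5], m[6])
    	fmt.Printf("message=%s\n", m[7])
    }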
Dec 13 05:29:54.206741 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2136) Dec 13 05:29:54.427404 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2137) Dec 13 05:29:56.075525 containerd[1511]: time="2024-12-13T05:29:56.074773152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:56.078679 containerd[1511]: time="2024-12-13T05:29:56.078558554Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Dec 13 05:29:56.079552 containerd[1511]: time="2024-12-13T05:29:56.079492140Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:56.100122 containerd[1511]: time="2024-12-13T05:29:56.099989364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:29:56.103660 containerd[1511]: time="2024-12-13T05:29:56.103585492Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 8.383701549s" Dec 13 05:29:56.103743 containerd[1511]: time="2024-12-13T05:29:56.103668931Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 05:30:00.828144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:30:00.837042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:30:00.873696 systemd[1]: Reloading requested from client PID 2207 ('systemctl') (unit session-9.scope)... Dec 13 05:30:00.873743 systemd[1]: Reloading... Dec 13 05:30:01.012780 zram_generator::config[2252]: No configuration found. Dec 13 05:30:01.229146 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 05:30:01.345797 systemd[1]: Reloading finished in 471 ms. Dec 13 05:30:01.441027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:30:01.452755 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 05:30:01.467863 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:30:01.472034 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 05:30:01.472404 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:30:01.489179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:30:01.673501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
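The docker.socket warning above ("references a path below legacy directory /var/run/") is systemd rewriting /var/run/docker.sock to /run/docker.sock. On images like this one /var/run is expected to be a symlink into /run; that is an assumption worth verifying rather than a fact shown in the log, and a short Go check makes it explicit:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// systemd's warning above rewrites /var/run/docker.sock to /run/docker.sock.
    	// Check whether /var/run really is just a legacy symlink into /run.
    	fi, err := os.Lstat("/var/run")
    	if err != nil {
    		fmt.Println("lstat /var/run:", err)
    		return
    	}
    	if fi.Mode()&os.ModeSymlink == 0 {
    		fmt.Println("/var/run is a real directory, not a symlink")
    		return
    	}
    	target, err := os.Readlink("/var/run")
    	if err != nil {
    		fmt.Println("readlink /var/run:", err)
    		return
    	}
    	fmt.Printf("/var/run -> %s\n", target)

    	// Both spellings should therefore resolve to the same socket path.
    	resolved, err := filepath.EvalSymlinks("/var/run/docker.sock")
    	if err != nil {
    		fmt.Println("resolve /var/run/docker.sock:", err)
    		return
    	}
    	fmt.Printf("/var/run/docker.sock resolves to %s\n", resolved)
    }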
Dec 13 05:30:01.673854 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 05:30:01.807200 kubelet[2319]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 05:30:01.807200 kubelet[2319]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 05:30:01.807200 kubelet[2319]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 05:30:01.812862 kubelet[2319]: I1213 05:30:01.811209 2319 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 05:30:02.414439 kubelet[2319]: I1213 05:30:02.414367 2319 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 05:30:02.414439 kubelet[2319]: I1213 05:30:02.414421 2319 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 05:30:02.414771 kubelet[2319]: I1213 05:30:02.414738 2319 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 05:30:02.448769 kubelet[2319]: E1213 05:30:02.448702 2319 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.18.182:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:02.475619 kubelet[2319]: I1213 05:30:02.475180 2319 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 05:30:02.497373 kubelet[2319]: I1213 05:30:02.497264 2319 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 05:30:02.498460 kubelet[2319]: I1213 05:30:02.498184 2319 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 05:30:02.500596 kubelet[2319]: I1213 05:30:02.499796 2319 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 05:30:02.500596 kubelet[2319]: I1213 05:30:02.499872 2319 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 05:30:02.500596 kubelet[2319]: I1213 05:30:02.500166 2319 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 05:30:02.504310 kubelet[2319]: I1213 05:30:02.503889 2319 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:30:02.505327 kubelet[2319]: I1213 05:30:02.504519 2319 kubelet.go:396] "Attempting to sync node with API server" Dec 13 05:30:02.505479 kubelet[2319]: I1213 05:30:02.505457 2319 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 05:30:02.506643 kubelet[2319]: I1213 05:30:02.506591 2319 kubelet.go:312] "Adding apiserver pod source" Dec 13 05:30:02.507350 kubelet[2319]: I1213 05:30:02.506781 2319 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 05:30:02.513689 kubelet[2319]: W1213 05:30:02.510915 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.244.18.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-zt5iu.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:02.513689 kubelet[2319]: E1213 05:30:02.511017 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.18.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-zt5iu.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:02.513689 kubelet[2319]: W1213 05:30:02.511808 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.244.18.182:6443/api/v1/services?limit=500&resourceVersion=0": dial 
tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:02.513689 kubelet[2319]: E1213 05:30:02.511853 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.18.182:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:02.517126 kubelet[2319]: I1213 05:30:02.516454 2319 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 05:30:02.524380 kubelet[2319]: I1213 05:30:02.524284 2319 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 05:30:02.524820 kubelet[2319]: W1213 05:30:02.524799 2319 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 05:30:02.529653 kubelet[2319]: I1213 05:30:02.529596 2319 server.go:1256] "Started kubelet" Dec 13 05:30:02.532970 kubelet[2319]: I1213 05:30:02.532739 2319 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 05:30:02.561715 kubelet[2319]: E1213 05:30:02.552045 2319 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.18.182:6443/api/v1/namespaces/default/events\": dial tcp 10.244.18.182:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-zt5iu.gb1.brightbox.com.1810a57a9a36fec3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-zt5iu.gb1.brightbox.com,UID:srv-zt5iu.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-zt5iu.gb1.brightbox.com,},FirstTimestamp:2024-12-13 05:30:02.529529539 +0000 UTC m=+0.848346618,LastTimestamp:2024-12-13 05:30:02.529529539 +0000 UTC m=+0.848346618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-zt5iu.gb1.brightbox.com,}" Dec 13 05:30:02.561715 kubelet[2319]: I1213 05:30:02.552282 2319 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 05:30:02.561715 kubelet[2319]: I1213 05:30:02.553850 2319 server.go:461] "Adding debug handlers to kubelet server" Dec 13 05:30:02.561715 kubelet[2319]: I1213 05:30:02.555810 2319 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 05:30:02.561715 kubelet[2319]: I1213 05:30:02.556402 2319 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 05:30:02.561715 kubelet[2319]: E1213 05:30:02.560401 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:02.561715 kubelet[2319]: I1213 05:30:02.560470 2319 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 05:30:02.561715 kubelet[2319]: I1213 05:30:02.560687 2319 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 05:30:02.561715 kubelet[2319]: I1213 05:30:02.560839 2319 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 05:30:02.562481 kubelet[2319]: E1213 05:30:02.561020 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.244.18.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-zt5iu.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.18.182:6443: connect: connection refused" interval="200ms" Dec 13 05:30:02.563765 kubelet[2319]: W1213 05:30:02.562601 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.244.18.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:02.563765 kubelet[2319]: E1213 05:30:02.562684 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.18.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:02.605690 kubelet[2319]: I1213 05:30:02.604846 2319 factory.go:221] Registration of the systemd container factory successfully Dec 13 05:30:02.605690 kubelet[2319]: I1213 05:30:02.605041 2319 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 05:30:02.609899 kubelet[2319]: E1213 05:30:02.609861 2319 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 05:30:02.610353 kubelet[2319]: I1213 05:30:02.610315 2319 factory.go:221] Registration of the containerd container factory successfully Dec 13 05:30:02.661341 kubelet[2319]: I1213 05:30:02.661295 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 05:30:02.664011 kubelet[2319]: I1213 05:30:02.663403 2319 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 05:30:02.664011 kubelet[2319]: I1213 05:30:02.663475 2319 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 05:30:02.664011 kubelet[2319]: I1213 05:30:02.663517 2319 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 05:30:02.664011 kubelet[2319]: E1213 05:30:02.663607 2319 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 05:30:02.677033 kubelet[2319]: W1213 05:30:02.676882 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.244.18.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:02.679263 kubelet[2319]: E1213 05:30:02.679088 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.18.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:02.679769 kubelet[2319]: I1213 05:30:02.678462 2319 kubelet_node_status.go:73] "Attempting to register node" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.681581 kubelet[2319]: E1213 05:30:02.681408 2319 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.18.182:6443/api/v1/nodes\": dial tcp 10.244.18.182:6443: connect: connection refused" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.682499 kubelet[2319]: I1213 05:30:02.681927 2319 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 05:30:02.682499 kubelet[2319]: I1213 05:30:02.681952 2319 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 05:30:02.682499 kubelet[2319]: I1213 05:30:02.682028 2319 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:30:02.684431 kubelet[2319]: I1213 05:30:02.684399 2319 policy_none.go:49] "None policy: Start" Dec 13 05:30:02.685865 kubelet[2319]: I1213 05:30:02.685834 2319 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 05:30:02.686079 kubelet[2319]: I1213 05:30:02.686023 2319 state_mem.go:35] "Initializing new in-memory state store" Dec 13 05:30:02.707204 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 05:30:02.726435 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 05:30:02.734796 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 05:30:02.767311 kubelet[2319]: I1213 05:30:02.757954 2319 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 05:30:02.767311 kubelet[2319]: I1213 05:30:02.766817 2319 topology_manager.go:215] "Topology Admit Handler" podUID="747cbb27462c822e9db8725a60879794" podNamespace="kube-system" podName="kube-apiserver-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.776964 kubelet[2319]: I1213 05:30:02.776911 2319 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 05:30:02.778844 kubelet[2319]: E1213 05:30:02.778817 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.18.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-zt5iu.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.18.182:6443: connect: connection refused" interval="400ms" Dec 13 05:30:02.779425 kubelet[2319]: I1213 05:30:02.779382 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/747cbb27462c822e9db8725a60879794-ca-certs\") pod \"kube-apiserver-srv-zt5iu.gb1.brightbox.com\" (UID: \"747cbb27462c822e9db8725a60879794\") " pod="kube-system/kube-apiserver-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.779591 kubelet[2319]: I1213 05:30:02.779571 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/747cbb27462c822e9db8725a60879794-k8s-certs\") pod \"kube-apiserver-srv-zt5iu.gb1.brightbox.com\" (UID: \"747cbb27462c822e9db8725a60879794\") " pod="kube-system/kube-apiserver-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.779879 kubelet[2319]: I1213 05:30:02.779751 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/747cbb27462c822e9db8725a60879794-usr-share-ca-certificates\") pod \"kube-apiserver-srv-zt5iu.gb1.brightbox.com\" (UID: \"747cbb27462c822e9db8725a60879794\") " pod="kube-system/kube-apiserver-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.783915 kubelet[2319]: I1213 05:30:02.783869 2319 topology_manager.go:215] "Topology Admit Handler" podUID="38d849510ed4b5075706ad0d2cae5f4d" podNamespace="kube-system" podName="kube-controller-manager-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.784809 kubelet[2319]: E1213 05:30:02.784459 2319 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:02.793070 kubelet[2319]: I1213 05:30:02.793004 2319 topology_manager.go:215] "Topology Admit Handler" podUID="97b80d1ac4e0e9dc4382807861fcd2cd" podNamespace="kube-system" podName="kube-scheduler-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.809787 systemd[1]: Created slice kubepods-burstable-pod747cbb27462c822e9db8725a60879794.slice - libcontainer container kubepods-burstable-pod747cbb27462c822e9db8725a60879794.slice. Dec 13 05:30:02.847993 systemd[1]: Created slice kubepods-burstable-pod38d849510ed4b5075706ad0d2cae5f4d.slice - libcontainer container kubepods-burstable-pod38d849510ed4b5075706ad0d2cae5f4d.slice. Dec 13 05:30:02.866667 systemd[1]: Created slice kubepods-burstable-pod97b80d1ac4e0e9dc4382807861fcd2cd.slice - libcontainer container kubepods-burstable-pod97b80d1ac4e0e9dc4382807861fcd2cd.slice. 
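The three "Topology Admit Handler" pods above are static pods picked up from the manifest directory the kubelet registered earlier ("Adding static pod path" path="/etc/kubernetes/manifests"). A small Go sketch that lists that directory; the file names mentioned in the comment are typical kubeadm output, not something shown in this log:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Static pod path registered by the kubelet above ("Adding static pod path").
    	const dir = "/etc/kubernetes/manifests"

    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println("read static pod dir:", err)
    		return
    	}
    	for _, e := range entries {
    		info, err := e.Info()
    		if err != nil {
    			continue
    		}
    		// Expect files such as kube-apiserver.yaml, kube-controller-manager.yaml,
    		// kube-scheduler.yaml and etcd.yaml on a kubeadm-provisioned control plane
    		// (assumed names, not taken from this log).
    		fmt.Printf("%-50s %7d bytes\n", filepath.Join(dir, e.Name()), info.Size())
    	}
    }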
Dec 13 05:30:02.880716 kubelet[2319]: I1213 05:30:02.880335 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38d849510ed4b5075706ad0d2cae5f4d-k8s-certs\") pod \"kube-controller-manager-srv-zt5iu.gb1.brightbox.com\" (UID: \"38d849510ed4b5075706ad0d2cae5f4d\") " pod="kube-system/kube-controller-manager-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.880716 kubelet[2319]: I1213 05:30:02.880417 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38d849510ed4b5075706ad0d2cae5f4d-kubeconfig\") pod \"kube-controller-manager-srv-zt5iu.gb1.brightbox.com\" (UID: \"38d849510ed4b5075706ad0d2cae5f4d\") " pod="kube-system/kube-controller-manager-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.880716 kubelet[2319]: I1213 05:30:02.880540 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38d849510ed4b5075706ad0d2cae5f4d-ca-certs\") pod \"kube-controller-manager-srv-zt5iu.gb1.brightbox.com\" (UID: \"38d849510ed4b5075706ad0d2cae5f4d\") " pod="kube-system/kube-controller-manager-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.880716 kubelet[2319]: I1213 05:30:02.880581 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38d849510ed4b5075706ad0d2cae5f4d-flexvolume-dir\") pod \"kube-controller-manager-srv-zt5iu.gb1.brightbox.com\" (UID: \"38d849510ed4b5075706ad0d2cae5f4d\") " pod="kube-system/kube-controller-manager-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.880716 kubelet[2319]: I1213 05:30:02.880615 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38d849510ed4b5075706ad0d2cae5f4d-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-zt5iu.gb1.brightbox.com\" (UID: \"38d849510ed4b5075706ad0d2cae5f4d\") " pod="kube-system/kube-controller-manager-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.881781 kubelet[2319]: I1213 05:30:02.880684 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97b80d1ac4e0e9dc4382807861fcd2cd-kubeconfig\") pod \"kube-scheduler-srv-zt5iu.gb1.brightbox.com\" (UID: \"97b80d1ac4e0e9dc4382807861fcd2cd\") " pod="kube-system/kube-scheduler-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.886231 kubelet[2319]: I1213 05:30:02.885875 2319 kubelet_node_status.go:73] "Attempting to register node" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:02.886398 kubelet[2319]: E1213 05:30:02.886313 2319 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.18.182:6443/api/v1/nodes\": dial tcp 10.244.18.182:6443: connect: connection refused" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:03.139558 containerd[1511]: time="2024-12-13T05:30:03.138042749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-zt5iu.gb1.brightbox.com,Uid:747cbb27462c822e9db8725a60879794,Namespace:kube-system,Attempt:0,}" Dec 13 05:30:03.161951 containerd[1511]: time="2024-12-13T05:30:03.161850048Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-srv-zt5iu.gb1.brightbox.com,Uid:38d849510ed4b5075706ad0d2cae5f4d,Namespace:kube-system,Attempt:0,}" Dec 13 05:30:03.178401 containerd[1511]: time="2024-12-13T05:30:03.178201300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-zt5iu.gb1.brightbox.com,Uid:97b80d1ac4e0e9dc4382807861fcd2cd,Namespace:kube-system,Attempt:0,}" Dec 13 05:30:03.180210 kubelet[2319]: E1213 05:30:03.180114 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.18.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-zt5iu.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.18.182:6443: connect: connection refused" interval="800ms" Dec 13 05:30:03.307993 kubelet[2319]: I1213 05:30:03.306292 2319 kubelet_node_status.go:73] "Attempting to register node" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:03.307993 kubelet[2319]: E1213 05:30:03.306817 2319 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.18.182:6443/api/v1/nodes\": dial tcp 10.244.18.182:6443: connect: connection refused" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:03.450067 kubelet[2319]: W1213 05:30:03.449910 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.244.18.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:03.450067 kubelet[2319]: E1213 05:30:03.450015 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.18.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:03.526964 kubelet[2319]: W1213 05:30:03.526829 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.244.18.182:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:03.526964 kubelet[2319]: E1213 05:30:03.526915 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.18.182:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:03.620429 kubelet[2319]: W1213 05:30:03.620322 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.244.18.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:03.620429 kubelet[2319]: E1213 05:30:03.620384 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.18.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:03.848909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871024934.mount: Deactivated successfully. 
Dec 13 05:30:03.872946 containerd[1511]: time="2024-12-13T05:30:03.871603763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:30:03.879409 containerd[1511]: time="2024-12-13T05:30:03.879283338Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 05:30:03.884596 containerd[1511]: time="2024-12-13T05:30:03.884450571Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:30:03.888596 containerd[1511]: time="2024-12-13T05:30:03.888321803Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:30:03.893012 containerd[1511]: time="2024-12-13T05:30:03.892932735Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 05:30:03.898517 containerd[1511]: time="2024-12-13T05:30:03.896348799Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:30:03.906032 containerd[1511]: time="2024-12-13T05:30:03.905315145Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 05:30:03.914207 containerd[1511]: time="2024-12-13T05:30:03.914138669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:30:03.916399 containerd[1511]: time="2024-12-13T05:30:03.915760728Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 776.260953ms" Dec 13 05:30:03.921330 containerd[1511]: time="2024-12-13T05:30:03.920358191Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 742.012874ms" Dec 13 05:30:03.938492 containerd[1511]: time="2024-12-13T05:30:03.938322808Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 776.344723ms" Dec 13 05:30:03.950568 kubelet[2319]: W1213 05:30:03.950433 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.244.18.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-zt5iu.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection 
refused Dec 13 05:30:03.950568 kubelet[2319]: E1213 05:30:03.950527 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.18.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-zt5iu.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:03.986361 kubelet[2319]: E1213 05:30:03.986309 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.18.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-zt5iu.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.18.182:6443: connect: connection refused" interval="1.6s" Dec 13 05:30:04.113398 kubelet[2319]: I1213 05:30:04.113230 2319 kubelet_node_status.go:73] "Attempting to register node" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:04.114774 kubelet[2319]: E1213 05:30:04.114710 2319 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.18.182:6443/api/v1/nodes\": dial tcp 10.244.18.182:6443: connect: connection refused" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:04.235923 containerd[1511]: time="2024-12-13T05:30:04.235771968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:30:04.237424 containerd[1511]: time="2024-12-13T05:30:04.237258773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:30:04.237424 containerd[1511]: time="2024-12-13T05:30:04.237309242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:30:04.240384 containerd[1511]: time="2024-12-13T05:30:04.239869884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:30:04.240384 containerd[1511]: time="2024-12-13T05:30:04.239936262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:30:04.240384 containerd[1511]: time="2024-12-13T05:30:04.239961612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:30:04.240384 containerd[1511]: time="2024-12-13T05:30:04.240084186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:30:04.242492 containerd[1511]: time="2024-12-13T05:30:04.241873001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:30:04.277231 containerd[1511]: time="2024-12-13T05:30:04.276769512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:30:04.277231 containerd[1511]: time="2024-12-13T05:30:04.276865285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:30:04.277577 containerd[1511]: time="2024-12-13T05:30:04.277014505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:30:04.277577 containerd[1511]: time="2024-12-13T05:30:04.277198476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:30:04.301192 systemd[1]: Started cri-containerd-e8a51d36261d71b0e256ae4305ecbd042afe7d065b8c3c53284e05f636969082.scope - libcontainer container e8a51d36261d71b0e256ae4305ecbd042afe7d065b8c3c53284e05f636969082. Dec 13 05:30:04.320700 systemd[1]: Started cri-containerd-1fdcecb3dae0c5ab653b541a461be45a2f2abf4211040c0c7c73777e17c24c7c.scope - libcontainer container 1fdcecb3dae0c5ab653b541a461be45a2f2abf4211040c0c7c73777e17c24c7c. Dec 13 05:30:04.338926 systemd[1]: Started cri-containerd-b78e1622b9c727f33226107adba23982794f9e5d7b7b2f2a95967747e705c0c0.scope - libcontainer container b78e1622b9c727f33226107adba23982794f9e5d7b7b2f2a95967747e705c0c0. Dec 13 05:30:04.448116 containerd[1511]: time="2024-12-13T05:30:04.448060768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-zt5iu.gb1.brightbox.com,Uid:97b80d1ac4e0e9dc4382807861fcd2cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fdcecb3dae0c5ab653b541a461be45a2f2abf4211040c0c7c73777e17c24c7c\"" Dec 13 05:30:04.464648 containerd[1511]: time="2024-12-13T05:30:04.463761143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-zt5iu.gb1.brightbox.com,Uid:747cbb27462c822e9db8725a60879794,Namespace:kube-system,Attempt:0,} returns sandbox id \"b78e1622b9c727f33226107adba23982794f9e5d7b7b2f2a95967747e705c0c0\"" Dec 13 05:30:04.468836 containerd[1511]: time="2024-12-13T05:30:04.468311152Z" level=info msg="CreateContainer within sandbox \"1fdcecb3dae0c5ab653b541a461be45a2f2abf4211040c0c7c73777e17c24c7c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 05:30:04.470536 containerd[1511]: time="2024-12-13T05:30:04.470498928Z" level=info msg="CreateContainer within sandbox \"b78e1622b9c727f33226107adba23982794f9e5d7b7b2f2a95967747e705c0c0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 05:30:04.471345 containerd[1511]: time="2024-12-13T05:30:04.471242558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-zt5iu.gb1.brightbox.com,Uid:38d849510ed4b5075706ad0d2cae5f4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8a51d36261d71b0e256ae4305ecbd042afe7d065b8c3c53284e05f636969082\"" Dec 13 05:30:04.475532 containerd[1511]: time="2024-12-13T05:30:04.475277445Z" level=info msg="CreateContainer within sandbox \"e8a51d36261d71b0e256ae4305ecbd042afe7d065b8c3c53284e05f636969082\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 05:30:04.506499 containerd[1511]: time="2024-12-13T05:30:04.506391766Z" level=info msg="CreateContainer within sandbox \"1fdcecb3dae0c5ab653b541a461be45a2f2abf4211040c0c7c73777e17c24c7c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ab41ebd41c57682d6ab44bc9254aa3f7ed80baf7f45686254143fc5350ffb6e4\"" Dec 13 05:30:04.508995 containerd[1511]: time="2024-12-13T05:30:04.508156643Z" level=info msg="CreateContainer within sandbox \"b78e1622b9c727f33226107adba23982794f9e5d7b7b2f2a95967747e705c0c0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8e4d29072bee17c332a08ac650926bcd190ded5cf17360baad157ec51fc2522a\"" Dec 13 05:30:04.508995 containerd[1511]: time="2024-12-13T05:30:04.508165971Z" level=info msg="CreateContainer within sandbox 
\"e8a51d36261d71b0e256ae4305ecbd042afe7d065b8c3c53284e05f636969082\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dea5a5dd03e0bea77796ed45a61167bde0bf2aaab77889e9458f7b9c3d7c4ad8\"" Dec 13 05:30:04.511362 containerd[1511]: time="2024-12-13T05:30:04.509720889Z" level=info msg="StartContainer for \"8e4d29072bee17c332a08ac650926bcd190ded5cf17360baad157ec51fc2522a\"" Dec 13 05:30:04.511362 containerd[1511]: time="2024-12-13T05:30:04.510845498Z" level=info msg="StartContainer for \"dea5a5dd03e0bea77796ed45a61167bde0bf2aaab77889e9458f7b9c3d7c4ad8\"" Dec 13 05:30:04.513379 containerd[1511]: time="2024-12-13T05:30:04.513349505Z" level=info msg="StartContainer for \"ab41ebd41c57682d6ab44bc9254aa3f7ed80baf7f45686254143fc5350ffb6e4\"" Dec 13 05:30:04.515838 kubelet[2319]: E1213 05:30:04.515802 2319 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.18.182:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:04.563893 systemd[1]: Started cri-containerd-dea5a5dd03e0bea77796ed45a61167bde0bf2aaab77889e9458f7b9c3d7c4ad8.scope - libcontainer container dea5a5dd03e0bea77796ed45a61167bde0bf2aaab77889e9458f7b9c3d7c4ad8. Dec 13 05:30:04.579100 systemd[1]: Started cri-containerd-ab41ebd41c57682d6ab44bc9254aa3f7ed80baf7f45686254143fc5350ffb6e4.scope - libcontainer container ab41ebd41c57682d6ab44bc9254aa3f7ed80baf7f45686254143fc5350ffb6e4. Dec 13 05:30:04.592956 systemd[1]: Started cri-containerd-8e4d29072bee17c332a08ac650926bcd190ded5cf17360baad157ec51fc2522a.scope - libcontainer container 8e4d29072bee17c332a08ac650926bcd190ded5cf17360baad157ec51fc2522a. 
Dec 13 05:30:04.760807 containerd[1511]: time="2024-12-13T05:30:04.760283817Z" level=info msg="StartContainer for \"dea5a5dd03e0bea77796ed45a61167bde0bf2aaab77889e9458f7b9c3d7c4ad8\" returns successfully" Dec 13 05:30:04.760807 containerd[1511]: time="2024-12-13T05:30:04.760560071Z" level=info msg="StartContainer for \"ab41ebd41c57682d6ab44bc9254aa3f7ed80baf7f45686254143fc5350ffb6e4\" returns successfully" Dec 13 05:30:04.760807 containerd[1511]: time="2024-12-13T05:30:04.760608964Z" level=info msg="StartContainer for \"8e4d29072bee17c332a08ac650926bcd190ded5cf17360baad157ec51fc2522a\" returns successfully" Dec 13 05:30:05.587022 kubelet[2319]: E1213 05:30:05.586963 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.18.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-zt5iu.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.18.182:6443: connect: connection refused" interval="3.2s" Dec 13 05:30:05.706246 kubelet[2319]: W1213 05:30:05.706121 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.244.18.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:05.706246 kubelet[2319]: E1213 05:30:05.706255 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.18.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:05.720108 kubelet[2319]: I1213 05:30:05.720055 2319 kubelet_node_status.go:73] "Attempting to register node" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:05.720726 kubelet[2319]: E1213 05:30:05.720661 2319 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.18.182:6443/api/v1/nodes\": dial tcp 10.244.18.182:6443: connect: connection refused" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:05.977953 kubelet[2319]: W1213 05:30:05.977804 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.244.18.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:05.977953 kubelet[2319]: E1213 05:30:05.977913 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.18.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:06.037879 kubelet[2319]: W1213 05:30:06.037755 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.244.18.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-zt5iu.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:06.037879 kubelet[2319]: E1213 05:30:06.037839 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.18.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-zt5iu.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.18.182:6443: connect: connection refused Dec 13 05:30:08.375889 kubelet[2319]: E1213 05:30:08.375811 2319 csi_plugin.go:300] Failed to 
initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "srv-zt5iu.gb1.brightbox.com" not found Dec 13 05:30:08.735138 kubelet[2319]: E1213 05:30:08.735085 2319 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "srv-zt5iu.gb1.brightbox.com" not found Dec 13 05:30:08.794921 kubelet[2319]: E1213 05:30:08.794746 2319 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-zt5iu.gb1.brightbox.com\" not found" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:08.936909 kubelet[2319]: I1213 05:30:08.936845 2319 kubelet_node_status.go:73] "Attempting to register node" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:08.960908 kubelet[2319]: I1213 05:30:08.960665 2319 kubelet_node_status.go:76] "Successfully registered node" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:08.997484 kubelet[2319]: E1213 05:30:08.997209 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:09.100543 kubelet[2319]: E1213 05:30:09.100459 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:09.201420 kubelet[2319]: E1213 05:30:09.201342 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:09.302496 kubelet[2319]: E1213 05:30:09.302213 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:09.403173 kubelet[2319]: E1213 05:30:09.402996 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:09.504080 kubelet[2319]: E1213 05:30:09.504012 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:09.605427 kubelet[2319]: E1213 05:30:09.605180 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:09.706082 kubelet[2319]: E1213 05:30:09.705991 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:09.808030 kubelet[2319]: E1213 05:30:09.807307 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:09.909097 kubelet[2319]: E1213 05:30:09.909015 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:10.009617 kubelet[2319]: E1213 05:30:10.009523 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:10.110666 kubelet[2319]: E1213 05:30:10.110553 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:10.211826 kubelet[2319]: E1213 05:30:10.211141 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:10.312093 kubelet[2319]: E1213 05:30:10.312005 2319 kubelet_node_status.go:462] "Error getting the current node 
from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:10.412999 kubelet[2319]: E1213 05:30:10.412882 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:10.514203 kubelet[2319]: E1213 05:30:10.513508 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:10.615290 kubelet[2319]: E1213 05:30:10.615216 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:10.716329 kubelet[2319]: E1213 05:30:10.716252 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:10.818324 kubelet[2319]: E1213 05:30:10.817222 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:10.917741 kubelet[2319]: E1213 05:30:10.917684 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:10.949884 systemd[1]: Reloading requested from client PID 2597 ('systemctl') (unit session-9.scope)... Dec 13 05:30:10.950461 systemd[1]: Reloading... Dec 13 05:30:11.020069 kubelet[2319]: E1213 05:30:11.018731 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:11.119529 kubelet[2319]: E1213 05:30:11.118895 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:11.148671 zram_generator::config[2636]: No configuration found. Dec 13 05:30:11.219653 kubelet[2319]: E1213 05:30:11.219570 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:11.320972 kubelet[2319]: E1213 05:30:11.320111 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:11.369859 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 05:30:11.421765 kubelet[2319]: E1213 05:30:11.421709 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:11.499858 systemd[1]: Reloading finished in 548 ms. Dec 13 05:30:11.523193 kubelet[2319]: E1213 05:30:11.523121 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-zt5iu.gb1.brightbox.com\" not found" Dec 13 05:30:11.566913 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:30:11.578671 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 05:30:11.579050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:30:11.579148 systemd[1]: kubelet.service: Consumed 1.170s CPU time, 111.0M memory peak, 0B memory swap peak. Dec 13 05:30:11.589194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:30:11.806057 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 05:30:11.824407 (kubelet)[2700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 05:30:11.948087 kubelet[2700]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 05:30:11.948087 kubelet[2700]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 05:30:11.948087 kubelet[2700]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 05:30:11.952751 kubelet[2700]: I1213 05:30:11.951134 2700 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 05:30:11.961467 kubelet[2700]: I1213 05:30:11.961405 2700 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 05:30:11.961467 kubelet[2700]: I1213 05:30:11.961450 2700 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 05:30:11.963143 kubelet[2700]: I1213 05:30:11.962881 2700 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 05:30:11.967608 kubelet[2700]: I1213 05:30:11.966843 2700 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 05:30:11.972103 kubelet[2700]: I1213 05:30:11.972039 2700 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 05:30:11.987900 kubelet[2700]: I1213 05:30:11.987301 2700 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 05:30:11.990048 kubelet[2700]: I1213 05:30:11.989409 2700 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 05:30:11.990742 kubelet[2700]: I1213 05:30:11.990701 2700 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 05:30:11.991949 kubelet[2700]: I1213 05:30:11.991903 2700 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 05:30:11.992139 kubelet[2700]: I1213 05:30:11.992118 2700 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 05:30:11.992443 kubelet[2700]: I1213 05:30:11.992315 2700 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:30:11.992736 kubelet[2700]: I1213 05:30:11.992715 2700 kubelet.go:396] "Attempting to sync node with API server" Dec 13 05:30:11.992862 kubelet[2700]: I1213 05:30:11.992843 2700 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 05:30:11.995042 kubelet[2700]: I1213 05:30:11.994998 2700 kubelet.go:312] "Adding apiserver pod source" Dec 13 05:30:11.995654 kubelet[2700]: I1213 05:30:11.995180 2700 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 05:30:11.996429 kubelet[2700]: I1213 05:30:11.996404 2700 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 05:30:11.996878 kubelet[2700]: I1213 05:30:11.996857 2700 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 05:30:11.997923 kubelet[2700]: I1213 05:30:11.997901 2700 server.go:1256] "Started kubelet" Dec 13 05:30:12.005572 kubelet[2700]: I1213 05:30:12.004422 2700 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 05:30:12.032035 kubelet[2700]: I1213 05:30:12.031969 2700 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 05:30:12.040417 kubelet[2700]: I1213 05:30:12.037934 2700 server.go:461] "Adding debug handlers to kubelet server" Dec 13 05:30:12.041662 kubelet[2700]: I1213 05:30:12.041613 2700 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Dec 13 05:30:12.042644 kubelet[2700]: I1213 05:30:12.041928 2700 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 05:30:12.046198 kubelet[2700]: I1213 05:30:12.046164 2700 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 05:30:12.052574 kubelet[2700]: I1213 05:30:12.052376 2700 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 05:30:12.052901 kubelet[2700]: I1213 05:30:12.052768 2700 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 05:30:12.060700 kubelet[2700]: I1213 05:30:12.060540 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 05:30:12.063390 kubelet[2700]: I1213 05:30:12.063240 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 05:30:12.063390 kubelet[2700]: I1213 05:30:12.063312 2700 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 05:30:12.063531 kubelet[2700]: I1213 05:30:12.063413 2700 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 05:30:12.063531 kubelet[2700]: E1213 05:30:12.063524 2700 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 05:30:12.086583 kubelet[2700]: I1213 05:30:12.083062 2700 factory.go:221] Registration of the systemd container factory successfully Dec 13 05:30:12.086583 kubelet[2700]: I1213 05:30:12.085142 2700 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 05:30:12.088744 kubelet[2700]: E1213 05:30:12.088704 2700 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 05:30:12.090805 kubelet[2700]: I1213 05:30:12.090778 2700 factory.go:221] Registration of the containerd container factory successfully Dec 13 05:30:12.163685 kubelet[2700]: E1213 05:30:12.163611 2700 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 05:30:12.170435 kubelet[2700]: I1213 05:30:12.170383 2700 kubelet_node_status.go:73] "Attempting to register node" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:12.186715 kubelet[2700]: I1213 05:30:12.186395 2700 kubelet_node_status.go:112] "Node was previously registered" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:12.188778 kubelet[2700]: I1213 05:30:12.187654 2700 kubelet_node_status.go:76] "Successfully registered node" node="srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:12.213485 kubelet[2700]: I1213 05:30:12.213060 2700 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 05:30:12.213485 kubelet[2700]: I1213 05:30:12.213120 2700 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 05:30:12.213485 kubelet[2700]: I1213 05:30:12.213201 2700 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:30:12.214396 kubelet[2700]: I1213 05:30:12.214233 2700 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 05:30:12.214396 kubelet[2700]: I1213 05:30:12.214307 2700 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 05:30:12.214396 kubelet[2700]: I1213 05:30:12.214344 2700 policy_none.go:49] "None policy: Start" Dec 13 05:30:12.226598 kubelet[2700]: I1213 05:30:12.226258 2700 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 05:30:12.227660 kubelet[2700]: I1213 05:30:12.226825 2700 state_mem.go:35] "Initializing new in-memory state store" Dec 13 05:30:12.227660 kubelet[2700]: I1213 05:30:12.227180 2700 state_mem.go:75] "Updated machine memory state" Dec 13 05:30:12.241727 kubelet[2700]: I1213 05:30:12.240474 2700 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 05:30:12.243833 kubelet[2700]: I1213 05:30:12.243795 2700 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 05:30:12.364748 kubelet[2700]: I1213 05:30:12.364471 2700 topology_manager.go:215] "Topology Admit Handler" podUID="747cbb27462c822e9db8725a60879794" podNamespace="kube-system" podName="kube-apiserver-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:12.365583 kubelet[2700]: I1213 05:30:12.365213 2700 topology_manager.go:215] "Topology Admit Handler" podUID="38d849510ed4b5075706ad0d2cae5f4d" podNamespace="kube-system" podName="kube-controller-manager-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:12.365583 kubelet[2700]: I1213 05:30:12.365396 2700 topology_manager.go:215] "Topology Admit Handler" podUID="97b80d1ac4e0e9dc4382807861fcd2cd" podNamespace="kube-system" podName="kube-scheduler-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:12.381883 kubelet[2700]: W1213 05:30:12.380767 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 05:30:12.383811 kubelet[2700]: W1213 05:30:12.382484 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 05:30:12.384087 kubelet[2700]: W1213 05:30:12.383451 2700 warnings.go:70] metadata.name: this is 
used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 05:30:12.455330 kubelet[2700]: I1213 05:30:12.454779 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/747cbb27462c822e9db8725a60879794-usr-share-ca-certificates\") pod \"kube-apiserver-srv-zt5iu.gb1.brightbox.com\" (UID: \"747cbb27462c822e9db8725a60879794\") " pod="kube-system/kube-apiserver-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:12.455330 kubelet[2700]: I1213 05:30:12.454870 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38d849510ed4b5075706ad0d2cae5f4d-flexvolume-dir\") pod \"kube-controller-manager-srv-zt5iu.gb1.brightbox.com\" (UID: \"38d849510ed4b5075706ad0d2cae5f4d\") " pod="kube-system/kube-controller-manager-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:12.455330 kubelet[2700]: I1213 05:30:12.454907 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38d849510ed4b5075706ad0d2cae5f4d-k8s-certs\") pod \"kube-controller-manager-srv-zt5iu.gb1.brightbox.com\" (UID: \"38d849510ed4b5075706ad0d2cae5f4d\") " pod="kube-system/kube-controller-manager-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:12.455330 kubelet[2700]: I1213 05:30:12.454953 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38d849510ed4b5075706ad0d2cae5f4d-kubeconfig\") pod \"kube-controller-manager-srv-zt5iu.gb1.brightbox.com\" (UID: \"38d849510ed4b5075706ad0d2cae5f4d\") " pod="kube-system/kube-controller-manager-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:12.455330 kubelet[2700]: I1213 05:30:12.455011 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97b80d1ac4e0e9dc4382807861fcd2cd-kubeconfig\") pod \"kube-scheduler-srv-zt5iu.gb1.brightbox.com\" (UID: \"97b80d1ac4e0e9dc4382807861fcd2cd\") " pod="kube-system/kube-scheduler-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:12.456202 kubelet[2700]: I1213 05:30:12.455052 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/747cbb27462c822e9db8725a60879794-ca-certs\") pod \"kube-apiserver-srv-zt5iu.gb1.brightbox.com\" (UID: \"747cbb27462c822e9db8725a60879794\") " pod="kube-system/kube-apiserver-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:12.456202 kubelet[2700]: I1213 05:30:12.455098 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/747cbb27462c822e9db8725a60879794-k8s-certs\") pod \"kube-apiserver-srv-zt5iu.gb1.brightbox.com\" (UID: \"747cbb27462c822e9db8725a60879794\") " pod="kube-system/kube-apiserver-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:12.456202 kubelet[2700]: I1213 05:30:12.455150 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38d849510ed4b5075706ad0d2cae5f4d-ca-certs\") pod \"kube-controller-manager-srv-zt5iu.gb1.brightbox.com\" (UID: \"38d849510ed4b5075706ad0d2cae5f4d\") " 
pod="kube-system/kube-controller-manager-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:12.456202 kubelet[2700]: I1213 05:30:12.455193 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38d849510ed4b5075706ad0d2cae5f4d-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-zt5iu.gb1.brightbox.com\" (UID: \"38d849510ed4b5075706ad0d2cae5f4d\") " pod="kube-system/kube-controller-manager-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:13.016133 kubelet[2700]: I1213 05:30:13.016041 2700 apiserver.go:52] "Watching apiserver" Dec 13 05:30:13.053583 kubelet[2700]: I1213 05:30:13.053431 2700 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 05:30:13.132956 kubelet[2700]: W1213 05:30:13.132903 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 05:30:13.133217 kubelet[2700]: E1213 05:30:13.133039 2700 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-zt5iu.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-zt5iu.gb1.brightbox.com" Dec 13 05:30:13.241113 kubelet[2700]: I1213 05:30:13.241047 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-zt5iu.gb1.brightbox.com" podStartSLOduration=1.240939791 podStartE2EDuration="1.240939791s" podCreationTimestamp="2024-12-13 05:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:30:13.205771328 +0000 UTC m=+1.354718742" watchObservedRunningTime="2024-12-13 05:30:13.240939791 +0000 UTC m=+1.389887190" Dec 13 05:30:13.278944 kubelet[2700]: I1213 05:30:13.277601 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-zt5iu.gb1.brightbox.com" podStartSLOduration=1.277529208 podStartE2EDuration="1.277529208s" podCreationTimestamp="2024-12-13 05:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:30:13.24215882 +0000 UTC m=+1.391106247" watchObservedRunningTime="2024-12-13 05:30:13.277529208 +0000 UTC m=+1.426476620" Dec 13 05:30:13.306838 kubelet[2700]: I1213 05:30:13.306776 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-zt5iu.gb1.brightbox.com" podStartSLOduration=1.306704881 podStartE2EDuration="1.306704881s" podCreationTimestamp="2024-12-13 05:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:30:13.277941187 +0000 UTC m=+1.426888612" watchObservedRunningTime="2024-12-13 05:30:13.306704881 +0000 UTC m=+1.455652281" Dec 13 05:30:13.525272 sudo[1723]: pam_unix(sudo:session): session closed for user root Dec 13 05:30:13.672196 sshd[1720]: pam_unix(sshd:session): session closed for user core Dec 13 05:30:13.679764 systemd[1]: sshd@6-10.244.18.182:22-147.75.109.163:59526.service: Deactivated successfully. Dec 13 05:30:13.683361 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 05:30:13.683985 systemd[1]: session-9.scope: Consumed 6.243s CPU time, 190.9M memory peak, 0B memory swap peak. 
Dec 13 05:30:13.686552 systemd-logind[1489]: Session 9 logged out. Waiting for processes to exit. Dec 13 05:30:13.690035 systemd-logind[1489]: Removed session 9. Dec 13 05:30:24.116783 kubelet[2700]: I1213 05:30:24.116721 2700 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 05:30:24.119001 containerd[1511]: time="2024-12-13T05:30:24.118130109Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 05:30:24.119520 kubelet[2700]: I1213 05:30:24.118659 2700 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 05:30:24.773062 kubelet[2700]: I1213 05:30:24.772045 2700 topology_manager.go:215] "Topology Admit Handler" podUID="18a9a12e-5a35-471d-b996-5ddd9772179e" podNamespace="kube-system" podName="kube-proxy-k694q" Dec 13 05:30:24.776487 kubelet[2700]: I1213 05:30:24.776355 2700 topology_manager.go:215] "Topology Admit Handler" podUID="805b9c89-31ad-48ae-8e46-794b9ca9c4b7" podNamespace="kube-flannel" podName="kube-flannel-ds-w7n8j" Dec 13 05:30:24.787229 kubelet[2700]: W1213 05:30:24.787177 2700 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:srv-zt5iu.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-zt5iu.gb1.brightbox.com' and this object Dec 13 05:30:24.787487 kubelet[2700]: E1213 05:30:24.787250 2700 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:srv-zt5iu.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-zt5iu.gb1.brightbox.com' and this object Dec 13 05:30:24.787487 kubelet[2700]: W1213 05:30:24.787374 2700 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-zt5iu.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-zt5iu.gb1.brightbox.com' and this object Dec 13 05:30:24.787487 kubelet[2700]: E1213 05:30:24.787399 2700 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-zt5iu.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-zt5iu.gb1.brightbox.com' and this object Dec 13 05:30:24.787487 kubelet[2700]: W1213 05:30:24.787455 2700 reflector.go:539] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:srv-zt5iu.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'srv-zt5iu.gb1.brightbox.com' and this object Dec 13 05:30:24.787487 kubelet[2700]: E1213 05:30:24.787475 2700 reflector.go:147] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:srv-zt5iu.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no 
relationship found between node 'srv-zt5iu.gb1.brightbox.com' and this object Dec 13 05:30:24.787948 kubelet[2700]: W1213 05:30:24.787527 2700 reflector.go:539] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-zt5iu.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'srv-zt5iu.gb1.brightbox.com' and this object Dec 13 05:30:24.787948 kubelet[2700]: E1213 05:30:24.787548 2700 reflector.go:147] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-zt5iu.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'srv-zt5iu.gb1.brightbox.com' and this object Dec 13 05:30:24.796285 systemd[1]: Created slice kubepods-besteffort-pod18a9a12e_5a35_471d_b996_5ddd9772179e.slice - libcontainer container kubepods-besteffort-pod18a9a12e_5a35_471d_b996_5ddd9772179e.slice. Dec 13 05:30:24.813766 systemd[1]: Created slice kubepods-burstable-pod805b9c89_31ad_48ae_8e46_794b9ca9c4b7.slice - libcontainer container kubepods-burstable-pod805b9c89_31ad_48ae_8e46_794b9ca9c4b7.slice. Dec 13 05:30:24.848991 kubelet[2700]: I1213 05:30:24.848300 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18a9a12e-5a35-471d-b996-5ddd9772179e-xtables-lock\") pod \"kube-proxy-k694q\" (UID: \"18a9a12e-5a35-471d-b996-5ddd9772179e\") " pod="kube-system/kube-proxy-k694q" Dec 13 05:30:24.848991 kubelet[2700]: I1213 05:30:24.848375 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/805b9c89-31ad-48ae-8e46-794b9ca9c4b7-xtables-lock\") pod \"kube-flannel-ds-w7n8j\" (UID: \"805b9c89-31ad-48ae-8e46-794b9ca9c4b7\") " pod="kube-flannel/kube-flannel-ds-w7n8j" Dec 13 05:30:24.848991 kubelet[2700]: I1213 05:30:24.848415 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/18a9a12e-5a35-471d-b996-5ddd9772179e-kube-proxy\") pod \"kube-proxy-k694q\" (UID: \"18a9a12e-5a35-471d-b996-5ddd9772179e\") " pod="kube-system/kube-proxy-k694q" Dec 13 05:30:24.848991 kubelet[2700]: I1213 05:30:24.848453 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/805b9c89-31ad-48ae-8e46-794b9ca9c4b7-run\") pod \"kube-flannel-ds-w7n8j\" (UID: \"805b9c89-31ad-48ae-8e46-794b9ca9c4b7\") " pod="kube-flannel/kube-flannel-ds-w7n8j" Dec 13 05:30:24.848991 kubelet[2700]: I1213 05:30:24.848492 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgg5v\" (UniqueName: \"kubernetes.io/projected/18a9a12e-5a35-471d-b996-5ddd9772179e-kube-api-access-bgg5v\") pod \"kube-proxy-k694q\" (UID: \"18a9a12e-5a35-471d-b996-5ddd9772179e\") " pod="kube-system/kube-proxy-k694q" Dec 13 05:30:24.849378 kubelet[2700]: I1213 05:30:24.848527 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/805b9c89-31ad-48ae-8e46-794b9ca9c4b7-cni-plugin\") pod \"kube-flannel-ds-w7n8j\" (UID: 
\"805b9c89-31ad-48ae-8e46-794b9ca9c4b7\") " pod="kube-flannel/kube-flannel-ds-w7n8j" Dec 13 05:30:24.849378 kubelet[2700]: I1213 05:30:24.848563 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/805b9c89-31ad-48ae-8e46-794b9ca9c4b7-flannel-cfg\") pod \"kube-flannel-ds-w7n8j\" (UID: \"805b9c89-31ad-48ae-8e46-794b9ca9c4b7\") " pod="kube-flannel/kube-flannel-ds-w7n8j" Dec 13 05:30:24.849378 kubelet[2700]: I1213 05:30:24.848605 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/805b9c89-31ad-48ae-8e46-794b9ca9c4b7-cni\") pod \"kube-flannel-ds-w7n8j\" (UID: \"805b9c89-31ad-48ae-8e46-794b9ca9c4b7\") " pod="kube-flannel/kube-flannel-ds-w7n8j" Dec 13 05:30:24.849378 kubelet[2700]: I1213 05:30:24.848694 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6zsv\" (UniqueName: \"kubernetes.io/projected/805b9c89-31ad-48ae-8e46-794b9ca9c4b7-kube-api-access-g6zsv\") pod \"kube-flannel-ds-w7n8j\" (UID: \"805b9c89-31ad-48ae-8e46-794b9ca9c4b7\") " pod="kube-flannel/kube-flannel-ds-w7n8j" Dec 13 05:30:24.849378 kubelet[2700]: I1213 05:30:24.848737 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18a9a12e-5a35-471d-b996-5ddd9772179e-lib-modules\") pod \"kube-proxy-k694q\" (UID: \"18a9a12e-5a35-471d-b996-5ddd9772179e\") " pod="kube-system/kube-proxy-k694q" Dec 13 05:30:25.965366 kubelet[2700]: E1213 05:30:25.965292 2700 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 05:30:25.968190 kubelet[2700]: E1213 05:30:25.966442 2700 projected.go:200] Error preparing data for projected volume kube-api-access-bgg5v for pod kube-system/kube-proxy-k694q: failed to sync configmap cache: timed out waiting for the condition Dec 13 05:30:25.968190 kubelet[2700]: E1213 05:30:25.966338 2700 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 05:30:25.968190 kubelet[2700]: E1213 05:30:25.966537 2700 projected.go:200] Error preparing data for projected volume kube-api-access-g6zsv for pod kube-flannel/kube-flannel-ds-w7n8j: failed to sync configmap cache: timed out waiting for the condition Dec 13 05:30:25.968190 kubelet[2700]: E1213 05:30:25.966651 2700 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18a9a12e-5a35-471d-b996-5ddd9772179e-kube-api-access-bgg5v podName:18a9a12e-5a35-471d-b996-5ddd9772179e nodeName:}" failed. No retries permitted until 2024-12-13 05:30:26.466581137 +0000 UTC m=+14.615528534 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bgg5v" (UniqueName: "kubernetes.io/projected/18a9a12e-5a35-471d-b996-5ddd9772179e-kube-api-access-bgg5v") pod "kube-proxy-k694q" (UID: "18a9a12e-5a35-471d-b996-5ddd9772179e") : failed to sync configmap cache: timed out waiting for the condition Dec 13 05:30:25.968190 kubelet[2700]: E1213 05:30:25.966683 2700 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/805b9c89-31ad-48ae-8e46-794b9ca9c4b7-kube-api-access-g6zsv podName:805b9c89-31ad-48ae-8e46-794b9ca9c4b7 nodeName:}" failed. 
No retries permitted until 2024-12-13 05:30:26.466669694 +0000 UTC m=+14.615617085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g6zsv" (UniqueName: "kubernetes.io/projected/805b9c89-31ad-48ae-8e46-794b9ca9c4b7-kube-api-access-g6zsv") pod "kube-flannel-ds-w7n8j" (UID: "805b9c89-31ad-48ae-8e46-794b9ca9c4b7") : failed to sync configmap cache: timed out waiting for the condition Dec 13 05:30:26.610697 containerd[1511]: time="2024-12-13T05:30:26.609921277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k694q,Uid:18a9a12e-5a35-471d-b996-5ddd9772179e,Namespace:kube-system,Attempt:0,}" Dec 13 05:30:26.619409 containerd[1511]: time="2024-12-13T05:30:26.619353351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-w7n8j,Uid:805b9c89-31ad-48ae-8e46-794b9ca9c4b7,Namespace:kube-flannel,Attempt:0,}" Dec 13 05:30:26.674245 containerd[1511]: time="2024-12-13T05:30:26.673564843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:30:26.676293 containerd[1511]: time="2024-12-13T05:30:26.675758297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:30:26.676293 containerd[1511]: time="2024-12-13T05:30:26.675805217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:30:26.676293 containerd[1511]: time="2024-12-13T05:30:26.676006291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:30:26.679877 containerd[1511]: time="2024-12-13T05:30:26.678937091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:30:26.680840 containerd[1511]: time="2024-12-13T05:30:26.680467011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:30:26.680840 containerd[1511]: time="2024-12-13T05:30:26.680563160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:30:26.680840 containerd[1511]: time="2024-12-13T05:30:26.680732144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:30:26.735296 systemd[1]: Started cri-containerd-68a8eb9ff8db540b65af2dad79f8e635943f3482a42e4b3064fc12e1d1c8002f.scope - libcontainer container 68a8eb9ff8db540b65af2dad79f8e635943f3482a42e4b3064fc12e1d1c8002f. Dec 13 05:30:26.740497 systemd[1]: Started cri-containerd-fb5070e0c8e9be3ff30906ea00ec55923b00d40c5a00477b3ca4eb182dc30d3a.scope - libcontainer container fb5070e0c8e9be3ff30906ea00ec55923b00d40c5a00477b3ca4eb182dc30d3a. 
Dec 13 05:30:26.802712 containerd[1511]: time="2024-12-13T05:30:26.802594720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k694q,Uid:18a9a12e-5a35-471d-b996-5ddd9772179e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb5070e0c8e9be3ff30906ea00ec55923b00d40c5a00477b3ca4eb182dc30d3a\"" Dec 13 05:30:26.813856 containerd[1511]: time="2024-12-13T05:30:26.813109942Z" level=info msg="CreateContainer within sandbox \"fb5070e0c8e9be3ff30906ea00ec55923b00d40c5a00477b3ca4eb182dc30d3a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 05:30:26.843602 containerd[1511]: time="2024-12-13T05:30:26.843241614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-w7n8j,Uid:805b9c89-31ad-48ae-8e46-794b9ca9c4b7,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"68a8eb9ff8db540b65af2dad79f8e635943f3482a42e4b3064fc12e1d1c8002f\"" Dec 13 05:30:26.847983 containerd[1511]: time="2024-12-13T05:30:26.847937586Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 05:30:26.855937 containerd[1511]: time="2024-12-13T05:30:26.855862133Z" level=info msg="CreateContainer within sandbox \"fb5070e0c8e9be3ff30906ea00ec55923b00d40c5a00477b3ca4eb182dc30d3a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f98a0d6d51a40bc2827011ee138764cf9106617990e14e969744bdad7155b1fa\"" Dec 13 05:30:26.858072 containerd[1511]: time="2024-12-13T05:30:26.858009709Z" level=info msg="StartContainer for \"f98a0d6d51a40bc2827011ee138764cf9106617990e14e969744bdad7155b1fa\"" Dec 13 05:30:26.905248 systemd[1]: Started cri-containerd-f98a0d6d51a40bc2827011ee138764cf9106617990e14e969744bdad7155b1fa.scope - libcontainer container f98a0d6d51a40bc2827011ee138764cf9106617990e14e969744bdad7155b1fa. Dec 13 05:30:26.956594 containerd[1511]: time="2024-12-13T05:30:26.956215731Z" level=info msg="StartContainer for \"f98a0d6d51a40bc2827011ee138764cf9106617990e14e969744bdad7155b1fa\" returns successfully" Dec 13 05:30:27.181343 kubelet[2700]: I1213 05:30:27.180774 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-k694q" podStartSLOduration=3.179056978 podStartE2EDuration="3.179056978s" podCreationTimestamp="2024-12-13 05:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:30:27.178216403 +0000 UTC m=+15.327163808" watchObservedRunningTime="2024-12-13 05:30:27.179056978 +0000 UTC m=+15.328004395" Dec 13 05:30:29.393573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3421094131.mount: Deactivated successfully. 
Dec 13 05:30:29.453186 containerd[1511]: time="2024-12-13T05:30:29.453104899Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:30:29.454668 containerd[1511]: time="2024-12-13T05:30:29.454473410Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852936" Dec 13 05:30:29.456158 containerd[1511]: time="2024-12-13T05:30:29.456091795Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:30:29.459760 containerd[1511]: time="2024-12-13T05:30:29.459687667Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:30:29.461193 containerd[1511]: time="2024-12-13T05:30:29.460883933Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.612216968s" Dec 13 05:30:29.461193 containerd[1511]: time="2024-12-13T05:30:29.460935422Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 13 05:30:29.464468 containerd[1511]: time="2024-12-13T05:30:29.464429296Z" level=info msg="CreateContainer within sandbox \"68a8eb9ff8db540b65af2dad79f8e635943f3482a42e4b3064fc12e1d1c8002f\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 05:30:29.481390 containerd[1511]: time="2024-12-13T05:30:29.481269201Z" level=info msg="CreateContainer within sandbox \"68a8eb9ff8db540b65af2dad79f8e635943f3482a42e4b3064fc12e1d1c8002f\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"3d35e4552b0132cfa7a0bd979f2d3bd4e6f5781b93e46381260dc2785c930ee7\"" Dec 13 05:30:29.483550 containerd[1511]: time="2024-12-13T05:30:29.482475568Z" level=info msg="StartContainer for \"3d35e4552b0132cfa7a0bd979f2d3bd4e6f5781b93e46381260dc2785c930ee7\"" Dec 13 05:30:29.531457 systemd[1]: Started cri-containerd-3d35e4552b0132cfa7a0bd979f2d3bd4e6f5781b93e46381260dc2785c930ee7.scope - libcontainer container 3d35e4552b0132cfa7a0bd979f2d3bd4e6f5781b93e46381260dc2785c930ee7. Dec 13 05:30:29.573318 systemd[1]: cri-containerd-3d35e4552b0132cfa7a0bd979f2d3bd4e6f5781b93e46381260dc2785c930ee7.scope: Deactivated successfully. 
Dec 13 05:30:29.575396 containerd[1511]: time="2024-12-13T05:30:29.575104545Z" level=info msg="StartContainer for \"3d35e4552b0132cfa7a0bd979f2d3bd4e6f5781b93e46381260dc2785c930ee7\" returns successfully" Dec 13 05:30:29.614687 containerd[1511]: time="2024-12-13T05:30:29.614558335Z" level=info msg="shim disconnected" id=3d35e4552b0132cfa7a0bd979f2d3bd4e6f5781b93e46381260dc2785c930ee7 namespace=k8s.io Dec 13 05:30:29.614687 containerd[1511]: time="2024-12-13T05:30:29.614687486Z" level=warning msg="cleaning up after shim disconnected" id=3d35e4552b0132cfa7a0bd979f2d3bd4e6f5781b93e46381260dc2785c930ee7 namespace=k8s.io Dec 13 05:30:29.614687 containerd[1511]: time="2024-12-13T05:30:29.614709498Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 05:30:29.634715 containerd[1511]: time="2024-12-13T05:30:29.634619616Z" level=warning msg="cleanup warnings time=\"2024-12-13T05:30:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 05:30:30.174257 containerd[1511]: time="2024-12-13T05:30:30.174185606Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 05:30:30.241180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d35e4552b0132cfa7a0bd979f2d3bd4e6f5781b93e46381260dc2785c930ee7-rootfs.mount: Deactivated successfully. Dec 13 05:30:32.664567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007590695.mount: Deactivated successfully. Dec 13 05:30:35.180105 containerd[1511]: time="2024-12-13T05:30:35.178317488Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:30:35.180105 containerd[1511]: time="2024-12-13T05:30:35.179946620Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866357" Dec 13 05:30:35.181035 containerd[1511]: time="2024-12-13T05:30:35.180996141Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:30:35.186952 containerd[1511]: time="2024-12-13T05:30:35.186850306Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:30:35.189692 containerd[1511]: time="2024-12-13T05:30:35.189600143Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 5.015355048s" Dec 13 05:30:35.189692 containerd[1511]: time="2024-12-13T05:30:35.189691051Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 13 05:30:35.197925 containerd[1511]: time="2024-12-13T05:30:35.197320725Z" level=info msg="CreateContainer within sandbox \"68a8eb9ff8db540b65af2dad79f8e635943f3482a42e4b3064fc12e1d1c8002f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 05:30:35.233143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2503754335.mount: Deactivated successfully. 
Dec 13 05:30:35.239055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount649741888.mount: Deactivated successfully. Dec 13 05:30:35.245080 containerd[1511]: time="2024-12-13T05:30:35.244980324Z" level=info msg="CreateContainer within sandbox \"68a8eb9ff8db540b65af2dad79f8e635943f3482a42e4b3064fc12e1d1c8002f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"913da992fa1b06f2f626dc1faa9b2043fb4cf1f6886c8b8cc58d972f662081f0\"" Dec 13 05:30:35.251999 containerd[1511]: time="2024-12-13T05:30:35.251935541Z" level=info msg="StartContainer for \"913da992fa1b06f2f626dc1faa9b2043fb4cf1f6886c8b8cc58d972f662081f0\"" Dec 13 05:30:35.315061 systemd[1]: Started cri-containerd-913da992fa1b06f2f626dc1faa9b2043fb4cf1f6886c8b8cc58d972f662081f0.scope - libcontainer container 913da992fa1b06f2f626dc1faa9b2043fb4cf1f6886c8b8cc58d972f662081f0. Dec 13 05:30:35.371169 systemd[1]: cri-containerd-913da992fa1b06f2f626dc1faa9b2043fb4cf1f6886c8b8cc58d972f662081f0.scope: Deactivated successfully. Dec 13 05:30:35.379693 containerd[1511]: time="2024-12-13T05:30:35.379509548Z" level=info msg="StartContainer for \"913da992fa1b06f2f626dc1faa9b2043fb4cf1f6886c8b8cc58d972f662081f0\" returns successfully" Dec 13 05:30:35.481009 kubelet[2700]: I1213 05:30:35.480553 2700 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 05:30:35.585678 containerd[1511]: time="2024-12-13T05:30:35.582686617Z" level=info msg="shim disconnected" id=913da992fa1b06f2f626dc1faa9b2043fb4cf1f6886c8b8cc58d972f662081f0 namespace=k8s.io Dec 13 05:30:35.585678 containerd[1511]: time="2024-12-13T05:30:35.582775727Z" level=warning msg="cleaning up after shim disconnected" id=913da992fa1b06f2f626dc1faa9b2043fb4cf1f6886c8b8cc58d972f662081f0 namespace=k8s.io Dec 13 05:30:35.585678 containerd[1511]: time="2024-12-13T05:30:35.582795754Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 05:30:35.645020 kubelet[2700]: I1213 05:30:35.644936 2700 topology_manager.go:215] "Topology Admit Handler" podUID="03eed2e6-77c4-401c-82a8-8180c79dd6bd" podNamespace="kube-system" podName="coredns-76f75df574-h67xt" Dec 13 05:30:35.645253 kubelet[2700]: I1213 05:30:35.645198 2700 topology_manager.go:215] "Topology Admit Handler" podUID="1e7b4c57-b5f4-42b4-a70f-a16fa0067630" podNamespace="kube-system" podName="coredns-76f75df574-75cf4" Dec 13 05:30:35.657544 systemd[1]: Created slice kubepods-burstable-pod1e7b4c57_b5f4_42b4_a70f_a16fa0067630.slice - libcontainer container kubepods-burstable-pod1e7b4c57_b5f4_42b4_a70f_a16fa0067630.slice. Dec 13 05:30:35.684547 systemd[1]: Created slice kubepods-burstable-pod03eed2e6_77c4_401c_82a8_8180c79dd6bd.slice - libcontainer container kubepods-burstable-pod03eed2e6_77c4_401c_82a8_8180c79dd6bd.slice. 
Dec 13 05:30:35.736391 kubelet[2700]: I1213 05:30:35.735686 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e7b4c57-b5f4-42b4-a70f-a16fa0067630-config-volume\") pod \"coredns-76f75df574-75cf4\" (UID: \"1e7b4c57-b5f4-42b4-a70f-a16fa0067630\") " pod="kube-system/coredns-76f75df574-75cf4" Dec 13 05:30:35.737778 kubelet[2700]: I1213 05:30:35.737740 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03eed2e6-77c4-401c-82a8-8180c79dd6bd-config-volume\") pod \"coredns-76f75df574-h67xt\" (UID: \"03eed2e6-77c4-401c-82a8-8180c79dd6bd\") " pod="kube-system/coredns-76f75df574-h67xt" Dec 13 05:30:35.737881 kubelet[2700]: I1213 05:30:35.737803 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqn2z\" (UniqueName: \"kubernetes.io/projected/03eed2e6-77c4-401c-82a8-8180c79dd6bd-kube-api-access-sqn2z\") pod \"coredns-76f75df574-h67xt\" (UID: \"03eed2e6-77c4-401c-82a8-8180c79dd6bd\") " pod="kube-system/coredns-76f75df574-h67xt" Dec 13 05:30:35.737881 kubelet[2700]: I1213 05:30:35.737844 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlnjl\" (UniqueName: \"kubernetes.io/projected/1e7b4c57-b5f4-42b4-a70f-a16fa0067630-kube-api-access-jlnjl\") pod \"coredns-76f75df574-75cf4\" (UID: \"1e7b4c57-b5f4-42b4-a70f-a16fa0067630\") " pod="kube-system/coredns-76f75df574-75cf4" Dec 13 05:30:35.978577 containerd[1511]: time="2024-12-13T05:30:35.978505979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-75cf4,Uid:1e7b4c57-b5f4-42b4-a70f-a16fa0067630,Namespace:kube-system,Attempt:0,}" Dec 13 05:30:35.994275 containerd[1511]: time="2024-12-13T05:30:35.993826258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h67xt,Uid:03eed2e6-77c4-401c-82a8-8180c79dd6bd,Namespace:kube-system,Attempt:0,}" Dec 13 05:30:36.034091 containerd[1511]: time="2024-12-13T05:30:36.034021152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-75cf4,Uid:1e7b4c57-b5f4-42b4-a70f-a16fa0067630,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6ce4b562898cb8f8aeedb28cf091c313c01d333ae1778c8cf48288ba72ec130\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 05:30:36.035449 kubelet[2700]: E1213 05:30:36.035414 2700 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ce4b562898cb8f8aeedb28cf091c313c01d333ae1778c8cf48288ba72ec130\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 05:30:36.035567 kubelet[2700]: E1213 05:30:36.035516 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ce4b562898cb8f8aeedb28cf091c313c01d333ae1778c8cf48288ba72ec130\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-75cf4" Dec 13 05:30:36.035567 kubelet[2700]: E1213 05:30:36.035559 2700 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ce4b562898cb8f8aeedb28cf091c313c01d333ae1778c8cf48288ba72ec130\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-75cf4" Dec 13 05:30:36.035789 kubelet[2700]: E1213 05:30:36.035685 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-75cf4_kube-system(1e7b4c57-b5f4-42b4-a70f-a16fa0067630)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-75cf4_kube-system(1e7b4c57-b5f4-42b4-a70f-a16fa0067630)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6ce4b562898cb8f8aeedb28cf091c313c01d333ae1778c8cf48288ba72ec130\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-75cf4" podUID="1e7b4c57-b5f4-42b4-a70f-a16fa0067630" Dec 13 05:30:36.052959 containerd[1511]: time="2024-12-13T05:30:36.052768932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h67xt,Uid:03eed2e6-77c4-401c-82a8-8180c79dd6bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"795c5b78be6f2110d2195f2c5cac5b99dd07c4c4dcb051e8cb2fa43847cc14e3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 05:30:36.053244 kubelet[2700]: E1213 05:30:36.053171 2700 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"795c5b78be6f2110d2195f2c5cac5b99dd07c4c4dcb051e8cb2fa43847cc14e3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 05:30:36.053244 kubelet[2700]: E1213 05:30:36.053257 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"795c5b78be6f2110d2195f2c5cac5b99dd07c4c4dcb051e8cb2fa43847cc14e3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-h67xt" Dec 13 05:30:36.053475 kubelet[2700]: E1213 05:30:36.053290 2700 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"795c5b78be6f2110d2195f2c5cac5b99dd07c4c4dcb051e8cb2fa43847cc14e3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-h67xt" Dec 13 05:30:36.053475 kubelet[2700]: E1213 05:30:36.053376 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-h67xt_kube-system(03eed2e6-77c4-401c-82a8-8180c79dd6bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-h67xt_kube-system(03eed2e6-77c4-401c-82a8-8180c79dd6bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"795c5b78be6f2110d2195f2c5cac5b99dd07c4c4dcb051e8cb2fa43847cc14e3\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-h67xt" 
podUID="03eed2e6-77c4-401c-82a8-8180c79dd6bd" Dec 13 05:30:36.195231 containerd[1511]: time="2024-12-13T05:30:36.195049471Z" level=info msg="CreateContainer within sandbox \"68a8eb9ff8db540b65af2dad79f8e635943f3482a42e4b3064fc12e1d1c8002f\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 05:30:36.213794 containerd[1511]: time="2024-12-13T05:30:36.213378162Z" level=info msg="CreateContainer within sandbox \"68a8eb9ff8db540b65af2dad79f8e635943f3482a42e4b3064fc12e1d1c8002f\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"ca6f351893dd57ae86030390470fd5613e7d5c2f53f0445f982c2fef621054bb\"" Dec 13 05:30:36.214919 containerd[1511]: time="2024-12-13T05:30:36.214816150Z" level=info msg="StartContainer for \"ca6f351893dd57ae86030390470fd5613e7d5c2f53f0445f982c2fef621054bb\"" Dec 13 05:30:36.228790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-913da992fa1b06f2f626dc1faa9b2043fb4cf1f6886c8b8cc58d972f662081f0-rootfs.mount: Deactivated successfully. Dec 13 05:30:36.266141 systemd[1]: run-containerd-runc-k8s.io-ca6f351893dd57ae86030390470fd5613e7d5c2f53f0445f982c2fef621054bb-runc.15MEOp.mount: Deactivated successfully. Dec 13 05:30:36.274873 systemd[1]: Started cri-containerd-ca6f351893dd57ae86030390470fd5613e7d5c2f53f0445f982c2fef621054bb.scope - libcontainer container ca6f351893dd57ae86030390470fd5613e7d5c2f53f0445f982c2fef621054bb. Dec 13 05:30:36.318614 containerd[1511]: time="2024-12-13T05:30:36.318443246Z" level=info msg="StartContainer for \"ca6f351893dd57ae86030390470fd5613e7d5c2f53f0445f982c2fef621054bb\" returns successfully" Dec 13 05:30:37.414400 systemd-networkd[1429]: flannel.1: Link UP Dec 13 05:30:37.414431 systemd-networkd[1429]: flannel.1: Gained carrier Dec 13 05:30:39.242944 systemd-networkd[1429]: flannel.1: Gained IPv6LL Dec 13 05:30:48.068110 containerd[1511]: time="2024-12-13T05:30:48.067690340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h67xt,Uid:03eed2e6-77c4-401c-82a8-8180c79dd6bd,Namespace:kube-system,Attempt:0,}" Dec 13 05:30:48.108490 systemd-networkd[1429]: cni0: Link UP Dec 13 05:30:48.108504 systemd-networkd[1429]: cni0: Gained carrier Dec 13 05:30:48.117470 systemd-networkd[1429]: cni0: Lost carrier Dec 13 05:30:48.121861 systemd-networkd[1429]: veth02b4cd84: Link UP Dec 13 05:30:48.128810 kernel: cni0: port 1(veth02b4cd84) entered blocking state Dec 13 05:30:48.129017 kernel: cni0: port 1(veth02b4cd84) entered disabled state Dec 13 05:30:48.135658 kernel: veth02b4cd84: entered allmulticast mode Dec 13 05:30:48.137686 kernel: veth02b4cd84: entered promiscuous mode Dec 13 05:30:48.140261 kernel: cni0: port 1(veth02b4cd84) entered blocking state Dec 13 05:30:48.140361 kernel: cni0: port 1(veth02b4cd84) entered forwarding state Dec 13 05:30:48.140402 kernel: cni0: port 1(veth02b4cd84) entered disabled state Dec 13 05:30:48.155678 kernel: cni0: port 1(veth02b4cd84) entered blocking state Dec 13 05:30:48.155783 kernel: cni0: port 1(veth02b4cd84) entered forwarding state Dec 13 05:30:48.156544 systemd-networkd[1429]: veth02b4cd84: Gained carrier Dec 13 05:30:48.157278 systemd-networkd[1429]: cni0: Gained carrier Dec 13 05:30:48.165090 containerd[1511]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, 
Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Dec 13 05:30:48.165090 containerd[1511]: delegateAdd: netconf sent to delegate plugin: Dec 13 05:30:48.206046 containerd[1511]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T05:30:48.205885943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:30:48.206921 containerd[1511]: time="2024-12-13T05:30:48.205987988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:30:48.206921 containerd[1511]: time="2024-12-13T05:30:48.206696028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:30:48.206921 containerd[1511]: time="2024-12-13T05:30:48.206854139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:30:48.237986 systemd[1]: run-containerd-runc-k8s.io-e4bffb3bc004f3d9ff173898dbd76605c3ef5f11d74bb8f29b25177e6ebbed6c-runc.RyKjlb.mount: Deactivated successfully. Dec 13 05:30:48.255871 systemd[1]: Started cri-containerd-e4bffb3bc004f3d9ff173898dbd76605c3ef5f11d74bb8f29b25177e6ebbed6c.scope - libcontainer container e4bffb3bc004f3d9ff173898dbd76605c3ef5f11d74bb8f29b25177e6ebbed6c. Dec 13 05:30:48.320963 containerd[1511]: time="2024-12-13T05:30:48.320717793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h67xt,Uid:03eed2e6-77c4-401c-82a8-8180c79dd6bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4bffb3bc004f3d9ff173898dbd76605c3ef5f11d74bb8f29b25177e6ebbed6c\"" Dec 13 05:30:48.328193 containerd[1511]: time="2024-12-13T05:30:48.328138214Z" level=info msg="CreateContainer within sandbox \"e4bffb3bc004f3d9ff173898dbd76605c3ef5f11d74bb8f29b25177e6ebbed6c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 05:30:48.354218 containerd[1511]: time="2024-12-13T05:30:48.354130616Z" level=info msg="CreateContainer within sandbox \"e4bffb3bc004f3d9ff173898dbd76605c3ef5f11d74bb8f29b25177e6ebbed6c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c117d7ef71af39aff7bc7ed548462598d2acc233b3514122a267ce58b35ee077\"" Dec 13 05:30:48.357465 containerd[1511]: time="2024-12-13T05:30:48.356307795Z" level=info msg="StartContainer for \"c117d7ef71af39aff7bc7ed548462598d2acc233b3514122a267ce58b35ee077\"" Dec 13 05:30:48.391026 systemd[1]: Started cri-containerd-c117d7ef71af39aff7bc7ed548462598d2acc233b3514122a267ce58b35ee077.scope - libcontainer container c117d7ef71af39aff7bc7ed548462598d2acc233b3514122a267ce58b35ee077. 
Dec 13 05:30:48.431399 containerd[1511]: time="2024-12-13T05:30:48.431323868Z" level=info msg="StartContainer for \"c117d7ef71af39aff7bc7ed548462598d2acc233b3514122a267ce58b35ee077\" returns successfully" Dec 13 05:30:49.227016 systemd-networkd[1429]: veth02b4cd84: Gained IPv6LL Dec 13 05:30:49.284709 kubelet[2700]: I1213 05:30:49.282715 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-w7n8j" podStartSLOduration=16.935345354 podStartE2EDuration="25.282572918s" podCreationTimestamp="2024-12-13 05:30:24 +0000 UTC" firstStartedPulling="2024-12-13 05:30:26.845858556 +0000 UTC m=+14.994805953" lastFinishedPulling="2024-12-13 05:30:35.193086115 +0000 UTC m=+23.342033517" observedRunningTime="2024-12-13 05:30:37.231872537 +0000 UTC m=+25.380819952" watchObservedRunningTime="2024-12-13 05:30:49.282572918 +0000 UTC m=+37.431520328" Dec 13 05:30:49.284709 kubelet[2700]: I1213 05:30:49.282975 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-h67xt" podStartSLOduration=24.282947297 podStartE2EDuration="24.282947297s" podCreationTimestamp="2024-12-13 05:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:30:49.282230593 +0000 UTC m=+37.431178030" watchObservedRunningTime="2024-12-13 05:30:49.282947297 +0000 UTC m=+37.431894736" Dec 13 05:30:50.066566 containerd[1511]: time="2024-12-13T05:30:50.065969398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-75cf4,Uid:1e7b4c57-b5f4-42b4-a70f-a16fa0067630,Namespace:kube-system,Attempt:0,}" Dec 13 05:30:50.106456 kernel: cni0: port 2(veth100fa481) entered blocking state Dec 13 05:30:50.106749 kernel: cni0: port 2(veth100fa481) entered disabled state Dec 13 05:30:50.107068 systemd-networkd[1429]: veth100fa481: Link UP Dec 13 05:30:50.109243 kernel: veth100fa481: entered allmulticast mode Dec 13 05:30:50.109357 kernel: veth100fa481: entered promiscuous mode Dec 13 05:30:50.120214 kernel: cni0: port 2(veth100fa481) entered blocking state Dec 13 05:30:50.120321 kernel: cni0: port 2(veth100fa481) entered forwarding state Dec 13 05:30:50.120590 systemd-networkd[1429]: veth100fa481: Gained carrier Dec 13 05:30:50.123461 systemd-networkd[1429]: cni0: Gained IPv6LL Dec 13 05:30:50.126812 containerd[1511]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"} Dec 13 05:30:50.126812 containerd[1511]: delegateAdd: netconf sent to delegate plugin: Dec 13 05:30:50.170312 containerd[1511]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T05:30:50.169741522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:30:50.170312 containerd[1511]: time="2024-12-13T05:30:50.169819721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:30:50.170312 containerd[1511]: time="2024-12-13T05:30:50.169837090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:30:50.170312 containerd[1511]: time="2024-12-13T05:30:50.169946934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:30:50.204911 systemd[1]: Started cri-containerd-1191ca8be867b8177dec93bbebbeb59f9d00b5672e13acc05b5622b1379a1461.scope - libcontainer container 1191ca8be867b8177dec93bbebbeb59f9d00b5672e13acc05b5622b1379a1461. Dec 13 05:30:50.268554 containerd[1511]: time="2024-12-13T05:30:50.268140579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-75cf4,Uid:1e7b4c57-b5f4-42b4-a70f-a16fa0067630,Namespace:kube-system,Attempt:0,} returns sandbox id \"1191ca8be867b8177dec93bbebbeb59f9d00b5672e13acc05b5622b1379a1461\"" Dec 13 05:30:50.275825 containerd[1511]: time="2024-12-13T05:30:50.275582241Z" level=info msg="CreateContainer within sandbox \"1191ca8be867b8177dec93bbebbeb59f9d00b5672e13acc05b5622b1379a1461\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 05:30:50.293590 containerd[1511]: time="2024-12-13T05:30:50.293492858Z" level=info msg="CreateContainer within sandbox \"1191ca8be867b8177dec93bbebbeb59f9d00b5672e13acc05b5622b1379a1461\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"561f0f1024209603f24d509c8451a732b21730e7d56ec53185286f13a1079b08\"" Dec 13 05:30:50.294735 containerd[1511]: time="2024-12-13T05:30:50.294699438Z" level=info msg="StartContainer for \"561f0f1024209603f24d509c8451a732b21730e7d56ec53185286f13a1079b08\"" Dec 13 05:30:50.336405 systemd[1]: Started cri-containerd-561f0f1024209603f24d509c8451a732b21730e7d56ec53185286f13a1079b08.scope - libcontainer container 561f0f1024209603f24d509c8451a732b21730e7d56ec53185286f13a1079b08. Dec 13 05:30:50.381319 containerd[1511]: time="2024-12-13T05:30:50.381114538Z" level=info msg="StartContainer for \"561f0f1024209603f24d509c8451a732b21730e7d56ec53185286f13a1079b08\" returns successfully" Dec 13 05:30:51.213986 systemd-networkd[1429]: veth100fa481: Gained IPv6LL Dec 13 05:30:51.297766 kubelet[2700]: I1213 05:30:51.296956 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-75cf4" podStartSLOduration=26.296850582 podStartE2EDuration="26.296850582s" podCreationTimestamp="2024-12-13 05:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:30:51.295537292 +0000 UTC m=+39.444484699" watchObservedRunningTime="2024-12-13 05:30:51.296850582 +0000 UTC m=+39.445797986" Dec 13 05:31:18.684069 systemd[1]: Started sshd@7-10.244.18.182:22-147.75.109.163:52494.service - OpenSSH per-connection server daemon (147.75.109.163:52494). 
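The kubelet pod_startup_latency_tracker entries above report both a podStartSLOduration and a podStartE2EDuration together with the image-pull window. The logged numbers are consistent with the SLO figure being the end-to-end startup time minus the time spent pulling images; the short Go sketch below (not taken from the log, with timestamps copied from the kube-flannel-ds-w7n8j entry) reproduces that arithmetic under this assumption.

```go
// Minimal sketch reproducing the arithmetic behind the kubelet
// "Observed pod startup duration" entries above, using the timestamps logged
// for kube-flannel-ds-w7n8j. The logged values are consistent with:
//   E2E duration = observedRunningTime - podCreationTimestamp
//   SLO duration = E2E duration - (lastFinishedPulling - firstStartedPulling)
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matching the "2024-12-13 05:30:24 +0000 UTC" form used in the log.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-12-13 05:30:24 +0000 UTC")
	running := mustParse("2024-12-13 05:30:49.282572918 +0000 UTC")
	pullStart := mustParse("2024-12-13 05:30:26.845858556 +0000 UTC")
	pullEnd := mustParse("2024-12-13 05:30:35.193086115 +0000 UTC")

	e2e := running.Sub(created)         // ~25.28s, the logged podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // ~16.94s, close to the logged podStartSLOduration
	fmt.Println("E2E:", e2e, "SLO:", slo)
}
```

For the two CoreDNS pods no image pull was recorded (the pull timestamps are the zero value 0001-01-01 00:00:00 +0000 UTC), which matches their SLO and E2E durations being identical in the entries above.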
Dec 13 05:31:19.590399 sshd[3722]: Accepted publickey for core from 147.75.109.163 port 52494 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:31:19.592853 sshd[3722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:31:19.602365 systemd-logind[1489]: New session 10 of user core. Dec 13 05:31:19.615290 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 05:31:20.333673 sshd[3722]: pam_unix(sshd:session): session closed for user core Dec 13 05:31:20.338347 systemd-logind[1489]: Session 10 logged out. Waiting for processes to exit. Dec 13 05:31:20.338945 systemd[1]: sshd@7-10.244.18.182:22-147.75.109.163:52494.service: Deactivated successfully. Dec 13 05:31:20.341829 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 05:31:20.345230 systemd-logind[1489]: Removed session 10. Dec 13 05:31:25.495120 systemd[1]: Started sshd@8-10.244.18.182:22-147.75.109.163:52508.service - OpenSSH per-connection server daemon (147.75.109.163:52508). Dec 13 05:31:26.389116 sshd[3758]: Accepted publickey for core from 147.75.109.163 port 52508 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:31:26.390036 sshd[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:31:26.397314 systemd-logind[1489]: New session 11 of user core. Dec 13 05:31:26.404920 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 05:31:27.102113 sshd[3758]: pam_unix(sshd:session): session closed for user core Dec 13 05:31:27.107684 systemd[1]: sshd@8-10.244.18.182:22-147.75.109.163:52508.service: Deactivated successfully. Dec 13 05:31:27.110618 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 05:31:27.111737 systemd-logind[1489]: Session 11 logged out. Waiting for processes to exit. Dec 13 05:31:27.113794 systemd-logind[1489]: Removed session 11. Dec 13 05:31:32.262370 systemd[1]: Started sshd@9-10.244.18.182:22-147.75.109.163:50366.service - OpenSSH per-connection server daemon (147.75.109.163:50366). Dec 13 05:31:33.153668 sshd[3795]: Accepted publickey for core from 147.75.109.163 port 50366 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:31:33.155406 sshd[3795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:31:33.162354 systemd-logind[1489]: New session 12 of user core. Dec 13 05:31:33.168892 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 05:31:33.856043 sshd[3795]: pam_unix(sshd:session): session closed for user core Dec 13 05:31:33.861932 systemd[1]: sshd@9-10.244.18.182:22-147.75.109.163:50366.service: Deactivated successfully. Dec 13 05:31:33.864255 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 05:31:33.865553 systemd-logind[1489]: Session 12 logged out. Waiting for processes to exit. Dec 13 05:31:33.867885 systemd-logind[1489]: Removed session 12. Dec 13 05:31:34.020533 systemd[1]: Started sshd@10-10.244.18.182:22-147.75.109.163:50368.service - OpenSSH per-connection server daemon (147.75.109.163:50368). Dec 13 05:31:34.907411 sshd[3830]: Accepted publickey for core from 147.75.109.163 port 50368 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:31:34.909587 sshd[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:31:34.917924 systemd-logind[1489]: New session 13 of user core. Dec 13 05:31:34.923839 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 13 05:31:35.687282 sshd[3830]: pam_unix(sshd:session): session closed for user core Dec 13 05:31:35.691761 systemd-logind[1489]: Session 13 logged out. Waiting for processes to exit. Dec 13 05:31:35.694507 systemd[1]: sshd@10-10.244.18.182:22-147.75.109.163:50368.service: Deactivated successfully. Dec 13 05:31:35.697886 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 05:31:35.699717 systemd-logind[1489]: Removed session 13. Dec 13 05:31:35.846094 systemd[1]: Started sshd@11-10.244.18.182:22-147.75.109.163:50382.service - OpenSSH per-connection server daemon (147.75.109.163:50382). Dec 13 05:31:36.726455 sshd[3841]: Accepted publickey for core from 147.75.109.163 port 50382 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:31:36.728558 sshd[3841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:31:36.735392 systemd-logind[1489]: New session 14 of user core. Dec 13 05:31:36.748909 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 05:31:37.431019 sshd[3841]: pam_unix(sshd:session): session closed for user core Dec 13 05:31:37.439068 systemd[1]: sshd@11-10.244.18.182:22-147.75.109.163:50382.service: Deactivated successfully. Dec 13 05:31:37.441490 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 05:31:37.443595 systemd-logind[1489]: Session 14 logged out. Waiting for processes to exit. Dec 13 05:31:37.445259 systemd-logind[1489]: Removed session 14. Dec 13 05:31:42.594139 systemd[1]: Started sshd@12-10.244.18.182:22-147.75.109.163:59752.service - OpenSSH per-connection server daemon (147.75.109.163:59752). Dec 13 05:31:43.483656 sshd[3874]: Accepted publickey for core from 147.75.109.163 port 59752 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:31:43.485971 sshd[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:31:43.494308 systemd-logind[1489]: New session 15 of user core. Dec 13 05:31:43.505018 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 05:31:44.190189 sshd[3874]: pam_unix(sshd:session): session closed for user core Dec 13 05:31:44.195444 systemd[1]: sshd@12-10.244.18.182:22-147.75.109.163:59752.service: Deactivated successfully. Dec 13 05:31:44.196046 systemd-logind[1489]: Session 15 logged out. Waiting for processes to exit. Dec 13 05:31:44.199597 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 05:31:44.202560 systemd-logind[1489]: Removed session 15. Dec 13 05:31:44.349066 systemd[1]: Started sshd@13-10.244.18.182:22-147.75.109.163:59764.service - OpenSSH per-connection server daemon (147.75.109.163:59764). Dec 13 05:31:45.238293 sshd[3908]: Accepted publickey for core from 147.75.109.163 port 59764 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:31:45.240800 sshd[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:31:45.248301 systemd-logind[1489]: New session 16 of user core. Dec 13 05:31:45.255447 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 05:31:46.236864 sshd[3908]: pam_unix(sshd:session): session closed for user core Dec 13 05:31:46.243103 systemd[1]: sshd@13-10.244.18.182:22-147.75.109.163:59764.service: Deactivated successfully. Dec 13 05:31:46.245610 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 05:31:46.247128 systemd-logind[1489]: Session 16 logged out. Waiting for processes to exit. Dec 13 05:31:46.248647 systemd-logind[1489]: Removed session 16. 
Dec 13 05:31:46.399035 systemd[1]: Started sshd@14-10.244.18.182:22-147.75.109.163:59998.service - OpenSSH per-connection server daemon (147.75.109.163:59998). Dec 13 05:31:47.287747 sshd[3919]: Accepted publickey for core from 147.75.109.163 port 59998 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:31:47.292004 sshd[3919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:31:47.298773 systemd-logind[1489]: New session 17 of user core. Dec 13 05:31:47.308951 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 05:31:49.601472 sshd[3919]: pam_unix(sshd:session): session closed for user core Dec 13 05:31:49.606751 systemd[1]: sshd@14-10.244.18.182:22-147.75.109.163:59998.service: Deactivated successfully. Dec 13 05:31:49.609593 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 05:31:49.610972 systemd-logind[1489]: Session 17 logged out. Waiting for processes to exit. Dec 13 05:31:49.612978 systemd-logind[1489]: Removed session 17. Dec 13 05:31:49.758100 systemd[1]: Started sshd@15-10.244.18.182:22-147.75.109.163:60012.service - OpenSSH per-connection server daemon (147.75.109.163:60012). Dec 13 05:31:50.654356 sshd[3958]: Accepted publickey for core from 147.75.109.163 port 60012 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:31:50.656699 sshd[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:31:50.663761 systemd-logind[1489]: New session 18 of user core. Dec 13 05:31:50.668880 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 05:31:51.524695 sshd[3958]: pam_unix(sshd:session): session closed for user core Dec 13 05:31:51.530301 systemd[1]: sshd@15-10.244.18.182:22-147.75.109.163:60012.service: Deactivated successfully. Dec 13 05:31:51.533048 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 05:31:51.534568 systemd-logind[1489]: Session 18 logged out. Waiting for processes to exit. Dec 13 05:31:51.536345 systemd-logind[1489]: Removed session 18. Dec 13 05:31:51.690066 systemd[1]: Started sshd@16-10.244.18.182:22-147.75.109.163:60014.service - OpenSSH per-connection server daemon (147.75.109.163:60014). Dec 13 05:31:52.579363 sshd[3969]: Accepted publickey for core from 147.75.109.163 port 60014 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:31:52.580425 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:31:52.593196 systemd-logind[1489]: New session 19 of user core. Dec 13 05:31:52.600920 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 05:31:53.280955 sshd[3969]: pam_unix(sshd:session): session closed for user core Dec 13 05:31:53.286281 systemd[1]: sshd@16-10.244.18.182:22-147.75.109.163:60014.service: Deactivated successfully. Dec 13 05:31:53.288837 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 05:31:53.289904 systemd-logind[1489]: Session 19 logged out. Waiting for processes to exit. Dec 13 05:31:53.291768 systemd-logind[1489]: Removed session 19. Dec 13 05:31:58.439845 systemd[1]: Started sshd@17-10.244.18.182:22-147.75.109.163:43944.service - OpenSSH per-connection server daemon (147.75.109.163:43944). 
Dec 13 05:31:59.337709 sshd[4029]: Accepted publickey for core from 147.75.109.163 port 43944 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:31:59.338742 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:31:59.345856 systemd-logind[1489]: New session 20 of user core. Dec 13 05:31:59.358898 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 05:32:00.043086 sshd[4029]: pam_unix(sshd:session): session closed for user core Dec 13 05:32:00.048781 systemd[1]: sshd@17-10.244.18.182:22-147.75.109.163:43944.service: Deactivated successfully. Dec 13 05:32:00.052083 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 05:32:00.053705 systemd-logind[1489]: Session 20 logged out. Waiting for processes to exit. Dec 13 05:32:00.055708 systemd-logind[1489]: Removed session 20. Dec 13 05:32:05.199581 systemd[1]: Started sshd@18-10.244.18.182:22-147.75.109.163:43954.service - OpenSSH per-connection server daemon (147.75.109.163:43954). Dec 13 05:32:06.096430 sshd[4062]: Accepted publickey for core from 147.75.109.163 port 43954 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:32:06.098657 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:32:06.106752 systemd-logind[1489]: New session 21 of user core. Dec 13 05:32:06.113894 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 05:32:06.807612 sshd[4062]: pam_unix(sshd:session): session closed for user core Dec 13 05:32:06.813748 systemd-logind[1489]: Session 21 logged out. Waiting for processes to exit. Dec 13 05:32:06.815288 systemd[1]: sshd@18-10.244.18.182:22-147.75.109.163:43954.service: Deactivated successfully. Dec 13 05:32:06.818876 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 05:32:06.821360 systemd-logind[1489]: Removed session 21. Dec 13 05:32:11.966459 systemd[1]: Started sshd@19-10.244.18.182:22-147.75.109.163:60288.service - OpenSSH per-connection server daemon (147.75.109.163:60288). Dec 13 05:32:12.858732 sshd[4096]: Accepted publickey for core from 147.75.109.163 port 60288 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:32:12.860862 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:32:12.869326 systemd-logind[1489]: New session 22 of user core. Dec 13 05:32:12.873825 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 05:32:13.556288 sshd[4096]: pam_unix(sshd:session): session closed for user core Dec 13 05:32:13.562366 systemd-logind[1489]: Session 22 logged out. Waiting for processes to exit. Dec 13 05:32:13.562707 systemd[1]: sshd@19-10.244.18.182:22-147.75.109.163:60288.service: Deactivated successfully. Dec 13 05:32:13.565439 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 05:32:13.567154 systemd-logind[1489]: Removed session 22.