Dec 13 04:51:08.081525 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 04:51:08.081562 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 04:51:08.081577 kernel: BIOS-provided physical RAM map:
Dec 13 04:51:08.081593 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 04:51:08.081603 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 04:51:08.081613 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 04:51:08.081625 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 04:51:08.081636 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 04:51:08.081659 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 04:51:08.081671 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 04:51:08.081682 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 04:51:08.081692 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 04:51:08.081709 kernel: NX (Execute Disable) protection: active
Dec 13 04:51:08.081720 kernel: APIC: Static calls initialized
Dec 13 04:51:08.081732 kernel: SMBIOS 2.8 present.
Dec 13 04:51:08.081745 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 04:51:08.081756 kernel: Hypervisor detected: KVM
Dec 13 04:51:08.081772 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 04:51:08.081784 kernel: kvm-clock: using sched offset of 4354345856 cycles
Dec 13 04:51:08.081796 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 04:51:08.081808 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 04:51:08.081820 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 04:51:08.081832 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 04:51:08.081844 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 04:51:08.081855 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 04:51:08.081867 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 04:51:08.081883 kernel: Using GB pages for direct mapping
Dec 13 04:51:08.081895 kernel: ACPI: Early table checksum verification disabled
Dec 13 04:51:08.081906 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 04:51:08.081918 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:51:08.081929 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:51:08.081941 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:51:08.081952 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 04:51:08.081964 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:51:08.081975 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:51:08.081991 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:51:08.082003 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:51:08.082015 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 04:51:08.082026 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 04:51:08.082038 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 04:51:08.082055 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 04:51:08.082067 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 04:51:08.082084 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 04:51:08.082096 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 04:51:08.082108 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 04:51:08.082120 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 04:51:08.082132 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 04:51:08.082144 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 04:51:08.082156 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 04:51:08.082172 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 04:51:08.082184 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 04:51:08.082196 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 04:51:08.082208 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 04:51:08.082220 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 04:51:08.082251 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 04:51:08.082264 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 04:51:08.082276 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 04:51:08.082288 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 04:51:08.082300 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 04:51:08.082318 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 04:51:08.082331 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 04:51:08.082343 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 04:51:08.082355 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 04:51:08.082367 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 04:51:08.082380 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 04:51:08.082392 kernel: Zone ranges:
Dec 13 04:51:08.082404 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 04:51:08.082416 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 04:51:08.082433 kernel: Normal empty
Dec 13 04:51:08.082445 kernel: Movable zone start for each node
Dec 13 04:51:08.082457 kernel: Early memory node ranges
Dec 13 04:51:08.082469 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 04:51:08.082481 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 04:51:08.082494 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 04:51:08.082506 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 04:51:08.082518 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 04:51:08.082531 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 04:51:08.082543 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 04:51:08.082560 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 04:51:08.082572 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 04:51:08.082584 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 04:51:08.082596 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 04:51:08.082608 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 04:51:08.082620 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 04:51:08.082632 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 04:51:08.082654 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 04:51:08.082667 kernel: TSC deadline timer available
Dec 13 04:51:08.082684 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 04:51:08.082697 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 04:51:08.082709 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 04:51:08.082721 kernel: Booting paravirtualized kernel on KVM
Dec 13 04:51:08.082733 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 04:51:08.082746 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Dec 13 04:51:08.082758 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Dec 13 04:51:08.082770 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Dec 13 04:51:08.082782 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 04:51:08.082799 kernel: kvm-guest: PV spinlocks enabled
Dec 13 04:51:08.082811 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 04:51:08.082824 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 04:51:08.082837 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 04:51:08.082849 kernel: random: crng init done
Dec 13 04:51:08.082861 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 04:51:08.082873 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 04:51:08.082885 kernel: Fallback order for Node 0: 0
Dec 13 04:51:08.082902 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 04:51:08.082915 kernel: Policy zone: DMA32
Dec 13 04:51:08.082927 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 04:51:08.082939 kernel: software IO TLB: area num 16.
Dec 13 04:51:08.082951 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 194828K reserved, 0K cma-reserved)
Dec 13 04:51:08.082963 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 04:51:08.082975 kernel: Kernel/User page tables isolation: enabled
Dec 13 04:51:08.082987 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 04:51:08.082999 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 04:51:08.083016 kernel: Dynamic Preempt: voluntary
Dec 13 04:51:08.083028 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 04:51:08.083041 kernel: rcu: RCU event tracing is enabled.
Dec 13 04:51:08.083054 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 04:51:08.083066 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 04:51:08.083091 kernel: Rude variant of Tasks RCU enabled.
Dec 13 04:51:08.083108 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 04:51:08.083120 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 04:51:08.083133 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 04:51:08.083146 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 04:51:08.083158 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 04:51:08.083171 kernel: Console: colour VGA+ 80x25
Dec 13 04:51:08.083188 kernel: printk: console [tty0] enabled
Dec 13 04:51:08.083201 kernel: printk: console [ttyS0] enabled
Dec 13 04:51:08.083214 kernel: ACPI: Core revision 20230628
Dec 13 04:51:08.085252 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 04:51:08.085277 kernel: x2apic enabled
Dec 13 04:51:08.085298 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 04:51:08.085312 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 04:51:08.085325 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 04:51:08.085338 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 04:51:08.085351 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 04:51:08.085364 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 04:51:08.085377 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 04:51:08.085389 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 04:51:08.085402 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 04:51:08.085419 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 04:51:08.085432 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 04:51:08.085445 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 04:51:08.085458 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 04:51:08.085470 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 04:51:08.085483 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 04:51:08.085495 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 04:51:08.085508 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 04:51:08.085521 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 04:51:08.085533 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 04:51:08.085546 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 04:51:08.085563 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 04:51:08.085576 kernel: Freeing SMP alternatives memory: 32K
Dec 13 04:51:08.085589 kernel: pid_max: default: 32768 minimum: 301
Dec 13 04:51:08.085602 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 04:51:08.085614 kernel: landlock: Up and running.
Dec 13 04:51:08.085627 kernel: SELinux: Initializing.
Dec 13 04:51:08.085640 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 04:51:08.085666 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 04:51:08.085679 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 04:51:08.085692 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 04:51:08.085705 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 04:51:08.085723 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 04:51:08.085737 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 04:51:08.085750 kernel: signal: max sigframe size: 1776
Dec 13 04:51:08.085763 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 04:51:08.085776 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 04:51:08.085789 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 04:51:08.085802 kernel: smp: Bringing up secondary CPUs ...
Dec 13 04:51:08.085815 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 04:51:08.085828 kernel: .... node #0, CPUs: #1
Dec 13 04:51:08.085846 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 04:51:08.085859 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 04:51:08.085871 kernel: smpboot: Max logical packages: 16
Dec 13 04:51:08.085884 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 04:51:08.085897 kernel: devtmpfs: initialized
Dec 13 04:51:08.085910 kernel: x86/mm: Memory block size: 128MB
Dec 13 04:51:08.085923 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 04:51:08.085936 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 04:51:08.085948 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 04:51:08.085966 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 04:51:08.085979 kernel: audit: initializing netlink subsys (disabled)
Dec 13 04:51:08.085992 kernel: audit: type=2000 audit(1734065466.329:1): state=initialized audit_enabled=0 res=1
Dec 13 04:51:08.086004 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 04:51:08.086017 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 04:51:08.086030 kernel: cpuidle: using governor menu
Dec 13 04:51:08.086043 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 04:51:08.086056 kernel: dca service started, version 1.12.1
Dec 13 04:51:08.086069 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 04:51:08.086086 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 04:51:08.086100 kernel: PCI: Using configuration type 1 for base access
Dec 13 04:51:08.086113 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 04:51:08.086125 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 04:51:08.086138 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 04:51:08.086151 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 04:51:08.086164 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 04:51:08.086177 kernel: ACPI: Added _OSI(Module Device)
Dec 13 04:51:08.086189 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 04:51:08.086207 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 04:51:08.086220 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 04:51:08.086247 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 04:51:08.086261 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 04:51:08.086274 kernel: ACPI: Interpreter enabled
Dec 13 04:51:08.086287 kernel: ACPI: PM: (supports S0 S5)
Dec 13 04:51:08.086300 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 04:51:08.086313 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 04:51:08.086326 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 04:51:08.086345 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 04:51:08.086357 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 04:51:08.086615 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 04:51:08.086812 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 04:51:08.086978 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 04:51:08.086997 kernel: PCI host bridge to bus 0000:00
Dec 13 04:51:08.087184 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 04:51:08.091680 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 04:51:08.091849 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 04:51:08.092001 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 04:51:08.092148 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 04:51:08.092316 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 04:51:08.092466 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 04:51:08.092666 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 04:51:08.092861 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 04:51:08.093028 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 04:51:08.093195 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 04:51:08.093387 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 04:51:08.093553 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 04:51:08.093747 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 04:51:08.093922 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 04:51:08.094103 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 04:51:08.094286 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 04:51:08.094465 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 04:51:08.094632 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 04:51:08.094824 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 04:51:08.095000 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 04:51:08.095190 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 04:51:08.095376 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 04:51:08.095555 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 04:51:08.095736 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 04:51:08.095913 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 04:51:08.096089 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 04:51:08.096288 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 04:51:08.096456 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 04:51:08.096636 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 04:51:08.096815 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 04:51:08.096980 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 04:51:08.097143 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 04:51:08.097344 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 04:51:08.097518 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 04:51:08.097693 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 04:51:08.097852 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 04:51:08.098018 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 04:51:08.098218 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 04:51:08.098409 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 04:51:08.098595 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 04:51:08.098773 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 04:51:08.098939 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 04:51:08.099117 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 04:51:08.099394 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 04:51:08.099628 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 04:51:08.099831 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 04:51:08.099997 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 04:51:08.100158 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 04:51:08.100337 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 04:51:08.100517 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 04:51:08.100718 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 04:51:08.100903 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 04:51:08.101088 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 04:51:08.101293 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 04:51:08.101474 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 04:51:08.101651 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 04:51:08.101822 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 04:51:08.101985 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 04:51:08.102157 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 04:51:08.102426 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 04:51:08.102595 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 04:51:08.102772 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 04:51:08.102932 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 04:51:08.103092 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 04:51:08.103272 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 04:51:08.103434 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 04:51:08.103604 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 04:51:08.103784 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 04:51:08.103945 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 04:51:08.104104 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 04:51:08.104338 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 04:51:08.104501 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 04:51:08.104671 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 04:51:08.104854 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 04:51:08.105113 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 04:51:08.105313 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 04:51:08.105482 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 04:51:08.105642 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 04:51:08.105903 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 04:51:08.105924 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 04:51:08.105938 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 04:51:08.105951 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 04:51:08.105980 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 04:51:08.105993 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 04:51:08.106007 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 04:51:08.106020 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 04:51:08.106034 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 04:51:08.106047 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 04:51:08.106060 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 04:51:08.106073 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 04:51:08.106086 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 04:51:08.106104 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 04:51:08.106118 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 04:51:08.106130 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 04:51:08.106144 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 04:51:08.106156 kernel: iommu: Default domain type: Translated
Dec 13 04:51:08.106170 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 04:51:08.106183 kernel: PCI: Using ACPI for IRQ routing
Dec 13 04:51:08.106196 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 04:51:08.106209 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 04:51:08.106279 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 04:51:08.106447 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 04:51:08.106606 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 04:51:08.106777 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 04:51:08.106797 kernel: vgaarb: loaded
Dec 13 04:51:08.106810 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 04:51:08.106824 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 04:51:08.106837 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 04:51:08.106858 kernel: pnp: PnP ACPI init
Dec 13 04:51:08.107028 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 04:51:08.107049 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 04:51:08.107063 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 04:51:08.107076 kernel: NET: Registered PF_INET protocol family
Dec 13 04:51:08.107089 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 04:51:08.107103 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 04:51:08.107116 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 04:51:08.107129 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 04:51:08.107149 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 04:51:08.107162 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 04:51:08.107176 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 04:51:08.107189 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 04:51:08.107202 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 04:51:08.107215 kernel: NET: Registered PF_XDP protocol family
Dec 13 04:51:08.107404 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 04:51:08.107569 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 04:51:08.107765 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 04:51:08.107950 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 04:51:08.108113 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 04:51:08.108327 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 04:51:08.108491 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 04:51:08.108664 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 04:51:08.108836 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 04:51:08.108998 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 04:51:08.109158 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 04:51:08.109336 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 04:51:08.109499 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 04:51:08.109675 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 04:51:08.109839 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 04:51:08.110011 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 04:51:08.110209 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 04:51:08.110470 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 04:51:08.110633 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 04:51:08.110807 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 04:51:08.110967 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 04:51:08.111126 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 04:51:08.111305 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 04:51:08.111466 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 04:51:08.111635 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 04:51:08.111808 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 04:51:08.111971 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 04:51:08.112148 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 04:51:08.112340 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 04:51:08.112524 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 04:51:08.112718 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 04:51:08.112894 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 04:51:08.113071 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 04:51:08.113310 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 04:51:08.113480 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 04:51:08.113666 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 04:51:08.113828 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 04:51:08.113988 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 04:51:08.114150 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 04:51:08.114346 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 04:51:08.114510 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 04:51:08.114685 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 04:51:08.114850 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 04:51:08.115014 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 04:51:08.115184 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 04:51:08.115404 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 04:51:08.115569 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 04:51:08.115744 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 04:51:08.115906 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 04:51:08.116066 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 04:51:08.116222 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 04:51:08.116388 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 04:51:08.116558 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 04:51:08.116756 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 04:51:08.116908 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 04:51:08.117055 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 04:51:08.117300 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 04:51:08.117460 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 04:51:08.117613 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 04:51:08.117818 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 04:51:08.117984 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 13 04:51:08.118136 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 04:51:08.118305 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 04:51:08.118469 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 13 04:51:08.118622 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 04:51:08.118787 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 04:51:08.118959 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 04:51:08.119118 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 04:51:08.119336 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 04:51:08.119511 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 13 04:51:08.119677 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 04:51:08.119831 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 04:51:08.119994 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 13 04:51:08.120155 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 04:51:08.120324 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 04:51:08.120490 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 13 04:51:08.120654 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 04:51:08.120812 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 04:51:08.120979 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 13 04:51:08.121134 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 04:51:08.121361 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 04:51:08.121383 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 04:51:08.121398 kernel: PCI: CLS 0 bytes, default 64
Dec 13 04:51:08.121412 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec
13 04:51:08.121425 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 04:51:08.121440 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 04:51:08.121454 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 04:51:08.121468 kernel: Initialise system trusted keyrings Dec 13 04:51:08.121489 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 04:51:08.121503 kernel: Key type asymmetric registered Dec 13 04:51:08.121517 kernel: Asymmetric key parser 'x509' registered Dec 13 04:51:08.121530 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 04:51:08.121543 kernel: io scheduler mq-deadline registered Dec 13 04:51:08.121557 kernel: io scheduler kyber registered Dec 13 04:51:08.121571 kernel: io scheduler bfq registered Dec 13 04:51:08.121749 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 04:51:08.121915 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 04:51:08.122085 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:51:08.122267 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 04:51:08.122431 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 04:51:08.122591 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:51:08.122770 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 04:51:08.122932 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 04:51:08.123102 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:51:08.123327 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 
04:51:08.123491 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 04:51:08.123662 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:51:08.123828 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 04:51:08.123988 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 04:51:08.124158 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:51:08.124339 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 04:51:08.124503 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 04:51:08.124686 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:51:08.124853 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 04:51:08.125017 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 04:51:08.125189 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:51:08.125416 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 04:51:08.125579 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 04:51:08.125821 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 04:51:08.125846 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 04:51:08.125862 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 04:51:08.125883 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 04:51:08.125897 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 04:51:08.125911 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 04:51:08.125925 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 04:51:08.125939 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 04:51:08.125952 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 04:51:08.126140 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 04:51:08.126162 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 04:51:08.126336 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 04:51:08.126489 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T04:51:07 UTC (1734065467) Dec 13 04:51:08.126641 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 04:51:08.126674 kernel: intel_pstate: CPU model not supported Dec 13 04:51:08.126688 kernel: NET: Registered PF_INET6 protocol family Dec 13 04:51:08.126701 kernel: Segment Routing with IPv6 Dec 13 04:51:08.126715 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 04:51:08.126728 kernel: NET: Registered PF_PACKET protocol family Dec 13 04:51:08.126741 kernel: Key type dns_resolver registered Dec 13 04:51:08.126764 kernel: IPI shorthand broadcast: enabled Dec 13 04:51:08.126777 kernel: sched_clock: Marking stable (1306004197, 240295343)->(1691267962, -144968422) Dec 13 04:51:08.126792 kernel: registered taskstats version 1 Dec 13 04:51:08.126806 kernel: Loading compiled-in X.509 certificates Dec 13 04:51:08.126819 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 04:51:08.126833 kernel: Key type .fscrypt registered Dec 13 04:51:08.126846 kernel: Key type fscrypt-provisioning registered Dec 13 04:51:08.126860 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 04:51:08.126873 kernel: ima: Allocated hash algorithm: sha1
Dec 13 04:51:08.126892 kernel: ima: No architecture policies found
Dec 13 04:51:08.126906 kernel: clk: Disabling unused clocks
Dec 13 04:51:08.126920 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 04:51:08.126933 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 04:51:08.126947 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 04:51:08.126961 kernel: Run /init as init process
Dec 13 04:51:08.126975 kernel: with arguments:
Dec 13 04:51:08.126988 kernel: /init
Dec 13 04:51:08.127002 kernel: with environment:
Dec 13 04:51:08.127020 kernel: HOME=/
Dec 13 04:51:08.127034 kernel: TERM=linux
Dec 13 04:51:08.127047 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 04:51:08.127064 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 04:51:08.127081 systemd[1]: Detected virtualization kvm.
Dec 13 04:51:08.127096 systemd[1]: Detected architecture x86-64.
Dec 13 04:51:08.127110 systemd[1]: Running in initrd.
Dec 13 04:51:08.127129 systemd[1]: No hostname configured, using default hostname.
Dec 13 04:51:08.127143 systemd[1]: Hostname set to .
Dec 13 04:51:08.127158 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 04:51:08.127172 systemd[1]: Queued start job for default target initrd.target.
Dec 13 04:51:08.127186 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 04:51:08.127201 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 04:51:08.127216 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 04:51:08.127290 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 04:51:08.127312 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 04:51:08.127327 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 04:51:08.127344 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 04:51:08.127359 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 04:51:08.127374 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 04:51:08.127389 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 04:51:08.127403 systemd[1]: Reached target paths.target - Path Units.
Dec 13 04:51:08.127422 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 04:51:08.127437 systemd[1]: Reached target swap.target - Swaps.
Dec 13 04:51:08.127452 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 04:51:08.127466 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 04:51:08.127481 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 04:51:08.127495 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 04:51:08.127509 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 04:51:08.127524 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 04:51:08.127538 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 04:51:08.127558 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 04:51:08.127572 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 04:51:08.127587 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 04:51:08.127601 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 04:51:08.127616 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 04:51:08.127630 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 04:51:08.127655 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 04:51:08.127677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 04:51:08.127696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 04:51:08.127711 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 04:51:08.127726 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 04:51:08.127785 systemd-journald[201]: Collecting audit messages is disabled.
Dec 13 04:51:08.127823 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 04:51:08.127840 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 04:51:08.127855 systemd-journald[201]: Journal started
Dec 13 04:51:08.127886 systemd-journald[201]: Runtime Journal (/run/log/journal/81472f40a0db4c78a0d8d5d930c3b68a) is 4.7M, max 38.0M, 33.2M free.
Dec 13 04:51:08.105283 systemd-modules-load[202]: Inserted module 'overlay'
Dec 13 04:51:08.200893 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 04:51:08.200929 kernel: Bridge firewalling registered
Dec 13 04:51:08.160699 systemd-modules-load[202]: Inserted module 'br_netfilter'
Dec 13 04:51:08.208274 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 04:51:08.207876 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 04:51:08.210115 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 04:51:08.211788 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 04:51:08.223536 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 04:51:08.239448 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 04:51:08.249923 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 04:51:08.253562 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 04:51:08.276076 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 04:51:08.278362 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 04:51:08.281429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 04:51:08.283554 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 04:51:08.290513 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 04:51:08.293431 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 04:51:08.315258 dracut-cmdline[235]: dracut-dracut-053
Dec 13 04:51:08.318396 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 04:51:08.354305 systemd-resolved[236]: Positive Trust Anchors:
Dec 13 04:51:08.354327 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 04:51:08.354373 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 04:51:08.359246 systemd-resolved[236]: Defaulting to hostname 'linux'.
Dec 13 04:51:08.360964 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 04:51:08.362119 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 04:51:08.443324 kernel: SCSI subsystem initialized
Dec 13 04:51:08.455282 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 04:51:08.469268 kernel: iscsi: registered transport (tcp)
Dec 13 04:51:08.496425 kernel: iscsi: registered transport (qla4xxx)
Dec 13 04:51:08.496514 kernel: QLogic iSCSI HBA Driver
Dec 13 04:51:08.551099 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 04:51:08.557486 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 04:51:08.599690 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 04:51:08.599779 kernel: device-mapper: uevent: version 1.0.3
Dec 13 04:51:08.601327 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 04:51:08.652292 kernel: raid6: sse2x4 gen() 13792 MB/s
Dec 13 04:51:08.670443 kernel: raid6: sse2x2 gen() 9385 MB/s
Dec 13 04:51:08.688941 kernel: raid6: sse2x1 gen() 9874 MB/s
Dec 13 04:51:08.689035 kernel: raid6: using algorithm sse2x4 gen() 13792 MB/s
Dec 13 04:51:08.708933 kernel: raid6: .... xor() 7515 MB/s, rmw enabled
Dec 13 04:51:08.709092 kernel: raid6: using ssse3x2 recovery algorithm
Dec 13 04:51:08.737342 kernel: xor: automatically using best checksumming function avx
Dec 13 04:51:08.931298 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 04:51:08.947637 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 04:51:08.954558 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 04:51:08.983590 systemd-udevd[419]: Using default interface naming scheme 'v255'.
Dec 13 04:51:08.990469 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 04:51:08.998597 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 04:51:09.026325 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Dec 13 04:51:09.067202 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 04:51:09.074486 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 04:51:09.184810 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 04:51:09.194578 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 04:51:09.214852 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 04:51:09.219259 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 04:51:09.221967 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 04:51:09.223295 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 04:51:09.230424 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 04:51:09.254837 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 04:51:09.294266 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Dec 13 04:51:09.386741 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Dec 13 04:51:09.386963 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 04:51:09.386986 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 04:51:09.387015 kernel: GPT:17805311 != 125829119
Dec 13 04:51:09.387033 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 04:51:09.387051 kernel: GPT:17805311 != 125829119
Dec 13 04:51:09.387068 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 04:51:09.387096 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 04:51:09.387116 kernel: AVX version of gcm_enc/dec engaged.
Dec 13 04:51:09.387133 kernel: libata version 3.00 loaded.
Dec 13 04:51:09.387151 kernel: AES CTR mode by8 optimization enabled
Dec 13 04:51:09.387174 kernel: ACPI: bus type USB registered
Dec 13 04:51:09.372808 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 04:51:09.392746 kernel: usbcore: registered new interface driver usbfs
Dec 13 04:51:09.392773 kernel: usbcore: registered new interface driver hub
Dec 13 04:51:09.372987 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 04:51:09.397541 kernel: usbcore: registered new device driver usb
Dec 13 04:51:09.374016 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 04:51:09.374894 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 04:51:09.375095 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 04:51:09.376339 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 04:51:09.393113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 04:51:09.425485 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (469)
Dec 13 04:51:09.449275 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Dec 13 04:51:09.454979 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Dec 13 04:51:09.455204 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 04:51:09.456471 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Dec 13 04:51:09.456686 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Dec 13 04:51:09.456883 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 04:51:09.457077 kernel: hub 1-0:1.0: USB hub found
Dec 13 04:51:09.460372 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 04:51:09.460593 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 04:51:09.460888 kernel: hub 2-0:1.0: USB hub found
Dec 13 04:51:09.461121 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 04:51:09.465412 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (477)
Dec 13 04:51:09.467828 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 04:51:09.582713 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 04:51:09.583026 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 04:51:09.583062 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 04:51:09.584133 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 04:51:09.584365 kernel: scsi host0: ahci
Dec 13 04:51:09.584585 kernel: scsi host1: ahci
Dec 13 04:51:09.584797 kernel: scsi host2: ahci
Dec 13 04:51:09.584986 kernel: scsi host3: ahci
Dec 13 04:51:09.585177 kernel: scsi host4: ahci
Dec 13 04:51:09.585410 kernel: scsi host5: ahci
Dec 13 04:51:09.585600 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Dec 13 04:51:09.585632 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Dec 13 04:51:09.585652 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Dec 13 04:51:09.585670 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Dec 13 04:51:09.585688 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Dec 13 04:51:09.585705 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Dec 13 04:51:09.589630 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 04:51:09.590926 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 04:51:09.610293 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 04:51:09.616397 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 04:51:09.617270 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 04:51:09.625423 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 04:51:09.630405 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 04:51:09.632720 disk-uuid[557]: Primary Header is updated.
Dec 13 04:51:09.632720 disk-uuid[557]: Secondary Entries is updated.
Dec 13 04:51:09.632720 disk-uuid[557]: Secondary Header is updated.
Dec 13 04:51:09.642385 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 04:51:09.649207 kernel: GPT:disk_guids don't match.
Dec 13 04:51:09.649298 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 04:51:09.649319 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 04:51:09.657253 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 04:51:09.658352 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 04:51:09.699273 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 13 04:51:09.811505 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 04:51:09.811577 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 04:51:09.812289 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 04:51:09.816258 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 04:51:09.819641 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 04:51:09.819703 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 04:51:09.853275 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 04:51:09.871910 kernel: usbcore: registered new interface driver usbhid
Dec 13 04:51:09.871976 kernel: usbhid: USB HID core driver
Dec 13 04:51:09.891032 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Dec 13 04:51:09.891101 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Dec 13 04:51:10.657288 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 04:51:10.658040 disk-uuid[558]: The operation has completed successfully.
Dec 13 04:51:10.707350 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 04:51:10.707543 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 04:51:10.731465 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 04:51:10.741758 sh[583]: Success
Dec 13 04:51:10.761312 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Dec 13 04:51:10.822952 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 04:51:10.844995 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 04:51:10.846110 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 04:51:10.875467 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 04:51:10.875535 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 04:51:10.877614 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 04:51:10.881023 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 04:51:10.881069 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 04:51:10.892484 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 04:51:10.893562 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 04:51:10.898446 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 04:51:10.903574 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 04:51:10.917251 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 04:51:10.917330 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 04:51:10.918589 kernel: BTRFS info (device vda6): using free space tree
Dec 13 04:51:10.926251 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 04:51:10.943686 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 04:51:10.943289 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 04:51:10.952623 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 04:51:10.960458 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 04:51:11.057946 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 04:51:11.069501 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 04:51:11.104378 systemd-networkd[765]: lo: Link UP
Dec 13 04:51:11.105338 systemd-networkd[765]: lo: Gained carrier
Dec 13 04:51:11.107767 systemd-networkd[765]: Enumeration completed
Dec 13 04:51:11.108503 ignition[677]: Ignition 2.19.0
Dec 13 04:51:11.108388 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 04:51:11.108519 ignition[677]: Stage: fetch-offline
Dec 13 04:51:11.108393 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 04:51:11.108639 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Dec 13 04:51:11.110214 systemd-networkd[765]: eth0: Link UP
Dec 13 04:51:11.108666 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 04:51:11.110219 systemd-networkd[765]: eth0: Gained carrier
Dec 13 04:51:11.108880 ignition[677]: parsed url from cmdline: ""
Dec 13 04:51:11.110256 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 04:51:11.108887 ignition[677]: no config URL provided
Dec 13 04:51:11.110353 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 04:51:11.108897 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 04:51:11.113992 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 04:51:11.108913 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Dec 13 04:51:11.116125 systemd[1]: Reached target network.target - Network.
Dec 13 04:51:11.108922 ignition[677]: failed to fetch config: resource requires networking
Dec 13 04:51:11.123503 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 04:51:11.110671 ignition[677]: Ignition finished successfully Dec 13 04:51:11.142644 ignition[772]: Ignition 2.19.0 Dec 13 04:51:11.142665 ignition[772]: Stage: fetch Dec 13 04:51:11.142915 ignition[772]: no configs at "/usr/lib/ignition/base.d" Dec 13 04:51:11.142935 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:51:11.143064 ignition[772]: parsed url from cmdline: "" Dec 13 04:51:11.143071 ignition[772]: no config URL provided Dec 13 04:51:11.143081 ignition[772]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 04:51:11.143096 ignition[772]: no config at "/usr/lib/ignition/user.ign" Dec 13 04:51:11.143308 ignition[772]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 04:51:11.143403 ignition[772]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 04:51:11.143470 ignition[772]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 04:51:11.143656 ignition[772]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 13 04:51:11.170348 systemd-networkd[765]: eth0: DHCPv4 address 10.244.18.230/30, gateway 10.244.18.229 acquired from 10.244.18.229 Dec 13 04:51:11.343898 ignition[772]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Dec 13 04:51:11.359639 ignition[772]: GET result: OK Dec 13 04:51:11.359847 ignition[772]: parsing config with SHA512: 80a026720a98abad769c3659a4def49f232e3495a1055cd5d2875c96b907283cb51563121b54f8ae508deef6fcb1d04aef8d117cc0e41723ae8d4816cf262b67 Dec 13 04:51:11.367052 unknown[772]: fetched base config from "system" Dec 13 04:51:11.367078 unknown[772]: fetched base config from "system" Dec 13 04:51:11.367818 ignition[772]: fetch: fetch complete Dec 13 04:51:11.367088 unknown[772]: fetched user config from "openstack" Dec 13 04:51:11.367827 ignition[772]: fetch: fetch passed Dec 13 04:51:11.370663 systemd[1]: 
Finished ignition-fetch.service - Ignition (fetch). Dec 13 04:51:11.367897 ignition[772]: Ignition finished successfully Dec 13 04:51:11.377527 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 04:51:11.400374 ignition[780]: Ignition 2.19.0 Dec 13 04:51:11.400396 ignition[780]: Stage: kargs Dec 13 04:51:11.400683 ignition[780]: no configs at "/usr/lib/ignition/base.d" Dec 13 04:51:11.400703 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:51:11.403909 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 04:51:11.402389 ignition[780]: kargs: kargs passed Dec 13 04:51:11.402466 ignition[780]: Ignition finished successfully Dec 13 04:51:11.410491 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 04:51:11.443102 ignition[787]: Ignition 2.19.0 Dec 13 04:51:11.443121 ignition[787]: Stage: disks Dec 13 04:51:11.443408 ignition[787]: no configs at "/usr/lib/ignition/base.d" Dec 13 04:51:11.446920 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 04:51:11.443429 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:51:11.449147 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 04:51:11.444708 ignition[787]: disks: disks passed Dec 13 04:51:11.451127 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 04:51:11.444787 ignition[787]: Ignition finished successfully Dec 13 04:51:11.452823 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 04:51:11.454126 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 04:51:11.455708 systemd[1]: Reached target basic.target - Basic System. Dec 13 04:51:11.465555 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 13 04:51:11.482868 systemd-fsck[795]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 04:51:11.490542 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 04:51:11.496361 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 04:51:11.630252 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 04:51:11.631494 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 04:51:11.632880 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 04:51:11.646372 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 04:51:11.649454 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 04:51:11.651019 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 04:51:11.657418 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 13 04:51:11.667639 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803)
Dec 13 04:51:11.667673 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 04:51:11.667693 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 04:51:11.667711 kernel: BTRFS info (device vda6): using free space tree
Dec 13 04:51:11.668446 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 04:51:11.668500 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 04:51:11.676129 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 04:51:11.675532 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 04:51:11.681458 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 04:51:11.685902 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 04:51:11.766634 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 04:51:11.774548 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Dec 13 04:51:11.783658 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 04:51:11.791967 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 04:51:11.897532 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 04:51:11.904372 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 04:51:11.907449 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 04:51:11.917985 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 04:51:11.921330 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 04:51:11.950692 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 04:51:11.952849 ignition[922]: INFO : Ignition 2.19.0
Dec 13 04:51:11.952849 ignition[922]: INFO : Stage: mount
Dec 13 04:51:11.952849 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 04:51:11.952849 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 04:51:11.957930 ignition[922]: INFO : mount: mount passed
Dec 13 04:51:11.957930 ignition[922]: INFO : Ignition finished successfully
Dec 13 04:51:11.956264 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 04:51:12.714536 systemd-networkd[765]: eth0: Gained IPv6LL
Dec 13 04:51:14.224439 systemd-networkd[765]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:4b9:24:19ff:fef4:12e6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:4b9:24:19ff:fef4:12e6/64 assigned by NDisc.
Dec 13 04:51:14.224455 systemd-networkd[765]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 04:51:18.830337 coreos-metadata[805]: Dec 13 04:51:18.830 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 04:51:18.855138 coreos-metadata[805]: Dec 13 04:51:18.855 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 04:51:18.868293 coreos-metadata[805]: Dec 13 04:51:18.868 INFO Fetch successful
Dec 13 04:51:18.869253 coreos-metadata[805]: Dec 13 04:51:18.868 INFO wrote hostname srv-wy7pj.gb1.brightbox.com to /sysroot/etc/hostname
Dec 13 04:51:18.871515 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 04:51:18.871707 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 13 04:51:18.879416 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 04:51:18.895485 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 04:51:18.908259 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938)
Dec 13 04:51:18.915269 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 04:51:18.915332 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 04:51:18.915354 kernel: BTRFS info (device vda6): using free space tree
Dec 13 04:51:18.921377 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 04:51:18.923986 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 04:51:18.953003 ignition[955]: INFO : Ignition 2.19.0
Dec 13 04:51:18.955245 ignition[955]: INFO : Stage: files
Dec 13 04:51:18.955245 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 04:51:18.955245 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 04:51:18.957828 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 04:51:18.959989 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 04:51:18.959989 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 04:51:18.963735 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 04:51:18.964967 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 04:51:18.966529 unknown[955]: wrote ssh authorized keys file for user: core
Dec 13 04:51:18.967597 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 04:51:18.969103 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 04:51:18.970354 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 04:51:18.970354 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 04:51:18.970354 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 04:51:19.145639 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 04:51:19.434849 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 04:51:19.436547 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 04:51:19.436547 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 04:51:20.002392 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Dec 13 04:51:20.663278 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 04:51:20.668628 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 04:51:20.668628 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 04:51:20.668628 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 04:51:20.674293 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 04:51:20.676486 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 04:51:20.676486 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 04:51:20.676486 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 04:51:20.676486 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 04:51:20.676486 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 04:51:20.682275 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 04:51:20.682275 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 04:51:20.682275 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 04:51:20.682275 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 04:51:20.682275 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 04:51:21.155442 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Dec 13 04:51:24.615319 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 04:51:24.615319 ignition[955]: INFO : files: op(d): [started] processing unit "containerd.service"
Dec 13 04:51:24.619167 ignition[955]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 04:51:24.619167 ignition[955]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 04:51:24.619167 ignition[955]: INFO : files: op(d): [finished] processing unit "containerd.service"
Dec 13 04:51:24.619167 ignition[955]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Dec 13 04:51:24.619167 ignition[955]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 04:51:24.619167 ignition[955]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 04:51:24.619167 ignition[955]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Dec 13 04:51:24.619167 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 04:51:24.619167 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 04:51:24.634329 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 04:51:24.634329 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 04:51:24.634329 ignition[955]: INFO : files: files passed
Dec 13 04:51:24.634329 ignition[955]: INFO : Ignition finished successfully
Dec 13 04:51:24.622896 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 04:51:24.631511 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 04:51:24.644515 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 04:51:24.654706 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 04:51:24.654885 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 04:51:24.667537 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 04:51:24.667537 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 04:51:24.671370 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 04:51:24.671705 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 04:51:24.674174 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 04:51:24.681475 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 04:51:24.715871 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 04:51:24.716064 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 04:51:24.717948 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 04:51:24.719298 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 04:51:24.721051 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 04:51:24.739642 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 04:51:24.757070 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 04:51:24.768574 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 04:51:24.785636 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 04:51:24.786665 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 04:51:24.788430 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 04:51:24.790074 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 04:51:24.790295 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 04:51:24.791717 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 04:51:24.792705 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 04:51:24.794099 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 04:51:24.795859 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 04:51:24.797602 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 04:51:24.799106 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 04:51:24.800710 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 04:51:24.802390 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 04:51:24.803957 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 04:51:24.805488 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 04:51:24.806945 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 04:51:24.807168 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 04:51:24.809313 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 04:51:24.810496 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 04:51:24.811913 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 04:51:24.812095 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 04:51:24.813521 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 04:51:24.813725 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 04:51:24.815157 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 04:51:24.815346 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 04:51:24.816334 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 04:51:24.816589 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 04:51:24.828346 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 04:51:24.833076 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 04:51:24.835443 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 04:51:24.845705 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 04:51:24.847945 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 04:51:24.849314 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 04:51:24.852959 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 04:51:24.853219 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 04:51:24.864203 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 04:51:24.864399 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 04:51:24.868319 ignition[1009]: INFO : Ignition 2.19.0
Dec 13 04:51:24.868319 ignition[1009]: INFO : Stage: umount
Dec 13 04:51:24.868319 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 04:51:24.868319 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 04:51:24.875906 ignition[1009]: INFO : umount: umount passed
Dec 13 04:51:24.875906 ignition[1009]: INFO : Ignition finished successfully
Dec 13 04:51:24.869872 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 04:51:24.871161 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 04:51:24.873773 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 04:51:24.873893 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 04:51:24.875426 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 04:51:24.875516 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 04:51:24.877379 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 04:51:24.877489 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 04:51:24.878198 systemd[1]: Stopped target network.target - Network.
Dec 13 04:51:24.880351 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 04:51:24.880474 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 04:51:24.882496 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 04:51:24.884279 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 04:51:24.889801 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 04:51:24.890926 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 04:51:24.891606 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 04:51:24.893345 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 04:51:24.893444 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 04:51:24.896946 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 04:51:24.897027 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 04:51:24.897753 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 04:51:24.897841 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 04:51:24.900653 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 04:51:24.900751 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 04:51:24.902119 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 04:51:24.903012 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 04:51:24.906895 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 04:51:24.907762 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 04:51:24.907916 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 04:51:24.911043 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 04:51:24.911165 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 04:51:24.912530 systemd-networkd[765]: eth0: DHCPv6 lease lost
Dec 13 04:51:24.918526 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 04:51:24.918757 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 04:51:24.922796 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 04:51:24.923021 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 04:51:24.928451 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 04:51:24.928575 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 04:51:24.949749 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 04:51:24.950574 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 04:51:24.950674 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 04:51:24.953222 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 04:51:24.953329 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 04:51:24.957730 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 04:51:24.957833 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 04:51:24.959027 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 04:51:24.959113 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 04:51:24.963137 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 04:51:24.976869 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 04:51:24.977142 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 04:51:24.984818 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 04:51:24.984999 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 04:51:24.987125 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 04:51:24.987519 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 04:51:24.988353 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 04:51:24.988439 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 04:51:24.989177 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 04:51:24.989296 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 04:51:24.990484 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 04:51:24.990554 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 04:51:24.991526 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 04:51:24.991631 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 04:51:25.012668 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 04:51:25.013477 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 04:51:25.013569 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 04:51:25.014424 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 04:51:25.014507 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 04:51:25.033552 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 04:51:25.033861 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 04:51:25.043727 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 04:51:25.059275 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 04:51:25.071274 systemd[1]: Switching root.
Dec 13 04:51:25.099616 systemd-journald[201]: Journal stopped
Dec 13 04:51:26.635135 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Dec 13 04:51:26.635291 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 04:51:26.635330 kernel: SELinux: policy capability open_perms=1
Dec 13 04:51:26.635377 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 04:51:26.635398 kernel: SELinux: policy capability always_check_network=0
Dec 13 04:51:26.635416 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 04:51:26.635453 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 04:51:26.635474 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 04:51:26.635492 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 04:51:26.635518 kernel: audit: type=1403 audit(1734065485.413:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 04:51:26.635566 systemd[1]: Successfully loaded SELinux policy in 51.382ms.
Dec 13 04:51:26.635608 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.797ms.
Dec 13 04:51:26.635633 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 04:51:26.635655 systemd[1]: Detected virtualization kvm.
Dec 13 04:51:26.635689 systemd[1]: Detected architecture x86-64.
Dec 13 04:51:26.635717 systemd[1]: Detected first boot.
Dec 13 04:51:26.635738 systemd[1]: Hostname set to .
Dec 13 04:51:26.635766 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 04:51:26.635787 zram_generator::config[1073]: No configuration found.
Dec 13 04:51:26.635820 systemd[1]: Populated /etc with preset unit settings.
Dec 13 04:51:26.635843 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 04:51:26.635864 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 04:51:26.635902 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 04:51:26.635926 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 04:51:26.635947 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 04:51:26.635974 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 04:51:26.635996 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 04:51:26.636017 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 04:51:26.636046 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 04:51:26.636075 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 04:51:26.636097 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 04:51:26.636138 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 04:51:26.636160 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 04:51:26.636182 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 04:51:26.636203 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 04:51:26.636224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 04:51:26.636633 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 04:51:26.636658 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 04:51:26.636710 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 04:51:26.636733 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 04:51:26.636754 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 04:51:26.636774 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 04:51:26.636802 systemd[1]: Reached target swap.target - Swaps.
Dec 13 04:51:26.636845 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 04:51:26.636867 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 04:51:26.636888 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 04:51:26.636909 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 04:51:26.636936 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 04:51:26.636958 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 04:51:26.636979 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 04:51:26.636999 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 04:51:26.637020 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 04:51:26.637040 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 04:51:26.637081 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 04:51:26.637105 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:51:26.637126 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 04:51:26.637154 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 04:51:26.637185 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 04:51:26.637206 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 04:51:26.637248 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 04:51:26.637273 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 04:51:26.637308 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 04:51:26.637331 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 04:51:26.637371 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 04:51:26.637394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 04:51:26.637414 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 04:51:26.637435 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 04:51:26.637469 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 04:51:26.637492 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 04:51:26.637532 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 04:51:26.637554 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 04:51:26.637575 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 04:51:26.637596 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 04:51:26.637617 kernel: ACPI: bus type drm_connector registered
Dec 13 04:51:26.637648 kernel: fuse: init (API version 7.39)
Dec 13 04:51:26.637675 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 04:51:26.637741 systemd-journald[1180]: Collecting audit messages is disabled.
Dec 13 04:51:26.637795 kernel: loop: module loaded
Dec 13 04:51:26.637819 systemd-journald[1180]: Journal started
Dec 13 04:51:26.637851 systemd-journald[1180]: Runtime Journal (/run/log/journal/81472f40a0db4c78a0d8d5d930c3b68a) is 4.7M, max 38.0M, 33.2M free.
Dec 13 04:51:26.644299 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 04:51:26.658136 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:51:26.661258 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 04:51:26.666466 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 04:51:26.669606 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 04:51:26.671476 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 04:51:26.672470 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 04:51:26.674449 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 04:51:26.677544 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 04:51:26.678842 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 04:51:26.680769 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 04:51:26.682006 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 04:51:26.682568 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 04:51:26.683777 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 04:51:26.684027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 04:51:26.685442 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 04:51:26.685694 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 04:51:26.687165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 04:51:26.687448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 04:51:26.688941 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 04:51:26.689176 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 04:51:26.690410 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 04:51:26.692761 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 04:51:26.696137 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 04:51:26.697786 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 04:51:26.700668 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 04:51:26.716110 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 04:51:26.723415 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 04:51:26.735994 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 04:51:26.737374 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 04:51:26.749586 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 04:51:26.762505 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 04:51:26.763544 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 04:51:26.770453 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 04:51:26.774424 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 04:51:26.784992 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 04:51:26.788443 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 04:51:26.793467 systemd-journald[1180]: Time spent on flushing to /var/log/journal/81472f40a0db4c78a0d8d5d930c3b68a is 45.520ms for 1133 entries.
Dec 13 04:51:26.793467 systemd-journald[1180]: System Journal (/var/log/journal/81472f40a0db4c78a0d8d5d930c3b68a) is 8.0M, max 584.8M, 576.8M free.
Dec 13 04:51:26.872790 systemd-journald[1180]: Received client request to flush runtime journal.
Dec 13 04:51:26.795114 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 04:51:26.802389 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 04:51:26.817045 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 04:51:26.818059 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 04:51:26.866928 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 04:51:26.875193 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 04:51:26.883179 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Dec 13 04:51:26.883204 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Dec 13 04:51:26.892049 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 04:51:26.904512 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 04:51:26.940430 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 04:51:26.953444 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 04:51:26.977818 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 04:51:26.987840 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 04:51:26.989351 udevadm[1244]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 04:51:27.016809 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Dec 13 04:51:27.016838 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Dec 13 04:51:27.023817 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 04:51:27.561624 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 04:51:27.570671 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 04:51:27.621606 systemd-udevd[1253]: Using default interface naming scheme 'v255'.
Dec 13 04:51:27.651991 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 04:51:27.665420 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 04:51:27.695523 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 04:51:27.767261 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1255)
Dec 13 04:51:27.778347 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1255)
Dec 13 04:51:27.778805 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Dec 13 04:51:27.788972 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 04:51:27.817554 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1268)
Dec 13 04:51:27.873878 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 04:51:27.946889 systemd-networkd[1257]: lo: Link UP
Dec 13 04:51:27.947270 systemd-networkd[1257]: lo: Gained carrier
Dec 13 04:51:27.951090 systemd-networkd[1257]: Enumeration completed
Dec 13 04:51:27.951686 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 04:51:27.952111 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 04:51:27.952118 systemd-networkd[1257]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 04:51:27.955551 systemd-networkd[1257]: eth0: Link UP
Dec 13 04:51:27.955564 systemd-networkd[1257]: eth0: Gained carrier
Dec 13 04:51:27.955582 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 04:51:27.960764 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 04:51:27.974347 systemd-networkd[1257]: eth0: DHCPv4 address 10.244.18.230/30, gateway 10.244.18.229 acquired from 10.244.18.229
Dec 13 04:51:27.989274 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 04:51:28.006277 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Dec 13 04:51:28.021266 kernel: ACPI: button: Power Button [PWRF]
Dec 13 04:51:28.077308 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 04:51:28.091464 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 04:51:28.091802 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 04:51:28.108499 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Dec 13 04:51:28.129854 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 04:51:28.324271 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 04:51:28.334979 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 04:51:28.342507 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 04:51:28.366382 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 04:51:28.400717 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 04:51:28.402575 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 04:51:28.416501 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 04:51:28.423251 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 04:51:28.454636 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 04:51:28.456487 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 04:51:28.457430 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 04:51:28.457609 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 04:51:28.458654 systemd[1]: Reached target machines.target - Containers.
Dec 13 04:51:28.461208 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 04:51:28.467454 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 04:51:28.470752 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 04:51:28.472856 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 04:51:28.476481 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 04:51:28.491569 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 04:51:28.497435 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 04:51:28.500671 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 04:51:28.531133 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 04:51:28.537484 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 04:51:28.544983 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 04:51:28.555283 kernel: loop0: detected capacity change from 0 to 211296
Dec 13 04:51:28.582275 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 04:51:28.622334 kernel: loop1: detected capacity change from 0 to 8
Dec 13 04:51:28.651286 kernel: loop2: detected capacity change from 0 to 142488
Dec 13 04:51:28.694556 kernel: loop3: detected capacity change from 0 to 140768
Dec 13 04:51:28.741269 kernel: loop4: detected capacity change from 0 to 211296
Dec 13 04:51:28.766456 kernel: loop5: detected capacity change from 0 to 8
Dec 13 04:51:28.771375 kernel: loop6: detected capacity change from 0 to 142488
Dec 13 04:51:28.800284 kernel: loop7: detected capacity change from 0 to 140768
Dec 13 04:51:28.827223 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Dec 13 04:51:28.830086 (sd-merge)[1317]: Merged extensions into '/usr'.
Dec 13 04:51:28.834657 systemd[1]: Reloading requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 04:51:28.834704 systemd[1]: Reloading...
Dec 13 04:51:28.926332 zram_generator::config[1345]: No configuration found.
Dec 13 04:51:29.150842 ldconfig[1300]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 04:51:29.160922 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 04:51:29.254964 systemd[1]: Reloading finished in 419 ms.
Dec 13 04:51:29.279014 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 04:51:29.280582 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 04:51:29.296627 systemd[1]: Starting ensure-sysext.service...
Dec 13 04:51:29.300483 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 04:51:29.309455 systemd[1]: Reloading requested from client PID 1408 ('systemctl') (unit ensure-sysext.service)...
Dec 13 04:51:29.309485 systemd[1]: Reloading...
Dec 13 04:51:29.355001 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 04:51:29.356355 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 04:51:29.358058 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 04:51:29.358651 systemd-tmpfiles[1409]: ACLs are not supported, ignoring.
Dec 13 04:51:29.358915 systemd-tmpfiles[1409]: ACLs are not supported, ignoring.
Dec 13 04:51:29.364411 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 04:51:29.364606 systemd-tmpfiles[1409]: Skipping /boot
Dec 13 04:51:29.382723 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 04:51:29.382938 systemd-tmpfiles[1409]: Skipping /boot
Dec 13 04:51:29.405344 zram_generator::config[1437]: No configuration found.
Dec 13 04:51:29.606220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 04:51:29.689268 systemd[1]: Reloading finished in 379 ms.
Dec 13 04:51:29.717971 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 04:51:29.725482 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 04:51:29.736580 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 04:51:29.742030 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 04:51:29.753545 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 04:51:29.763650 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 04:51:29.780739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:51:29.781037 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 04:51:29.783731 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 04:51:29.787757 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 04:51:29.803186 systemd-networkd[1257]: eth0: Gained IPv6LL
Dec 13 04:51:29.807202 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 04:51:29.809462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 04:51:29.809628 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:51:29.823426 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 04:51:29.831540 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 04:51:29.837199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 04:51:29.838547 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 04:51:29.843135 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 04:51:29.845487 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 04:51:29.849360 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 04:51:29.851460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 04:51:29.868979 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:51:29.870419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 04:51:29.882152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 04:51:29.894606 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 04:51:29.900508 augenrules[1539]: No rules
Dec 13 04:51:29.902664 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 04:51:29.908423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 04:51:29.926277 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 04:51:29.929339 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:51:29.935025 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 04:51:29.938414 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 04:51:29.939942 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 04:51:29.942430 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 04:51:29.942683 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 04:51:29.945142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 04:51:29.945428 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 04:51:29.946916 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 04:51:29.948695 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 04:51:29.963559 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:51:29.963910 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 04:51:29.970449 systemd-resolved[1506]: Positive Trust Anchors:
Dec 13 04:51:29.970662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 04:51:29.971120 systemd-resolved[1506]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 04:51:29.971283 systemd-resolved[1506]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 04:51:29.977681 systemd-resolved[1506]: Using system hostname 'srv-wy7pj.gb1.brightbox.com'.
Dec 13 04:51:29.983597 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 04:51:29.994675 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 04:51:29.998619 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 04:51:30.000596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 04:51:30.000947 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 04:51:30.001379 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:51:30.010573 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 04:51:30.013680 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 04:51:30.015259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 04:51:30.015656 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 04:51:30.017470 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 04:51:30.017810 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 04:51:30.019558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 04:51:30.019910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 04:51:30.021659 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 04:51:30.022068 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 04:51:30.026967 systemd[1]: Finished ensure-sysext.service.
Dec 13 04:51:30.035778 systemd[1]: Reached target network.target - Network.
Dec 13 04:51:30.036728 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 04:51:30.037473 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 04:51:30.038308 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 04:51:30.038416 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 04:51:30.043494 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 04:51:30.129202 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 04:51:30.131128 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 04:51:30.132213 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 04:51:30.133203 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 04:51:30.134195 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 04:51:30.135037 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 04:51:30.135099 systemd[1]: Reached target paths.target - Path Units.
Dec 13 04:51:30.135779 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 04:51:30.136764 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 04:51:30.137647 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 04:51:30.138494 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 04:51:30.140804 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 04:51:30.143975 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 04:51:30.147477 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 04:51:30.148748 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 04:51:30.149553 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 04:51:30.150315 systemd[1]: Reached target basic.target - Basic System.
Dec 13 04:51:30.151278 systemd[1]: System is tainted: cgroupsv1
Dec 13 04:51:30.151352 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 04:51:30.151392 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 04:51:30.154573 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 04:51:30.158477 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 04:51:30.164015 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 04:51:30.173381 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 04:51:30.195598 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 04:51:30.196447 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 04:51:30.206517 jq[1584]: false
Dec 13 04:51:30.211389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 04:51:30.209867 dbus-daemon[1582]: [system] SELinux support is enabled
Dec 13 04:51:30.214783 dbus-daemon[1582]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1257 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 04:51:30.219114 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 04:51:30.233518 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 04:51:30.241369 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 04:51:30.256253 extend-filesystems[1587]: Found loop4
Dec 13 04:51:30.256253 extend-filesystems[1587]: Found loop5
Dec 13 04:51:30.256253 extend-filesystems[1587]: Found loop6
Dec 13 04:51:30.256253 extend-filesystems[1587]: Found loop7
Dec 13 04:51:30.256253 extend-filesystems[1587]: Found vda
Dec 13 04:51:30.256253 extend-filesystems[1587]: Found vda1
Dec 13 04:51:30.256253 extend-filesystems[1587]: Found vda2
Dec 13 04:51:30.256253 extend-filesystems[1587]: Found vda3
Dec 13 04:51:30.256253 extend-filesystems[1587]: Found usr
Dec 13 04:51:30.256253 extend-filesystems[1587]: Found vda4
Dec 13 04:51:30.256253 extend-filesystems[1587]: Found vda6
Dec 13 04:51:30.256253 extend-filesystems[1587]: Found vda7
Dec 13 04:51:30.256253 extend-filesystems[1587]: Found vda9
Dec 13 04:51:30.256253 extend-filesystems[1587]: Checking size of /dev/vda9
Dec 13 04:51:30.336087 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1256)
Dec 13 04:51:30.253353 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 04:51:30.338384 extend-filesystems[1587]: Resized partition /dev/vda9
Dec 13 04:51:30.268115 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 04:51:30.341411 extend-filesystems[1618]: resize2fs 1.47.1 (20-May-2024)
Dec 13 04:51:30.358389 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Dec 13 04:51:30.285153 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 04:51:30.287431 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 04:51:30.303471 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 04:51:30.335383 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 04:51:30.345824 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 04:51:30.361720 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 04:51:30.362121 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 04:51:30.366131 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 04:51:30.373323 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 04:51:30.378079 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 04:51:30.386850 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 04:51:30.387212 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 04:51:30.404842 jq[1619]: true
Dec 13 04:51:30.452045 (ntainerd)[1627]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 04:51:30.477388 dbus-daemon[1582]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 04:51:30.484274 update_engine[1612]: I20241213 04:51:30.481631 1612 main.cc:92] Flatcar Update Engine starting
Dec 13 04:51:30.484743 tar[1626]: linux-amd64/helm
Dec 13 04:51:30.498102 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 04:51:30.508329 jq[1629]: true
Dec 13 04:51:30.499370 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 04:51:30.514551 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 13 04:51:30.520600 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 04:51:30.520661 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 04:51:30.528884 update_engine[1612]: I20241213 04:51:30.528755 1612 update_check_scheduler.cc:74] Next update check in 4m56s
Dec 13 04:51:30.554518 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 04:51:30.558512 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 04:51:30.565564 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 04:51:31.005221 systemd-resolved[1506]: Clock change detected. Flushing caches.
Dec 13 04:51:31.005542 systemd-timesyncd[1576]: Contacted time server 217.114.59.3:123 (0.flatcar.pool.ntp.org).
Dec 13 04:51:31.005967 systemd-timesyncd[1576]: Initial clock synchronization to Fri 2024-12-13 04:51:31.004796 UTC.
Dec 13 04:51:31.078828 systemd-logind[1609]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 04:51:31.078883 systemd-logind[1609]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 04:51:31.079852 systemd-logind[1609]: New seat seat0.
Dec 13 04:51:31.083624 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 04:51:31.227916 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 13 04:51:31.228027 bash[1660]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 04:51:31.232382 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 04:51:31.251107 systemd[1]: Starting sshkeys.service...
Dec 13 04:51:31.264215 extend-filesystems[1618]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 04:51:31.264215 extend-filesystems[1618]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 04:51:31.264215 extend-filesystems[1618]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 04:51:31.286977 extend-filesystems[1587]: Resized filesystem in /dev/vda9
Dec 13 04:51:31.267608 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 04:51:31.268011 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 04:51:31.335116 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 04:51:31.347360 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 04:51:31.370075 containerd[1627]: time="2024-12-13T04:51:31.369951993Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 04:51:31.417280 locksmithd[1646]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 04:51:31.438330 dbus-daemon[1582]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 04:51:31.438533 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 13 04:51:31.441901 dbus-daemon[1582]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1642 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 04:51:31.452995 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 13 04:51:31.474440 containerd[1627]: time="2024-12-13T04:51:31.474315233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 04:51:31.485995 containerd[1627]: time="2024-12-13T04:51:31.485938416Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 04:51:31.485995 containerd[1627]: time="2024-12-13T04:51:31.485992995Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 04:51:31.486166 containerd[1627]: time="2024-12-13T04:51:31.486019788Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 04:51:31.486314 containerd[1627]: time="2024-12-13T04:51:31.486286762Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 04:51:31.486378 containerd[1627]: time="2024-12-13T04:51:31.486321390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 04:51:31.486454 containerd[1627]: time="2024-12-13T04:51:31.486426065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 04:51:31.486497 containerd[1627]: time="2024-12-13T04:51:31.486459537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 04:51:31.486787 containerd[1627]: time="2024-12-13T04:51:31.486732121Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 04:51:31.486787 containerd[1627]: time="2024-12-13T04:51:31.486780149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 04:51:31.486921 containerd[1627]: time="2024-12-13T04:51:31.486816633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 04:51:31.486921 containerd[1627]: time="2024-12-13T04:51:31.486837440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 04:51:31.487003 containerd[1627]: time="2024-12-13T04:51:31.486957450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 04:51:31.489742 containerd[1627]: time="2024-12-13T04:51:31.487303876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 04:51:31.489742 containerd[1627]: time="2024-12-13T04:51:31.487488051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 04:51:31.489742 containerd[1627]: time="2024-12-13T04:51:31.487513240Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 04:51:31.489742 containerd[1627]: time="2024-12-13T04:51:31.487630675Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 04:51:31.489742 containerd[1627]: time="2024-12-13T04:51:31.487710146Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 04:51:31.488441 polkitd[1688]: Started polkitd version 121
Dec 13 04:51:31.499988 containerd[1627]: time="2024-12-13T04:51:31.499937993Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 04:51:31.500094 containerd[1627]: time="2024-12-13T04:51:31.500035528Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 04:51:31.500094 containerd[1627]: time="2024-12-13T04:51:31.500067798Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 04:51:31.500160 containerd[1627]: time="2024-12-13T04:51:31.500092888Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 04:51:31.500160 containerd[1627]: time="2024-12-13T04:51:31.500123930Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 04:51:31.501546 containerd[1627]: time="2024-12-13T04:51:31.500349301Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 04:51:31.501546 containerd[1627]: time="2024-12-13T04:51:31.500788698Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 04:51:31.501546 containerd[1627]: time="2024-12-13T04:51:31.500975611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 04:51:31.501546 containerd[1627]: time="2024-12-13T04:51:31.501002637Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 04:51:31.501546 containerd[1627]: time="2024-12-13T04:51:31.501024213Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 04:51:31.501546 containerd[1627]: time="2024-12-13T04:51:31.501045815Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 04:51:31.501546 containerd[1627]: time="2024-12-13T04:51:31.501073896Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 04:51:31.501546 containerd[1627]: time="2024-12-13T04:51:31.501100218Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 04:51:31.501546 containerd[1627]: time="2024-12-13T04:51:31.501122321Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 04:51:31.501546 containerd[1627]: time="2024-12-13T04:51:31.501170534Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 04:51:31.501546 containerd[1627]: time="2024-12-13T04:51:31.501204304Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 04:51:31.501546 containerd[1627]: time="2024-12-13T04:51:31.501222355Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 04:51:31.501546 containerd[1627]: time="2024-12-13T04:51:31.501255321Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 04:51:31.501546 containerd[1627]: time="2024-12-13T04:51:31.501291343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501315693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501334922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501355051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501373543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501394299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501412189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501430804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501449731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501470519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501491709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501510216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501552036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501578959Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501615269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502078 containerd[1627]: time="2024-12-13T04:51:31.501637661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.502660 containerd[1627]: time="2024-12-13T04:51:31.501662911Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 04:51:31.506405 containerd[1627]: time="2024-12-13T04:51:31.504905253Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 04:51:31.506405 containerd[1627]: time="2024-12-13T04:51:31.504988017Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 04:51:31.506405 containerd[1627]: time="2024-12-13T04:51:31.505014575Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 04:51:31.506405 containerd[1627]: time="2024-12-13T04:51:31.505036721Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 04:51:31.506405 containerd[1627]: time="2024-12-13T04:51:31.505054974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.506405 containerd[1627]: time="2024-12-13T04:51:31.505091553Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 04:51:31.506405 containerd[1627]: time="2024-12-13T04:51:31.505110613Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 04:51:31.506405 containerd[1627]: time="2024-12-13T04:51:31.505128142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 04:51:31.506769 containerd[1627]: time="2024-12-13T04:51:31.505508662Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 04:51:31.506769 containerd[1627]: time="2024-12-13T04:51:31.505591114Z" level=info msg="Connect containerd service"
Dec 13 04:51:31.506769 containerd[1627]: time="2024-12-13T04:51:31.505649528Z" level=info msg="using legacy CRI server"
Dec 13 04:51:31.506769 containerd[1627]: time="2024-12-13T04:51:31.505666395Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 04:51:31.506769 containerd[1627]: time="2024-12-13T04:51:31.505897672Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 04:51:31.506769 containerd[1627]: time="2024-12-13T04:51:31.506618833Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 04:51:31.507193 polkitd[1688]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 04:51:31.507312 polkitd[1688]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 04:51:31.510362 polkitd[1688]: Finished loading, compiling and executing 2 rules
Dec 13 04:51:31.511923 dbus-daemon[1582]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 04:51:31.516562 containerd[1627]: time="2024-12-13T04:51:31.512287513Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 04:51:31.516562 containerd[1627]: time="2024-12-13T04:51:31.512376354Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 04:51:31.516562 containerd[1627]: time="2024-12-13T04:51:31.512503860Z" level=info msg="Start subscribing containerd event"
Dec 13 04:51:31.516562 containerd[1627]: time="2024-12-13T04:51:31.512566903Z" level=info msg="Start recovering state"
Dec 13 04:51:31.516562 containerd[1627]: time="2024-12-13T04:51:31.512671335Z" level=info msg="Start event monitor"
Dec 13 04:51:31.516562 containerd[1627]: time="2024-12-13T04:51:31.512690867Z" level=info msg="Start snapshots syncer"
Dec 13 04:51:31.516562 containerd[1627]: time="2024-12-13T04:51:31.512713822Z" level=info msg="Start cni network conf syncer for default"
Dec 13 04:51:31.516562 containerd[1627]: time="2024-12-13T04:51:31.512728773Z" level=info msg="Start streaming server"
Dec 13 04:51:31.516562 containerd[1627]: time="2024-12-13T04:51:31.512872143Z" level=info msg="containerd successfully booted in 0.150538s"
Dec 13 04:51:31.512163 systemd[1]: Started polkit.service - Authorization Manager.
Dec 13 04:51:31.513945 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 04:51:31.520175 polkitd[1688]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 04:51:31.556170 systemd-hostnamed[1642]: Hostname set to (static)
Dec 13 04:51:31.570083 systemd-networkd[1257]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:4b9:24:19ff:fef4:12e6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:4b9:24:19ff:fef4:12e6/64 assigned by NDisc.
Dec 13 04:51:31.570096 systemd-networkd[1257]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 04:51:31.720654 sshd_keygen[1621]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 04:51:31.792659 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 04:51:31.814121 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 04:51:31.842916 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 04:51:31.843285 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 04:51:31.859546 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 04:51:31.885314 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 04:51:31.900902 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 04:51:31.912961 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 04:51:31.915928 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 04:51:32.079139 tar[1626]: linux-amd64/LICENSE
Dec 13 04:51:32.079656 tar[1626]: linux-amd64/README.md
Dec 13 04:51:32.096458 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 04:51:32.284039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 04:51:32.305725 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 04:51:33.043977 kubelet[1733]: E1213 04:51:33.043758 1733 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:51:33.046164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:51:33.046490 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:51:33.256267 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 04:51:33.270287 systemd[1]: Started sshd@0-10.244.18.230:22-147.75.109.163:38610.service - OpenSSH per-connection server daemon (147.75.109.163:38610).
Dec 13 04:51:34.158302 sshd[1743]: Accepted publickey for core from 147.75.109.163 port 38610 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:51:34.161347 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:51:34.184392 systemd-logind[1609]: New session 1 of user core.
Dec 13 04:51:34.186098 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 04:51:34.201375 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 04:51:34.224821 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 04:51:34.236711 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 04:51:34.256477 (systemd)[1750]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:51:34.387735 systemd[1750]: Queued start job for default target default.target.
Dec 13 04:51:34.388793 systemd[1750]: Created slice app.slice - User Application Slice.
Dec 13 04:51:34.388829 systemd[1750]: Reached target paths.target - Paths.
Dec 13 04:51:34.388851 systemd[1750]: Reached target timers.target - Timers.
Dec 13 04:51:34.400940 systemd[1750]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 04:51:34.410072 systemd[1750]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 04:51:34.410819 systemd[1750]: Reached target sockets.target - Sockets.
Dec 13 04:51:34.410845 systemd[1750]: Reached target basic.target - Basic System.
Dec 13 04:51:34.410925 systemd[1750]: Reached target default.target - Main User Target.
Dec 13 04:51:34.410996 systemd[1750]: Startup finished in 145ms.
Dec 13 04:51:34.411863 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 04:51:34.422861 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 04:51:35.061903 systemd[1]: Started sshd@1-10.244.18.230:22-147.75.109.163:38620.service - OpenSSH per-connection server daemon (147.75.109.163:38620).
Dec 13 04:51:35.939937 sshd[1763]: Accepted publickey for core from 147.75.109.163 port 38620 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:51:35.942073 sshd[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:51:35.950219 systemd-logind[1609]: New session 2 of user core.
Dec 13 04:51:35.960302 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 04:51:36.559609 sshd[1763]: pam_unix(sshd:session): session closed for user core
Dec 13 04:51:36.568924 systemd[1]: sshd@1-10.244.18.230:22-147.75.109.163:38620.service: Deactivated successfully.
Dec 13 04:51:36.573037 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 04:51:36.574528 systemd-logind[1609]: Session 2 logged out. Waiting for processes to exit.
Dec 13 04:51:36.576001 systemd-logind[1609]: Removed session 2.
Dec 13 04:51:36.710407 systemd[1]: Started sshd@2-10.244.18.230:22-147.75.109.163:40034.service - OpenSSH per-connection server daemon (147.75.109.163:40034).
Dec 13 04:51:36.958978 login[1719]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 04:51:36.964266 login[1718]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 04:51:36.966873 systemd-logind[1609]: New session 3 of user core.
Dec 13 04:51:36.975844 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 04:51:36.980841 systemd-logind[1609]: New session 4 of user core.
Dec 13 04:51:36.985901 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 04:51:37.595888 sshd[1771]: Accepted publickey for core from 147.75.109.163 port 40034 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:51:37.598109 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:51:37.604965 systemd-logind[1609]: New session 5 of user core.
Dec 13 04:51:37.621501 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 04:51:37.711531 coreos-metadata[1581]: Dec 13 04:51:37.711 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 04:51:37.741311 coreos-metadata[1581]: Dec 13 04:51:37.741 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Dec 13 04:51:37.748162 coreos-metadata[1581]: Dec 13 04:51:37.748 INFO Fetch failed with 404: resource not found
Dec 13 04:51:37.748162 coreos-metadata[1581]: Dec 13 04:51:37.748 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 04:51:37.749198 coreos-metadata[1581]: Dec 13 04:51:37.749 INFO Fetch successful
Dec 13 04:51:37.749519 coreos-metadata[1581]: Dec 13 04:51:37.749 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 13 04:51:37.762330 coreos-metadata[1581]: Dec 13 04:51:37.762 INFO Fetch successful
Dec 13 04:51:37.762560 coreos-metadata[1581]: Dec 13 04:51:37.762 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 13 04:51:37.776898 coreos-metadata[1581]: Dec 13 04:51:37.776 INFO Fetch successful
Dec 13 04:51:37.777195 coreos-metadata[1581]: Dec 13 04:51:37.777 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 13 04:51:37.795108 coreos-metadata[1581]: Dec 13 04:51:37.795 INFO Fetch successful
Dec 13 04:51:37.795389 coreos-metadata[1581]: Dec 13 04:51:37.795 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 13 04:51:37.811836 coreos-metadata[1581]: Dec 13 04:51:37.811 INFO Fetch successful
Dec 13 04:51:37.840037 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 04:51:37.841868 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 04:51:38.216654 sshd[1771]: pam_unix(sshd:session): session closed for user core
Dec 13 04:51:38.220951 systemd[1]: sshd@2-10.244.18.230:22-147.75.109.163:40034.service: Deactivated successfully.
Dec 13 04:51:38.225369 systemd-logind[1609]: Session 5 logged out. Waiting for processes to exit.
Dec 13 04:51:38.226407 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 04:51:38.228405 systemd-logind[1609]: Removed session 5.
Dec 13 04:51:38.468102 coreos-metadata[1680]: Dec 13 04:51:38.467 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 04:51:38.490169 coreos-metadata[1680]: Dec 13 04:51:38.490 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 04:51:38.513394 coreos-metadata[1680]: Dec 13 04:51:38.513 INFO Fetch successful
Dec 13 04:51:38.513394 coreos-metadata[1680]: Dec 13 04:51:38.513 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 04:51:38.548035 coreos-metadata[1680]: Dec 13 04:51:38.547 INFO Fetch successful
Dec 13 04:51:38.549883 unknown[1680]: wrote ssh authorized keys file for user: core
Dec 13 04:51:38.576892 update-ssh-keys[1821]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 04:51:38.572433 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 04:51:38.578680 systemd[1]: Finished sshkeys.service.
Dec 13 04:51:38.585122 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 04:51:38.585495 systemd[1]: Startup finished in 19.143s (kernel) + 12.815s (userspace) = 31.958s.
Dec 13 04:51:43.131877 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 04:51:43.147127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 04:51:43.345041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 04:51:43.358564 (kubelet)[1838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 04:51:43.435124 kubelet[1838]: E1213 04:51:43.434725 1838 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:51:43.439299 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:51:43.439844 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:51:48.373374 systemd[1]: Started sshd@3-10.244.18.230:22-147.75.109.163:43648.service - OpenSSH per-connection server daemon (147.75.109.163:43648).
Dec 13 04:51:49.255224 sshd[1848]: Accepted publickey for core from 147.75.109.163 port 43648 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:51:49.258298 sshd[1848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:51:49.271434 systemd-logind[1609]: New session 6 of user core.
Dec 13 04:51:49.277170 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 04:51:49.874334 sshd[1848]: pam_unix(sshd:session): session closed for user core
Dec 13 04:51:49.879244 systemd[1]: sshd@3-10.244.18.230:22-147.75.109.163:43648.service: Deactivated successfully.
Dec 13 04:51:49.879402 systemd-logind[1609]: Session 6 logged out. Waiting for processes to exit.
Dec 13 04:51:49.885491 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 04:51:49.886472 systemd-logind[1609]: Removed session 6.
Dec 13 04:51:50.037248 systemd[1]: Started sshd@4-10.244.18.230:22-147.75.109.163:43656.service - OpenSSH per-connection server daemon (147.75.109.163:43656).
Dec 13 04:51:50.927107 sshd[1856]: Accepted publickey for core from 147.75.109.163 port 43656 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:51:50.929120 sshd[1856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:51:50.936798 systemd-logind[1609]: New session 7 of user core.
Dec 13 04:51:50.942247 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 04:51:51.542094 sshd[1856]: pam_unix(sshd:session): session closed for user core
Dec 13 04:51:51.545817 systemd[1]: sshd@4-10.244.18.230:22-147.75.109.163:43656.service: Deactivated successfully.
Dec 13 04:51:51.550093 systemd-logind[1609]: Session 7 logged out. Waiting for processes to exit.
Dec 13 04:51:51.551918 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 04:51:51.553337 systemd-logind[1609]: Removed session 7.
Dec 13 04:51:51.696280 systemd[1]: Started sshd@5-10.244.18.230:22-147.75.109.163:43670.service - OpenSSH per-connection server daemon (147.75.109.163:43670).
Dec 13 04:51:52.593501 sshd[1864]: Accepted publickey for core from 147.75.109.163 port 43670 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:51:52.595479 sshd[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:51:52.602262 systemd-logind[1609]: New session 8 of user core.
Dec 13 04:51:52.608379 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 04:51:53.214124 sshd[1864]: pam_unix(sshd:session): session closed for user core
Dec 13 04:51:53.220234 systemd[1]: sshd@5-10.244.18.230:22-147.75.109.163:43670.service: Deactivated successfully.
Dec 13 04:51:53.224336 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 04:51:53.225330 systemd-logind[1609]: Session 8 logged out. Waiting for processes to exit.
Dec 13 04:51:53.227399 systemd-logind[1609]: Removed session 8.
Dec 13 04:51:53.369318 systemd[1]: Started sshd@6-10.244.18.230:22-147.75.109.163:43676.service - OpenSSH per-connection server daemon (147.75.109.163:43676).
Dec 13 04:51:53.631754 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 04:51:53.644380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 04:51:53.767000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 04:51:53.774604 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 04:51:53.867360 kubelet[1886]: E1213 04:51:53.867269 1886 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:51:53.869600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:51:53.869939 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:51:54.246882 sshd[1872]: Accepted publickey for core from 147.75.109.163 port 43676 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:51:54.248919 sshd[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:51:54.255698 systemd-logind[1609]: New session 9 of user core.
Dec 13 04:51:54.265278 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 04:51:54.736300 sudo[1897]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 04:51:54.736921 sudo[1897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 04:51:54.754584 sudo[1897]: pam_unix(sudo:session): session closed for user root
Dec 13 04:51:54.898223 sshd[1872]: pam_unix(sshd:session): session closed for user core
Dec 13 04:51:54.904686 systemd[1]: sshd@6-10.244.18.230:22-147.75.109.163:43676.service: Deactivated successfully.
Dec 13 04:51:54.904866 systemd-logind[1609]: Session 9 logged out. Waiting for processes to exit.
Dec 13 04:51:54.909375 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 04:51:54.911000 systemd-logind[1609]: Removed session 9.
Dec 13 04:51:55.052180 systemd[1]: Started sshd@7-10.244.18.230:22-147.75.109.163:43678.service - OpenSSH per-connection server daemon (147.75.109.163:43678).
Dec 13 04:51:55.935222 sshd[1902]: Accepted publickey for core from 147.75.109.163 port 43678 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:51:55.937640 sshd[1902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:51:55.945114 systemd-logind[1609]: New session 10 of user core.
Dec 13 04:51:55.952288 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 04:51:56.415011 sudo[1907]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 04:51:56.416196 sudo[1907]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 04:51:56.422955 sudo[1907]: pam_unix(sudo:session): session closed for user root
Dec 13 04:51:56.431701 sudo[1906]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 04:51:56.432846 sudo[1906]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 04:51:56.452147 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 04:51:56.457685 auditctl[1910]: No rules
Dec 13 04:51:56.459423 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 04:51:56.460298 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 04:51:56.467391 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 04:51:56.504138 augenrules[1929]: No rules
Dec 13 04:51:56.505752 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 04:51:56.509075 sudo[1906]: pam_unix(sudo:session): session closed for user root
Dec 13 04:51:56.655386 sshd[1902]: pam_unix(sshd:session): session closed for user core
Dec 13 04:51:56.660383 systemd[1]: sshd@7-10.244.18.230:22-147.75.109.163:43678.service: Deactivated successfully.
Dec 13 04:51:56.665835 systemd-logind[1609]: Session 10 logged out. Waiting for processes to exit.
Dec 13 04:51:56.667067 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 04:51:56.669670 systemd-logind[1609]: Removed session 10.
Dec 13 04:51:56.810249 systemd[1]: Started sshd@8-10.244.18.230:22-147.75.109.163:35220.service - OpenSSH per-connection server daemon (147.75.109.163:35220).
Dec 13 04:51:57.698014 sshd[1938]: Accepted publickey for core from 147.75.109.163 port 35220 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:51:57.700386 sshd[1938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:51:57.706784 systemd-logind[1609]: New session 11 of user core.
Dec 13 04:51:57.719508 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 04:51:58.178807 sudo[1942]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 04:51:58.179303 sudo[1942]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 04:51:58.641217 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 04:51:58.653750 (dockerd)[1958]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 04:51:59.099931 dockerd[1958]: time="2024-12-13T04:51:59.098571710Z" level=info msg="Starting up"
Dec 13 04:51:59.375052 dockerd[1958]: time="2024-12-13T04:51:59.374819015Z" level=info msg="Loading containers: start."
Dec 13 04:51:59.551805 kernel: Initializing XFRM netlink socket
Dec 13 04:51:59.679574 systemd-networkd[1257]: docker0: Link UP
Dec 13 04:51:59.730673 dockerd[1958]: time="2024-12-13T04:51:59.730606681Z" level=info msg="Loading containers: done."
Dec 13 04:51:59.751829 dockerd[1958]: time="2024-12-13T04:51:59.750460729Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 04:51:59.751829 dockerd[1958]: time="2024-12-13T04:51:59.750608691Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 04:51:59.752048 dockerd[1958]: time="2024-12-13T04:51:59.750757267Z" level=info msg="Daemon has completed initialization"
Dec 13 04:51:59.753024 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck810118162-merged.mount: Deactivated successfully.
Dec 13 04:51:59.794043 dockerd[1958]: time="2024-12-13T04:51:59.793931170Z" level=info msg="API listen on /run/docker.sock"
Dec 13 04:51:59.794469 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 04:52:01.112476 containerd[1627]: time="2024-12-13T04:52:01.112377709Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 04:52:01.629617 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 04:52:01.938582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2474427645.mount: Deactivated successfully.
Dec 13 04:52:03.882727 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 04:52:03.898537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 04:52:04.137004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 04:52:04.147221 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 04:52:04.262789 kubelet[2177]: E1213 04:52:04.261641 2177 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:52:04.265336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:52:04.265674 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:52:04.458895 containerd[1627]: time="2024-12-13T04:52:04.457125647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:04.458895 containerd[1627]: time="2024-12-13T04:52:04.458513896Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262"
Dec 13 04:52:04.459860 containerd[1627]: time="2024-12-13T04:52:04.459818259Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:04.464117 containerd[1627]: time="2024-12-13T04:52:04.464071259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:04.465696 containerd[1627]: time="2024-12-13T04:52:04.465651456Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.353164621s"
Dec 13 04:52:04.465811 containerd[1627]: time="2024-12-13T04:52:04.465734251Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 04:52:04.501908 containerd[1627]: time="2024-12-13T04:52:04.501859579Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 04:52:07.711795 containerd[1627]: time="2024-12-13T04:52:07.710289348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:07.712596 containerd[1627]: time="2024-12-13T04:52:07.711956316Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740"
Dec 13 04:52:07.712906 containerd[1627]: time="2024-12-13T04:52:07.712873243Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:07.717067 containerd[1627]: time="2024-12-13T04:52:07.717021389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:07.718811 containerd[1627]: time="2024-12-13T04:52:07.718748160Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 3.216604533s"
Dec 13 04:52:07.718965 containerd[1627]: time="2024-12-13T04:52:07.718937270Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 04:52:07.753986 containerd[1627]: time="2024-12-13T04:52:07.753940213Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 04:52:09.300929 containerd[1627]: time="2024-12-13T04:52:09.300029683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:09.303570 containerd[1627]: time="2024-12-13T04:52:09.303376329Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830"
Dec 13 04:52:09.304788 containerd[1627]: time="2024-12-13T04:52:09.304530612Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:09.314285 containerd[1627]: time="2024-12-13T04:52:09.314193192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:09.315804 containerd[1627]: time="2024-12-13T04:52:09.315215861Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.560965356s"
Dec 13 04:52:09.315804 containerd[1627]: time="2024-12-13T04:52:09.315268407Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 04:52:09.343177 containerd[1627]: time="2024-12-13T04:52:09.343112452Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 04:52:11.102208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3366181471.mount: Deactivated successfully.
Dec 13 04:52:11.783527 containerd[1627]: time="2024-12-13T04:52:11.783375377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:11.784993 containerd[1627]: time="2024-12-13T04:52:11.784935170Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966"
Dec 13 04:52:11.785804 containerd[1627]: time="2024-12-13T04:52:11.785553598Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:11.789471 containerd[1627]: time="2024-12-13T04:52:11.789339849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:11.791199 containerd[1627]: time="2024-12-13T04:52:11.790429570Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.446887809s"
Dec 13 04:52:11.791199 containerd[1627]: time="2024-12-13T04:52:11.790537161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 04:52:11.824813 containerd[1627]: time="2024-12-13T04:52:11.824510316Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 04:52:12.491008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount352988482.mount: Deactivated successfully.
Dec 13 04:52:13.732874 containerd[1627]: time="2024-12-13T04:52:13.732616884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:13.786249 containerd[1627]: time="2024-12-13T04:52:13.784826863Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Dec 13 04:52:13.787648 containerd[1627]: time="2024-12-13T04:52:13.787096234Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:13.792098 containerd[1627]: time="2024-12-13T04:52:13.792056690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:13.793195 containerd[1627]: time="2024-12-13T04:52:13.793072605Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.968503784s"
Dec 13 04:52:13.793791 containerd[1627]: time="2024-12-13T04:52:13.793140042Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 04:52:13.835827 containerd[1627]: time="2024-12-13T04:52:13.835690256Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 04:52:14.382173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 04:52:14.396170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 04:52:14.661682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 04:52:14.667932 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 04:52:14.770621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167717622.mount: Deactivated successfully.
Dec 13 04:52:14.776316 containerd[1627]: time="2024-12-13T04:52:14.775080227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:14.778687 containerd[1627]: time="2024-12-13T04:52:14.778568076Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Dec 13 04:52:14.780801 containerd[1627]: time="2024-12-13T04:52:14.779319347Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:14.784804 containerd[1627]: time="2024-12-13T04:52:14.782514100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:14.784804 containerd[1627]: time="2024-12-13T04:52:14.784196465Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 948.141397ms"
Dec 13 04:52:14.784804 containerd[1627]: time="2024-12-13T04:52:14.784237464Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 04:52:14.802676 kubelet[2280]: E1213 04:52:14.802563 2280 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:52:14.808080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:52:14.808474 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:52:14.827840 containerd[1627]: time="2024-12-13T04:52:14.827353899Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 04:52:15.505690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1313040304.mount: Deactivated successfully.
Dec 13 04:52:16.642573 update_engine[1612]: I20241213 04:52:16.642349 1612 update_attempter.cc:509] Updating boot flags...
Dec 13 04:52:16.724832 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2348)
Dec 13 04:52:16.842797 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2349)
Dec 13 04:52:18.470299 containerd[1627]: time="2024-12-13T04:52:18.470198809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:18.471777 containerd[1627]: time="2024-12-13T04:52:18.471698542Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Dec 13 04:52:18.473001 containerd[1627]: time="2024-12-13T04:52:18.472963902Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:18.490634 containerd[1627]: time="2024-12-13T04:52:18.490564683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 04:52:18.493727 containerd[1627]: time="2024-12-13T04:52:18.492749357Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.665337691s"
Dec 13 04:52:18.493727 containerd[1627]: time="2024-12-13T04:52:18.492848022Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 04:52:24.063732 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 04:52:24.077609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 04:52:24.118074 systemd[1]: Reloading requested from client PID 2422 ('systemctl') (unit session-11.scope)...
Dec 13 04:52:24.118341 systemd[1]: Reloading...
Dec 13 04:52:24.314937 zram_generator::config[2462]: No configuration found.
Dec 13 04:52:24.483825 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 04:52:24.584078 systemd[1]: Reloading finished in 464 ms.
Dec 13 04:52:24.648586 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 04:52:24.649103 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 04:52:24.649904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 04:52:24.659373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 04:52:24.809017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 04:52:24.817977 (kubelet)[2537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 04:52:24.912138 kubelet[2537]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 04:52:24.912138 kubelet[2537]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 04:52:24.912138 kubelet[2537]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 04:52:24.912820 kubelet[2537]: I1213 04:52:24.912235 2537 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 04:52:25.143045 kubelet[2537]: I1213 04:52:25.142981 2537 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 04:52:25.143045 kubelet[2537]: I1213 04:52:25.143047 2537 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 04:52:25.143395 kubelet[2537]: I1213 04:52:25.143359 2537 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 04:52:25.182605 kubelet[2537]: E1213 04:52:25.182466 2537 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.18.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.18.230:6443: connect: connection refused
Dec 13 04:52:25.182941 kubelet[2537]: I1213 04:52:25.182914 2537 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 04:52:25.207257 kubelet[2537]: I1213 04:52:25.206050 2537 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 04:52:25.207257 kubelet[2537]: I1213 04:52:25.206731 2537 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 04:52:25.208457 kubelet[2537]: I1213 04:52:25.208323 2537 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 04:52:25.209684 kubelet[2537]: I1213 04:52:25.208922 2537 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 04:52:25.209684 kubelet[2537]: I1213 04:52:25.208948 2537 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 04:52:25.209684 kubelet[2537]: I1213 04:52:25.209194 2537 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 04:52:25.209684 kubelet[2537]: I1213 04:52:25.209410 2537 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 04:52:25.209684 kubelet[2537]: I1213 04:52:25.209445 2537 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 04:52:25.210733 kubelet[2537]: I1213 04:52:25.210710 2537 kubelet.go:312] "Adding apiserver pod source"
Dec 13 04:52:25.210879 kubelet[2537]: I1213 04:52:25.210859 2537 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 04:52:25.215898 kubelet[2537]: W1213 04:52:25.215800 2537 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.244.18.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-wy7pj.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused
Dec 13 04:52:25.216160 kubelet[2537]: E1213 04:52:25.216137 2537 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.18.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-wy7pj.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused
Dec 13 04:52:25.217520 kubelet[2537]: I1213 04:52:25.217484 2537 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 04:52:25.224284 kubelet[2537]: I1213 04:52:25.223543 2537 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 04:52:25.224284 kubelet[2537]: W1213 04:52:25.223713 2537 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 04:52:25.225555 kubelet[2537]: W1213 04:52:25.225057 2537 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.244.18.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:25.225555 kubelet[2537]: E1213 04:52:25.225123 2537 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.18.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:25.227435 kubelet[2537]: I1213 04:52:25.227397 2537 server.go:1256] "Started kubelet" Dec 13 04:52:25.229898 kubelet[2537]: I1213 04:52:25.229854 2537 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 04:52:25.230850 kubelet[2537]: I1213 04:52:25.230812 2537 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 04:52:25.231686 kubelet[2537]: I1213 04:52:25.231375 2537 server.go:461] "Adding debug handlers to kubelet server" Dec 13 04:52:25.249795 kubelet[2537]: I1213 04:52:25.248026 2537 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 04:52:25.249795 kubelet[2537]: I1213 04:52:25.248441 2537 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 04:52:25.253746 kubelet[2537]: I1213 04:52:25.253691 2537 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 04:52:25.276819 kubelet[2537]: E1213 04:52:25.271737 2537 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.18.230:6443/api/v1/namespaces/default/events\": dial tcp 10.244.18.230:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-wy7pj.gb1.brightbox.com.1810a36d08854830 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-wy7pj.gb1.brightbox.com,UID:srv-wy7pj.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-wy7pj.gb1.brightbox.com,},FirstTimestamp:2024-12-13 04:52:25.227356208 +0000 UTC m=+0.401839345,LastTimestamp:2024-12-13 04:52:25.227356208 +0000 UTC m=+0.401839345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-wy7pj.gb1.brightbox.com,}" Dec 13 04:52:25.276819 kubelet[2537]: E1213 04:52:25.275970 2537 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.18.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-wy7pj.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.18.230:6443: connect: connection refused" interval="200ms" Dec 13 04:52:25.278934 kubelet[2537]: I1213 04:52:25.278875 2537 factory.go:221] Registration of the systemd container factory successfully Dec 13 04:52:25.279150 kubelet[2537]: I1213 04:52:25.279101 2537 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 04:52:25.279371 kubelet[2537]: I1213 04:52:25.279344 2537 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 04:52:25.280224 kubelet[2537]: I1213 04:52:25.280200 2537 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 04:52:25.283497 kubelet[2537]: E1213 04:52:25.283455 2537 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 04:52:25.290909 kubelet[2537]: W1213 04:52:25.289216 2537 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.244.18.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:25.291053 kubelet[2537]: E1213 04:52:25.290939 2537 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.18.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:25.291490 kubelet[2537]: I1213 04:52:25.291457 2537 factory.go:221] Registration of the containerd container factory successfully Dec 13 04:52:25.327309 kubelet[2537]: I1213 04:52:25.327263 2537 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 04:52:25.329866 kubelet[2537]: I1213 04:52:25.329836 2537 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 04:52:25.330092 kubelet[2537]: I1213 04:52:25.330071 2537 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 04:52:25.330236 kubelet[2537]: I1213 04:52:25.330216 2537 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 04:52:25.330796 kubelet[2537]: E1213 04:52:25.330418 2537 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 04:52:25.338369 kubelet[2537]: I1213 04:52:25.338326 2537 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 04:52:25.338607 kubelet[2537]: I1213 04:52:25.338383 2537 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 04:52:25.338607 kubelet[2537]: I1213 04:52:25.338426 2537 state_mem.go:36] "Initialized new in-memory state store" Dec 13 04:52:25.345110 kubelet[2537]: W1213 04:52:25.344971 2537 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.244.18.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:25.345110 kubelet[2537]: E1213 04:52:25.345044 2537 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.18.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:25.369155 kubelet[2537]: I1213 04:52:25.369059 2537 policy_none.go:49] "None policy: Start" Dec 13 04:52:25.370121 kubelet[2537]: I1213 04:52:25.370098 2537 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 04:52:25.370565 kubelet[2537]: I1213 04:52:25.370368 2537 state_mem.go:35] "Initializing new in-memory state store" Dec 13 04:52:25.392326 kubelet[2537]: I1213 04:52:25.392092 2537 kubelet_node_status.go:73] "Attempting to 
register node" node="srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.394844 kubelet[2537]: E1213 04:52:25.392988 2537 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.18.230:6443/api/v1/nodes\": dial tcp 10.244.18.230:6443: connect: connection refused" node="srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.394844 kubelet[2537]: I1213 04:52:25.392992 2537 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 04:52:25.394844 kubelet[2537]: I1213 04:52:25.393396 2537 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 04:52:25.404067 kubelet[2537]: E1213 04:52:25.404015 2537 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-wy7pj.gb1.brightbox.com\" not found" Dec 13 04:52:25.431366 kubelet[2537]: I1213 04:52:25.431280 2537 topology_manager.go:215] "Topology Admit Handler" podUID="bb37615044e16e5e6e16ae5548e7a228" podNamespace="kube-system" podName="kube-apiserver-srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.448123 kubelet[2537]: I1213 04:52:25.447734 2537 topology_manager.go:215] "Topology Admit Handler" podUID="d9a83a311040f12c782c8a031badb3d8" podNamespace="kube-system" podName="kube-controller-manager-srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.455273 kubelet[2537]: I1213 04:52:25.454920 2537 topology_manager.go:215] "Topology Admit Handler" podUID="d897aae3aff19f07db301467cd965b2c" podNamespace="kube-system" podName="kube-scheduler-srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.477633 kubelet[2537]: E1213 04:52:25.477587 2537 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.18.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-wy7pj.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.18.230:6443: connect: connection refused" interval="400ms" Dec 13 04:52:25.591720 kubelet[2537]: I1213 04:52:25.591622 2537 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb37615044e16e5e6e16ae5548e7a228-ca-certs\") pod \"kube-apiserver-srv-wy7pj.gb1.brightbox.com\" (UID: \"bb37615044e16e5e6e16ae5548e7a228\") " pod="kube-system/kube-apiserver-srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.591720 kubelet[2537]: I1213 04:52:25.591704 2537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9a83a311040f12c782c8a031badb3d8-ca-certs\") pod \"kube-controller-manager-srv-wy7pj.gb1.brightbox.com\" (UID: \"d9a83a311040f12c782c8a031badb3d8\") " pod="kube-system/kube-controller-manager-srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.591720 kubelet[2537]: I1213 04:52:25.591746 2537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d9a83a311040f12c782c8a031badb3d8-flexvolume-dir\") pod \"kube-controller-manager-srv-wy7pj.gb1.brightbox.com\" (UID: \"d9a83a311040f12c782c8a031badb3d8\") " pod="kube-system/kube-controller-manager-srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.592126 kubelet[2537]: I1213 04:52:25.591804 2537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9a83a311040f12c782c8a031badb3d8-k8s-certs\") pod \"kube-controller-manager-srv-wy7pj.gb1.brightbox.com\" (UID: \"d9a83a311040f12c782c8a031badb3d8\") " pod="kube-system/kube-controller-manager-srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.592126 kubelet[2537]: I1213 04:52:25.591848 2537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9a83a311040f12c782c8a031badb3d8-kubeconfig\") pod \"kube-controller-manager-srv-wy7pj.gb1.brightbox.com\" (UID: 
\"d9a83a311040f12c782c8a031badb3d8\") " pod="kube-system/kube-controller-manager-srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.592126 kubelet[2537]: I1213 04:52:25.591883 2537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9a83a311040f12c782c8a031badb3d8-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-wy7pj.gb1.brightbox.com\" (UID: \"d9a83a311040f12c782c8a031badb3d8\") " pod="kube-system/kube-controller-manager-srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.592126 kubelet[2537]: I1213 04:52:25.591914 2537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d897aae3aff19f07db301467cd965b2c-kubeconfig\") pod \"kube-scheduler-srv-wy7pj.gb1.brightbox.com\" (UID: \"d897aae3aff19f07db301467cd965b2c\") " pod="kube-system/kube-scheduler-srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.592126 kubelet[2537]: I1213 04:52:25.591943 2537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb37615044e16e5e6e16ae5548e7a228-k8s-certs\") pod \"kube-apiserver-srv-wy7pj.gb1.brightbox.com\" (UID: \"bb37615044e16e5e6e16ae5548e7a228\") " pod="kube-system/kube-apiserver-srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.592356 kubelet[2537]: I1213 04:52:25.591979 2537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb37615044e16e5e6e16ae5548e7a228-usr-share-ca-certificates\") pod \"kube-apiserver-srv-wy7pj.gb1.brightbox.com\" (UID: \"bb37615044e16e5e6e16ae5548e7a228\") " pod="kube-system/kube-apiserver-srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.596672 kubelet[2537]: I1213 04:52:25.596161 2537 kubelet_node_status.go:73] "Attempting to register node" 
node="srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.596672 kubelet[2537]: E1213 04:52:25.596597 2537 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.18.230:6443/api/v1/nodes\": dial tcp 10.244.18.230:6443: connect: connection refused" node="srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:25.763376 containerd[1627]: time="2024-12-13T04:52:25.763144898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-wy7pj.gb1.brightbox.com,Uid:bb37615044e16e5e6e16ae5548e7a228,Namespace:kube-system,Attempt:0,}" Dec 13 04:52:25.773174 containerd[1627]: time="2024-12-13T04:52:25.773124683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-wy7pj.gb1.brightbox.com,Uid:d9a83a311040f12c782c8a031badb3d8,Namespace:kube-system,Attempt:0,}" Dec 13 04:52:25.774903 containerd[1627]: time="2024-12-13T04:52:25.774610335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-wy7pj.gb1.brightbox.com,Uid:d897aae3aff19f07db301467cd965b2c,Namespace:kube-system,Attempt:0,}" Dec 13 04:52:25.878804 kubelet[2537]: E1213 04:52:25.878731 2537 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.18.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-wy7pj.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.18.230:6443: connect: connection refused" interval="800ms" Dec 13 04:52:26.000398 kubelet[2537]: I1213 04:52:25.999874 2537 kubelet_node_status.go:73] "Attempting to register node" node="srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:26.000398 kubelet[2537]: E1213 04:52:26.000346 2537 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.18.230:6443/api/v1/nodes\": dial tcp 10.244.18.230:6443: connect: connection refused" node="srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:26.274024 kubelet[2537]: W1213 04:52:26.273904 2537 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.244.18.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:26.274024 kubelet[2537]: E1213 04:52:26.273975 2537 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.18.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:26.274517 kubelet[2537]: W1213 04:52:26.274431 2537 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.244.18.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:26.274517 kubelet[2537]: E1213 04:52:26.274472 2537 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.18.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:26.353651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount918906267.mount: Deactivated successfully. 
Dec 13 04:52:26.360199 containerd[1627]: time="2024-12-13T04:52:26.360139237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 04:52:26.362278 containerd[1627]: time="2024-12-13T04:52:26.362199189Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 04:52:26.363912 containerd[1627]: time="2024-12-13T04:52:26.363864369Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 04:52:26.368905 containerd[1627]: time="2024-12-13T04:52:26.368846839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 04:52:26.370632 containerd[1627]: time="2024-12-13T04:52:26.369965201Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 04:52:26.372651 containerd[1627]: time="2024-12-13T04:52:26.371927194Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 04:52:26.374112 containerd[1627]: time="2024-12-13T04:52:26.374046771Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 04:52:26.378065 containerd[1627]: time="2024-12-13T04:52:26.377744035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 614.408025ms" Dec 13 04:52:26.379552 containerd[1627]: time="2024-12-13T04:52:26.378741747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 04:52:26.382091 containerd[1627]: time="2024-12-13T04:52:26.382050124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 608.831813ms" Dec 13 04:52:26.384182 containerd[1627]: time="2024-12-13T04:52:26.384129571Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.449766ms" Dec 13 04:52:26.582478 containerd[1627]: time="2024-12-13T04:52:26.581692747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:52:26.582478 containerd[1627]: time="2024-12-13T04:52:26.581801760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:52:26.582478 containerd[1627]: time="2024-12-13T04:52:26.581852240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:52:26.582478 containerd[1627]: time="2024-12-13T04:52:26.582063821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:52:26.591336 containerd[1627]: time="2024-12-13T04:52:26.587196201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:52:26.591336 containerd[1627]: time="2024-12-13T04:52:26.589942573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:52:26.591336 containerd[1627]: time="2024-12-13T04:52:26.589968926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:52:26.591336 containerd[1627]: time="2024-12-13T04:52:26.590115051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:52:26.592359 containerd[1627]: time="2024-12-13T04:52:26.592254146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:52:26.593449 containerd[1627]: time="2024-12-13T04:52:26.593386574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:52:26.593614 containerd[1627]: time="2024-12-13T04:52:26.593529627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:52:26.594037 containerd[1627]: time="2024-12-13T04:52:26.593908023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:52:26.681090 kubelet[2537]: E1213 04:52:26.680998 2537 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.18.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-wy7pj.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.18.230:6443: connect: connection refused" interval="1.6s" Dec 13 04:52:26.719829 kubelet[2537]: W1213 04:52:26.719541 2537 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.244.18.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:26.719829 kubelet[2537]: E1213 04:52:26.719621 2537 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.18.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:26.735465 kubelet[2537]: W1213 04:52:26.735362 2537 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.244.18.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-wy7pj.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:26.735465 kubelet[2537]: E1213 04:52:26.735478 2537 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.18.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-wy7pj.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:26.740965 containerd[1627]: time="2024-12-13T04:52:26.740892453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-wy7pj.gb1.brightbox.com,Uid:bb37615044e16e5e6e16ae5548e7a228,Namespace:kube-system,Attempt:0,} 
returns sandbox id \"80d1f3962e6835075d2ecd683b9eedd282edea1078f14064897f08a8aece782d\"" Dec 13 04:52:26.748722 containerd[1627]: time="2024-12-13T04:52:26.748570635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-wy7pj.gb1.brightbox.com,Uid:d9a83a311040f12c782c8a031badb3d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4883fa1ceb924194f2f7ea34931a1065f28f0a6e81564c7e7b6455efa51facc\"" Dec 13 04:52:26.758202 containerd[1627]: time="2024-12-13T04:52:26.758059030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-wy7pj.gb1.brightbox.com,Uid:d897aae3aff19f07db301467cd965b2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9669d287ebc2f6876c67f3150ce9c53b5d84699af375b0e91170c7c035410441\"" Dec 13 04:52:26.761795 containerd[1627]: time="2024-12-13T04:52:26.761137287Z" level=info msg="CreateContainer within sandbox \"80d1f3962e6835075d2ecd683b9eedd282edea1078f14064897f08a8aece782d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 04:52:26.763139 containerd[1627]: time="2024-12-13T04:52:26.762867957Z" level=info msg="CreateContainer within sandbox \"f4883fa1ceb924194f2f7ea34931a1065f28f0a6e81564c7e7b6455efa51facc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 04:52:26.767857 containerd[1627]: time="2024-12-13T04:52:26.767811167Z" level=info msg="CreateContainer within sandbox \"9669d287ebc2f6876c67f3150ce9c53b5d84699af375b0e91170c7c035410441\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 04:52:26.799878 containerd[1627]: time="2024-12-13T04:52:26.799662605Z" level=info msg="CreateContainer within sandbox \"80d1f3962e6835075d2ecd683b9eedd282edea1078f14064897f08a8aece782d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1fca175872faef3f12cd0d3b7f6cd0a4c3b9e39e5844f49e572ed474fc084f7b\"" Dec 13 04:52:26.800924 containerd[1627]: time="2024-12-13T04:52:26.800746846Z" 
level=info msg="StartContainer for \"1fca175872faef3f12cd0d3b7f6cd0a4c3b9e39e5844f49e572ed474fc084f7b\"" Dec 13 04:52:26.804538 kubelet[2537]: I1213 04:52:26.804398 2537 kubelet_node_status.go:73] "Attempting to register node" node="srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:26.805552 kubelet[2537]: E1213 04:52:26.805267 2537 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.18.230:6443/api/v1/nodes\": dial tcp 10.244.18.230:6443: connect: connection refused" node="srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:26.807218 containerd[1627]: time="2024-12-13T04:52:26.807107489Z" level=info msg="CreateContainer within sandbox \"9669d287ebc2f6876c67f3150ce9c53b5d84699af375b0e91170c7c035410441\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2a8cc011bea14d89f75c03b4ff0a077f74897f9e19a1f60daff1e25607bb6b95\"" Dec 13 04:52:26.808256 containerd[1627]: time="2024-12-13T04:52:26.808208267Z" level=info msg="StartContainer for \"2a8cc011bea14d89f75c03b4ff0a077f74897f9e19a1f60daff1e25607bb6b95\"" Dec 13 04:52:26.812789 containerd[1627]: time="2024-12-13T04:52:26.812620556Z" level=info msg="CreateContainer within sandbox \"f4883fa1ceb924194f2f7ea34931a1065f28f0a6e81564c7e7b6455efa51facc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b17608410cfdb36de833b2acee89fb094c42b1ddbaf8dfc525b58717c153cd6b\"" Dec 13 04:52:26.814853 containerd[1627]: time="2024-12-13T04:52:26.813376511Z" level=info msg="StartContainer for \"b17608410cfdb36de833b2acee89fb094c42b1ddbaf8dfc525b58717c153cd6b\"" Dec 13 04:52:26.995402 containerd[1627]: time="2024-12-13T04:52:26.995338352Z" level=info msg="StartContainer for \"1fca175872faef3f12cd0d3b7f6cd0a4c3b9e39e5844f49e572ed474fc084f7b\" returns successfully" Dec 13 04:52:26.998428 containerd[1627]: time="2024-12-13T04:52:26.996019685Z" level=info msg="StartContainer for \"2a8cc011bea14d89f75c03b4ff0a077f74897f9e19a1f60daff1e25607bb6b95\" returns 
successfully" Dec 13 04:52:27.012533 containerd[1627]: time="2024-12-13T04:52:27.011805484Z" level=info msg="StartContainer for \"b17608410cfdb36de833b2acee89fb094c42b1ddbaf8dfc525b58717c153cd6b\" returns successfully" Dec 13 04:52:27.357569 kubelet[2537]: E1213 04:52:27.357314 2537 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.18.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.18.230:6443: connect: connection refused Dec 13 04:52:28.410889 kubelet[2537]: I1213 04:52:28.410308 2537 kubelet_node_status.go:73] "Attempting to register node" node="srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:29.748743 kubelet[2537]: E1213 04:52:29.748673 2537 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-wy7pj.gb1.brightbox.com\" not found" node="srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:29.817415 kubelet[2537]: I1213 04:52:29.817352 2537 kubelet_node_status.go:76] "Successfully registered node" node="srv-wy7pj.gb1.brightbox.com" Dec 13 04:52:30.220106 kubelet[2537]: I1213 04:52:30.220014 2537 apiserver.go:52] "Watching apiserver" Dec 13 04:52:30.280472 kubelet[2537]: I1213 04:52:30.280367 2537 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 04:52:32.565259 systemd[1]: Reloading requested from client PID 2813 ('systemctl') (unit session-11.scope)... Dec 13 04:52:32.565792 systemd[1]: Reloading... Dec 13 04:52:32.697843 zram_generator::config[2848]: No configuration found. Dec 13 04:52:32.929850 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 04:52:33.045669 systemd[1]: Reloading finished in 479 ms. 
Dec 13 04:52:33.104097 kubelet[2537]: I1213 04:52:33.104002 2537 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 04:52:33.104142 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 04:52:33.120718 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 04:52:33.121473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 04:52:33.132151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 04:52:33.398990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 04:52:33.413731 (kubelet)[2926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 04:52:33.606971 sudo[2938]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 04:52:33.607608 sudo[2938]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 13 04:52:33.627295 kubelet[2926]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 04:52:33.627295 kubelet[2926]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 04:52:33.627295 kubelet[2926]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 04:52:33.627965 kubelet[2926]: I1213 04:52:33.627439 2926 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 04:52:33.635071 kubelet[2926]: I1213 04:52:33.635030 2926 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 04:52:33.635071 kubelet[2926]: I1213 04:52:33.635069 2926 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 04:52:33.635660 kubelet[2926]: I1213 04:52:33.635636 2926 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 04:52:33.640993 kubelet[2926]: I1213 04:52:33.640964 2926 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 04:52:33.647427 kubelet[2926]: I1213 04:52:33.645820 2926 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 04:52:33.671626 kubelet[2926]: I1213 04:52:33.671472 2926 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 04:52:33.674559 kubelet[2926]: I1213 04:52:33.674518 2926 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 04:52:33.674945 kubelet[2926]: I1213 04:52:33.674899 2926 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 04:52:33.675212 kubelet[2926]: I1213 04:52:33.675013 2926 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 04:52:33.675212 kubelet[2926]: I1213 04:52:33.675053 2926 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 04:52:33.675212 kubelet[2926]: I1213 04:52:33.675154 2926 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 04:52:33.676546 kubelet[2926]: I1213 04:52:33.675992 2926 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 04:52:33.676546 kubelet[2926]: I1213 04:52:33.676090 2926 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 04:52:33.676546 kubelet[2926]: I1213 04:52:33.676167 2926 kubelet.go:312] "Adding apiserver pod source"
Dec 13 04:52:33.676546 kubelet[2926]: I1213 04:52:33.676213 2926 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 04:52:33.679983 kubelet[2926]: I1213 04:52:33.679380 2926 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 04:52:33.679983 kubelet[2926]: I1213 04:52:33.679838 2926 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 04:52:33.683324 kubelet[2926]: I1213 04:52:33.683300 2926 server.go:1256] "Started kubelet"
Dec 13 04:52:33.686358 kubelet[2926]: I1213 04:52:33.686315 2926 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 04:52:33.700000 kubelet[2926]: I1213 04:52:33.698242 2926 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 04:52:33.700000 kubelet[2926]: I1213 04:52:33.699638 2926 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 04:52:33.710486 kubelet[2926]: I1213 04:52:33.703881 2926 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 04:52:33.710486 kubelet[2926]: I1213 04:52:33.704207 2926 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 04:52:33.713779 kubelet[2926]: I1213 04:52:33.712901 2926 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 04:52:33.713779 kubelet[2926]: I1213 04:52:33.713479 2926 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 04:52:33.713779 kubelet[2926]: I1213 04:52:33.713748 2926 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 04:52:33.732339 kubelet[2926]: I1213 04:52:33.729454 2926 factory.go:221] Registration of the systemd container factory successfully
Dec 13 04:52:33.732339 kubelet[2926]: I1213 04:52:33.729602 2926 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 04:52:33.747827 kubelet[2926]: I1213 04:52:33.745950 2926 factory.go:221] Registration of the containerd container factory successfully
Dec 13 04:52:33.755592 kubelet[2926]: E1213 04:52:33.754134 2926 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 04:52:33.787109 kubelet[2926]: I1213 04:52:33.787015 2926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 04:52:33.792606 kubelet[2926]: I1213 04:52:33.791815 2926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 04:52:33.792606 kubelet[2926]: I1213 04:52:33.791874 2926 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 04:52:33.792606 kubelet[2926]: I1213 04:52:33.791912 2926 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 04:52:33.792606 kubelet[2926]: E1213 04:52:33.791999 2926 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 04:52:33.841188 kubelet[2926]: I1213 04:52:33.839208 2926 kubelet_node_status.go:73] "Attempting to register node" node="srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:33.860748 kubelet[2926]: I1213 04:52:33.860156 2926 kubelet_node_status.go:112] "Node was previously registered" node="srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:33.860748 kubelet[2926]: I1213 04:52:33.860550 2926 kubelet_node_status.go:76] "Successfully registered node" node="srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:33.893074 kubelet[2926]: E1213 04:52:33.893017 2926 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 04:52:33.911917 kubelet[2926]: I1213 04:52:33.911878 2926 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 04:52:33.911917 kubelet[2926]: I1213 04:52:33.911914 2926 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 04:52:33.912123 kubelet[2926]: I1213 04:52:33.911948 2926 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 04:52:33.912794 kubelet[2926]: I1213 04:52:33.912235 2926 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 04:52:33.912794 kubelet[2926]: I1213 04:52:33.912285 2926 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 04:52:33.912794 kubelet[2926]: I1213 04:52:33.912309 2926 policy_none.go:49] "None policy: Start"
Dec 13 04:52:33.913427 kubelet[2926]: I1213 04:52:33.913319 2926 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 04:52:33.913551 kubelet[2926]: I1213 04:52:33.913500 2926 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 04:52:33.914064 kubelet[2926]: I1213 04:52:33.913714 2926 state_mem.go:75] "Updated machine memory state"
Dec 13 04:52:33.916500 kubelet[2926]: I1213 04:52:33.916234 2926 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 04:52:33.918342 kubelet[2926]: I1213 04:52:33.918317 2926 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 04:52:34.094791 kubelet[2926]: I1213 04:52:34.094053 2926 topology_manager.go:215] "Topology Admit Handler" podUID="bb37615044e16e5e6e16ae5548e7a228" podNamespace="kube-system" podName="kube-apiserver-srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:34.094791 kubelet[2926]: I1213 04:52:34.094373 2926 topology_manager.go:215] "Topology Admit Handler" podUID="d9a83a311040f12c782c8a031badb3d8" podNamespace="kube-system" podName="kube-controller-manager-srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:34.094791 kubelet[2926]: I1213 04:52:34.094491 2926 topology_manager.go:215] "Topology Admit Handler" podUID="d897aae3aff19f07db301467cd965b2c" podNamespace="kube-system" podName="kube-scheduler-srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:34.119063 kubelet[2926]: I1213 04:52:34.119014 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb37615044e16e5e6e16ae5548e7a228-ca-certs\") pod \"kube-apiserver-srv-wy7pj.gb1.brightbox.com\" (UID: \"bb37615044e16e5e6e16ae5548e7a228\") " pod="kube-system/kube-apiserver-srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:34.123847 kubelet[2926]: I1213 04:52:34.121890 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb37615044e16e5e6e16ae5548e7a228-k8s-certs\") pod \"kube-apiserver-srv-wy7pj.gb1.brightbox.com\" (UID: \"bb37615044e16e5e6e16ae5548e7a228\") " pod="kube-system/kube-apiserver-srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:34.123847 kubelet[2926]: W1213 04:52:34.119700 2926 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 04:52:34.123847 kubelet[2926]: I1213 04:52:34.121956 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9a83a311040f12c782c8a031badb3d8-ca-certs\") pod \"kube-controller-manager-srv-wy7pj.gb1.brightbox.com\" (UID: \"d9a83a311040f12c782c8a031badb3d8\") " pod="kube-system/kube-controller-manager-srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:34.123847 kubelet[2926]: I1213 04:52:34.121994 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9a83a311040f12c782c8a031badb3d8-kubeconfig\") pod \"kube-controller-manager-srv-wy7pj.gb1.brightbox.com\" (UID: \"d9a83a311040f12c782c8a031badb3d8\") " pod="kube-system/kube-controller-manager-srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:34.123847 kubelet[2926]: I1213 04:52:34.122050 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb37615044e16e5e6e16ae5548e7a228-usr-share-ca-certificates\") pod \"kube-apiserver-srv-wy7pj.gb1.brightbox.com\" (UID: \"bb37615044e16e5e6e16ae5548e7a228\") " pod="kube-system/kube-apiserver-srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:34.124242 kubelet[2926]: I1213 04:52:34.122091 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d9a83a311040f12c782c8a031badb3d8-flexvolume-dir\") pod \"kube-controller-manager-srv-wy7pj.gb1.brightbox.com\" (UID: \"d9a83a311040f12c782c8a031badb3d8\") " pod="kube-system/kube-controller-manager-srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:34.124242 kubelet[2926]: I1213 04:52:34.122133 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9a83a311040f12c782c8a031badb3d8-k8s-certs\") pod \"kube-controller-manager-srv-wy7pj.gb1.brightbox.com\" (UID: \"d9a83a311040f12c782c8a031badb3d8\") " pod="kube-system/kube-controller-manager-srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:34.124242 kubelet[2926]: I1213 04:52:34.122170 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9a83a311040f12c782c8a031badb3d8-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-wy7pj.gb1.brightbox.com\" (UID: \"d9a83a311040f12c782c8a031badb3d8\") " pod="kube-system/kube-controller-manager-srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:34.124242 kubelet[2926]: I1213 04:52:34.122203 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d897aae3aff19f07db301467cd965b2c-kubeconfig\") pod \"kube-scheduler-srv-wy7pj.gb1.brightbox.com\" (UID: \"d897aae3aff19f07db301467cd965b2c\") " pod="kube-system/kube-scheduler-srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:34.125602 kubelet[2926]: W1213 04:52:34.125309 2926 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 04:52:34.125602 kubelet[2926]: W1213 04:52:34.125328 2926 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 04:52:34.518967 sudo[2938]: pam_unix(sudo:session): session closed for user root
Dec 13 04:52:34.689068 kubelet[2926]: I1213 04:52:34.689032 2926 apiserver.go:52] "Watching apiserver"
Dec 13 04:52:34.714219 kubelet[2926]: I1213 04:52:34.714052 2926 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 04:52:34.865902 kubelet[2926]: W1213 04:52:34.864837 2926 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 04:52:34.867657 kubelet[2926]: E1213 04:52:34.867297 2926 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-wy7pj.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-wy7pj.gb1.brightbox.com"
Dec 13 04:52:34.899832 kubelet[2926]: I1213 04:52:34.899210 2926 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-wy7pj.gb1.brightbox.com" podStartSLOduration=0.899109705 podStartE2EDuration="899.109705ms" podCreationTimestamp="2024-12-13 04:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:52:34.886097052 +0000 UTC m=+1.441415587" watchObservedRunningTime="2024-12-13 04:52:34.899109705 +0000 UTC m=+1.454428212"
Dec 13 04:52:34.915788 kubelet[2926]: I1213 04:52:34.915463 2926 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-wy7pj.gb1.brightbox.com" podStartSLOduration=0.915399675 podStartE2EDuration="915.399675ms" podCreationTimestamp="2024-12-13 04:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:52:34.900903629 +0000 UTC m=+1.456222140" watchObservedRunningTime="2024-12-13 04:52:34.915399675 +0000 UTC m=+1.470718229"
Dec 13 04:52:34.930465 kubelet[2926]: I1213 04:52:34.929746 2926 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-wy7pj.gb1.brightbox.com" podStartSLOduration=0.929689259 podStartE2EDuration="929.689259ms" podCreationTimestamp="2024-12-13 04:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:52:34.9163708 +0000 UTC m=+1.471689332" watchObservedRunningTime="2024-12-13 04:52:34.929689259 +0000 UTC m=+1.485007771"
Dec 13 04:52:36.915693 sudo[1942]: pam_unix(sudo:session): session closed for user root
Dec 13 04:52:37.064384 sshd[1938]: pam_unix(sshd:session): session closed for user core
Dec 13 04:52:37.069608 systemd[1]: sshd@8-10.244.18.230:22-147.75.109.163:35220.service: Deactivated successfully.
Dec 13 04:52:37.075710 systemd-logind[1609]: Session 11 logged out. Waiting for processes to exit.
Dec 13 04:52:37.077262 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 04:52:37.079391 systemd-logind[1609]: Removed session 11.
Dec 13 04:52:45.260092 kubelet[2926]: I1213 04:52:45.259782 2926 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 04:52:45.263312 containerd[1627]: time="2024-12-13T04:52:45.262884297Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 04:52:45.264077 kubelet[2926]: I1213 04:52:45.263832 2926 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 04:52:45.776622 kubelet[2926]: I1213 04:52:45.775361 2926 topology_manager.go:215] "Topology Admit Handler" podUID="cd98ab6d-4a31-400e-9a1c-503114be90cd" podNamespace="kube-system" podName="kube-proxy-p78zt"
Dec 13 04:52:45.814360 kubelet[2926]: I1213 04:52:45.814246 2926 topology_manager.go:215] "Topology Admit Handler" podUID="ca7bef37-ff4a-44c8-ab45-a3f985966c6b" podNamespace="kube-system" podName="cilium-h8j2c"
Dec 13 04:52:45.940285 kubelet[2926]: I1213 04:52:45.940177 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd98ab6d-4a31-400e-9a1c-503114be90cd-lib-modules\") pod \"kube-proxy-p78zt\" (UID: \"cd98ab6d-4a31-400e-9a1c-503114be90cd\") " pod="kube-system/kube-proxy-p78zt"
Dec 13 04:52:45.941247 kubelet[2926]: I1213 04:52:45.940247 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-xtables-lock\") pod \"cilium-h8j2c\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") " pod="kube-system/cilium-h8j2c"
Dec 13 04:52:45.941247 kubelet[2926]: I1213 04:52:45.940511 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-hubble-tls\") pod \"cilium-h8j2c\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") " pod="kube-system/cilium-h8j2c"
Dec 13 04:52:45.941247 kubelet[2926]: I1213 04:52:45.940559 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptsdt\" (UniqueName: \"kubernetes.io/projected/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-kube-api-access-ptsdt\") pod \"cilium-h8j2c\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") " pod="kube-system/cilium-h8j2c"
Dec 13 04:52:45.941247 kubelet[2926]: I1213 04:52:45.940607 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-hostproc\") pod \"cilium-h8j2c\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") " pod="kube-system/cilium-h8j2c"
Dec 13 04:52:45.941247 kubelet[2926]: I1213 04:52:45.940642 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cilium-cgroup\") pod \"cilium-h8j2c\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") " pod="kube-system/cilium-h8j2c"
Dec 13 04:52:45.941247 kubelet[2926]: I1213 04:52:45.940707 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-lib-modules\") pod \"cilium-h8j2c\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") " pod="kube-system/cilium-h8j2c"
Dec 13 04:52:45.941680 kubelet[2926]: I1213 04:52:45.940743 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cilium-run\") pod \"cilium-h8j2c\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") " pod="kube-system/cilium-h8j2c"
Dec 13 04:52:45.941680 kubelet[2926]: I1213 04:52:45.940803 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-host-proc-sys-net\") pod \"cilium-h8j2c\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") " pod="kube-system/cilium-h8j2c"
Dec 13 04:52:45.941680 kubelet[2926]: I1213 04:52:45.940841 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzwff\" (UniqueName: \"kubernetes.io/projected/cd98ab6d-4a31-400e-9a1c-503114be90cd-kube-api-access-bzwff\") pod \"kube-proxy-p78zt\" (UID: \"cd98ab6d-4a31-400e-9a1c-503114be90cd\") " pod="kube-system/kube-proxy-p78zt"
Dec 13 04:52:45.941680 kubelet[2926]: I1213 04:52:45.940870 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cni-path\") pod \"cilium-h8j2c\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") " pod="kube-system/cilium-h8j2c"
Dec 13 04:52:45.941680 kubelet[2926]: I1213 04:52:45.940912 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd98ab6d-4a31-400e-9a1c-503114be90cd-xtables-lock\") pod \"kube-proxy-p78zt\" (UID: \"cd98ab6d-4a31-400e-9a1c-503114be90cd\") " pod="kube-system/kube-proxy-p78zt"
Dec 13 04:52:45.941680 kubelet[2926]: I1213 04:52:45.940946 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-bpf-maps\") pod \"cilium-h8j2c\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") " pod="kube-system/cilium-h8j2c"
Dec 13 04:52:45.942002 kubelet[2926]: I1213 04:52:45.941008 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cilium-config-path\") pod \"cilium-h8j2c\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") " pod="kube-system/cilium-h8j2c"
Dec 13 04:52:45.942002 kubelet[2926]: I1213 04:52:45.941045 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-etc-cni-netd\") pod \"cilium-h8j2c\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") " pod="kube-system/cilium-h8j2c"
Dec 13 04:52:45.942002 kubelet[2926]: I1213 04:52:45.941081 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-host-proc-sys-kernel\") pod \"cilium-h8j2c\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") " pod="kube-system/cilium-h8j2c"
Dec 13 04:52:45.942002 kubelet[2926]: I1213 04:52:45.941123 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cd98ab6d-4a31-400e-9a1c-503114be90cd-kube-proxy\") pod \"kube-proxy-p78zt\" (UID: \"cd98ab6d-4a31-400e-9a1c-503114be90cd\") " pod="kube-system/kube-proxy-p78zt"
Dec 13 04:52:45.942002 kubelet[2926]: I1213 04:52:45.941175 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-clustermesh-secrets\") pod \"cilium-h8j2c\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") " pod="kube-system/cilium-h8j2c"
Dec 13 04:52:46.091160 kubelet[2926]: E1213 04:52:46.089123 2926 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 04:52:46.091160 kubelet[2926]: E1213 04:52:46.089218 2926 projected.go:200] Error preparing data for projected volume kube-api-access-ptsdt for pod kube-system/cilium-h8j2c: configmap "kube-root-ca.crt" not found
Dec 13 04:52:46.091160 kubelet[2926]: E1213 04:52:46.089124 2926 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 04:52:46.091160 kubelet[2926]: E1213 04:52:46.089298 2926 projected.go:200] Error preparing data for projected volume kube-api-access-bzwff for pod kube-system/kube-proxy-p78zt: configmap "kube-root-ca.crt" not found
Dec 13 04:52:46.091160 kubelet[2926]: E1213 04:52:46.089424 2926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-kube-api-access-ptsdt podName:ca7bef37-ff4a-44c8-ab45-a3f985966c6b nodeName:}" failed. No retries permitted until 2024-12-13 04:52:46.589363341 +0000 UTC m=+13.144681844 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ptsdt" (UniqueName: "kubernetes.io/projected/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-kube-api-access-ptsdt") pod "cilium-h8j2c" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b") : configmap "kube-root-ca.crt" not found
Dec 13 04:52:46.091160 kubelet[2926]: E1213 04:52:46.089453 2926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd98ab6d-4a31-400e-9a1c-503114be90cd-kube-api-access-bzwff podName:cd98ab6d-4a31-400e-9a1c-503114be90cd nodeName:}" failed. No retries permitted until 2024-12-13 04:52:46.589440982 +0000 UTC m=+13.144759486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bzwff" (UniqueName: "kubernetes.io/projected/cd98ab6d-4a31-400e-9a1c-503114be90cd-kube-api-access-bzwff") pod "kube-proxy-p78zt" (UID: "cd98ab6d-4a31-400e-9a1c-503114be90cd") : configmap "kube-root-ca.crt" not found
Dec 13 04:52:46.257459 kubelet[2926]: I1213 04:52:46.257096 2926 topology_manager.go:215] "Topology Admit Handler" podUID="f8759128-e5eb-4636-8732-0976cf9da43d" podNamespace="kube-system" podName="cilium-operator-5cc964979-xsh5n"
Dec 13 04:52:46.345202 kubelet[2926]: I1213 04:52:46.345000 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7kwx\" (UniqueName: \"kubernetes.io/projected/f8759128-e5eb-4636-8732-0976cf9da43d-kube-api-access-x7kwx\") pod \"cilium-operator-5cc964979-xsh5n\" (UID: \"f8759128-e5eb-4636-8732-0976cf9da43d\") " pod="kube-system/cilium-operator-5cc964979-xsh5n"
Dec 13 04:52:46.345202 kubelet[2926]: I1213 04:52:46.345114 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8759128-e5eb-4636-8732-0976cf9da43d-cilium-config-path\") pod \"cilium-operator-5cc964979-xsh5n\" (UID: \"f8759128-e5eb-4636-8732-0976cf9da43d\") " pod="kube-system/cilium-operator-5cc964979-xsh5n"
Dec 13 04:52:46.565418 containerd[1627]: time="2024-12-13T04:52:46.565349761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-xsh5n,Uid:f8759128-e5eb-4636-8732-0976cf9da43d,Namespace:kube-system,Attempt:0,}"
Dec 13 04:52:46.622215 containerd[1627]: time="2024-12-13T04:52:46.621887799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:52:46.622215 containerd[1627]: time="2024-12-13T04:52:46.622029812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:52:46.622215 containerd[1627]: time="2024-12-13T04:52:46.622050720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:52:46.623043 containerd[1627]: time="2024-12-13T04:52:46.622936403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:52:46.692754 containerd[1627]: time="2024-12-13T04:52:46.692696351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p78zt,Uid:cd98ab6d-4a31-400e-9a1c-503114be90cd,Namespace:kube-system,Attempt:0,}"
Dec 13 04:52:46.744425 containerd[1627]: time="2024-12-13T04:52:46.743796719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-xsh5n,Uid:f8759128-e5eb-4636-8732-0976cf9da43d,Namespace:kube-system,Attempt:0,} returns sandbox id \"13882a88b4b44d1850edbe052b4668bbb26d87a0f28f2bb7e3de417c1874b806\""
Dec 13 04:52:46.745737 containerd[1627]: time="2024-12-13T04:52:46.745701455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h8j2c,Uid:ca7bef37-ff4a-44c8-ab45-a3f985966c6b,Namespace:kube-system,Attempt:0,}"
Dec 13 04:52:46.750005 containerd[1627]: time="2024-12-13T04:52:46.749952381Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 04:52:46.755597 containerd[1627]: time="2024-12-13T04:52:46.754338889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:52:46.755597 containerd[1627]: time="2024-12-13T04:52:46.754459485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:52:46.755597 containerd[1627]: time="2024-12-13T04:52:46.754489588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:52:46.755597 containerd[1627]: time="2024-12-13T04:52:46.754647911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:52:46.831863 containerd[1627]: time="2024-12-13T04:52:46.830568017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:52:46.831863 containerd[1627]: time="2024-12-13T04:52:46.830659118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:52:46.831863 containerd[1627]: time="2024-12-13T04:52:46.830676763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:52:46.836557 containerd[1627]: time="2024-12-13T04:52:46.834569324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:52:46.836557 containerd[1627]: time="2024-12-13T04:52:46.835180478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p78zt,Uid:cd98ab6d-4a31-400e-9a1c-503114be90cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fec58350be4506277c09cb24a4c65ca1bd9d14a706a975e6335c54704198828\""
Dec 13 04:52:46.843658 containerd[1627]: time="2024-12-13T04:52:46.843600367Z" level=info msg="CreateContainer within sandbox \"7fec58350be4506277c09cb24a4c65ca1bd9d14a706a975e6335c54704198828\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 04:52:46.880140 containerd[1627]: time="2024-12-13T04:52:46.880004444Z" level=info msg="CreateContainer within sandbox \"7fec58350be4506277c09cb24a4c65ca1bd9d14a706a975e6335c54704198828\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a348fed5d5962110fbe1ef628d79c724b4a629fd8859c22d97c6c393061b45d8\""
Dec 13 04:52:46.882449 containerd[1627]: time="2024-12-13T04:52:46.882331858Z" level=info msg="StartContainer for \"a348fed5d5962110fbe1ef628d79c724b4a629fd8859c22d97c6c393061b45d8\""
Dec 13 04:52:46.910093 containerd[1627]: time="2024-12-13T04:52:46.910026085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h8j2c,Uid:ca7bef37-ff4a-44c8-ab45-a3f985966c6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e\""
Dec 13 04:52:46.988218 containerd[1627]: time="2024-12-13T04:52:46.988155465Z" level=info msg="StartContainer for \"a348fed5d5962110fbe1ef628d79c724b4a629fd8859c22d97c6c393061b45d8\" returns successfully"
Dec 13 04:52:48.817094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2311958270.mount: Deactivated successfully.
Dec 13 04:52:49.819560 containerd[1627]: time="2024-12-13T04:52:49.818118196Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:52:49.821627 containerd[1627]: time="2024-12-13T04:52:49.821578124Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906577" Dec 13 04:52:49.822891 containerd[1627]: time="2024-12-13T04:52:49.822849769Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:52:49.826649 containerd[1627]: time="2024-12-13T04:52:49.826599386Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.076577817s" Dec 13 04:52:49.826814 containerd[1627]: time="2024-12-13T04:52:49.826785147Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 04:52:49.829634 containerd[1627]: time="2024-12-13T04:52:49.829600826Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 04:52:49.836188 containerd[1627]: time="2024-12-13T04:52:49.836148625Z" level=info msg="CreateContainer within sandbox 
\"13882a88b4b44d1850edbe052b4668bbb26d87a0f28f2bb7e3de417c1874b806\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 04:52:49.861161 containerd[1627]: time="2024-12-13T04:52:49.861079510Z" level=info msg="CreateContainer within sandbox \"13882a88b4b44d1850edbe052b4668bbb26d87a0f28f2bb7e3de417c1874b806\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7\"" Dec 13 04:52:49.863205 containerd[1627]: time="2024-12-13T04:52:49.862370484Z" level=info msg="StartContainer for \"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7\"" Dec 13 04:52:49.960832 containerd[1627]: time="2024-12-13T04:52:49.960663317Z" level=info msg="StartContainer for \"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7\" returns successfully" Dec 13 04:52:51.060896 kubelet[2926]: I1213 04:52:51.060568 2926 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-p78zt" podStartSLOduration=6.060477875 podStartE2EDuration="6.060477875s" podCreationTimestamp="2024-12-13 04:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:52:47.909091865 +0000 UTC m=+14.464410388" watchObservedRunningTime="2024-12-13 04:52:51.060477875 +0000 UTC m=+17.615796380" Dec 13 04:52:51.060896 kubelet[2926]: I1213 04:52:51.060909 2926 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-xsh5n" podStartSLOduration=1.979481697 podStartE2EDuration="5.060878167s" podCreationTimestamp="2024-12-13 04:52:46 +0000 UTC" firstStartedPulling="2024-12-13 04:52:46.747813545 +0000 UTC m=+13.303132048" lastFinishedPulling="2024-12-13 04:52:49.829210015 +0000 UTC m=+16.384528518" observedRunningTime="2024-12-13 04:52:51.060703936 +0000 UTC m=+17.616022445" watchObservedRunningTime="2024-12-13 
04:52:51.060878167 +0000 UTC m=+17.616196679" Dec 13 04:52:57.442748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3415652513.mount: Deactivated successfully. Dec 13 04:53:00.820365 containerd[1627]: time="2024-12-13T04:53:00.820256258Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:53:00.822074 containerd[1627]: time="2024-12-13T04:53:00.821878134Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735323" Dec 13 04:53:00.823704 containerd[1627]: time="2024-12-13T04:53:00.823286033Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 04:53:00.826272 containerd[1627]: time="2024-12-13T04:53:00.826222217Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.99639922s" Dec 13 04:53:00.826487 containerd[1627]: time="2024-12-13T04:53:00.826446707Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 04:53:00.831940 containerd[1627]: time="2024-12-13T04:53:00.831819024Z" level=info msg="CreateContainer within sandbox \"a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 
04:53:00.923010 containerd[1627]: time="2024-12-13T04:53:00.922860719Z" level=info msg="CreateContainer within sandbox \"a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950\"" Dec 13 04:53:00.927236 containerd[1627]: time="2024-12-13T04:53:00.927192939Z" level=info msg="StartContainer for \"7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950\"" Dec 13 04:53:01.187433 containerd[1627]: time="2024-12-13T04:53:01.186741507Z" level=info msg="StartContainer for \"7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950\" returns successfully" Dec 13 04:53:01.436102 containerd[1627]: time="2024-12-13T04:53:01.425082422Z" level=info msg="shim disconnected" id=7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950 namespace=k8s.io Dec 13 04:53:01.436545 containerd[1627]: time="2024-12-13T04:53:01.436497966Z" level=warning msg="cleaning up after shim disconnected" id=7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950 namespace=k8s.io Dec 13 04:53:01.437019 containerd[1627]: time="2024-12-13T04:53:01.436673553Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 04:53:01.909908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950-rootfs.mount: Deactivated successfully. 
Dec 13 04:53:02.041061 containerd[1627]: time="2024-12-13T04:53:02.039672287Z" level=info msg="CreateContainer within sandbox \"a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 04:53:02.074065 containerd[1627]: time="2024-12-13T04:53:02.073460876Z" level=info msg="CreateContainer within sandbox \"a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21\"" Dec 13 04:53:02.075749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2594845822.mount: Deactivated successfully. Dec 13 04:53:02.079043 containerd[1627]: time="2024-12-13T04:53:02.075851138Z" level=info msg="StartContainer for \"1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21\"" Dec 13 04:53:02.182666 containerd[1627]: time="2024-12-13T04:53:02.182102199Z" level=info msg="StartContainer for \"1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21\" returns successfully" Dec 13 04:53:02.198709 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 04:53:02.199449 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 04:53:02.199653 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 04:53:02.208172 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 04:53:02.252642 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 04:53:02.257781 containerd[1627]: time="2024-12-13T04:53:02.257367605Z" level=info msg="shim disconnected" id=1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21 namespace=k8s.io Dec 13 04:53:02.257781 containerd[1627]: time="2024-12-13T04:53:02.257548109Z" level=warning msg="cleaning up after shim disconnected" id=1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21 namespace=k8s.io Dec 13 04:53:02.257781 containerd[1627]: time="2024-12-13T04:53:02.257588642Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 04:53:02.909863 systemd[1]: run-containerd-runc-k8s.io-1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21-runc.8hd9xu.mount: Deactivated successfully. Dec 13 04:53:02.910121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21-rootfs.mount: Deactivated successfully. Dec 13 04:53:03.047174 containerd[1627]: time="2024-12-13T04:53:03.047111185Z" level=info msg="CreateContainer within sandbox \"a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 04:53:03.079252 containerd[1627]: time="2024-12-13T04:53:03.079077432Z" level=info msg="CreateContainer within sandbox \"a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731\"" Dec 13 04:53:03.080274 containerd[1627]: time="2024-12-13T04:53:03.080184229Z" level=info msg="StartContainer for \"908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731\"" Dec 13 04:53:03.184023 containerd[1627]: time="2024-12-13T04:53:03.182747160Z" level=info msg="StartContainer for \"908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731\" returns successfully" Dec 13 04:53:03.231413 containerd[1627]: time="2024-12-13T04:53:03.231137085Z" level=info 
msg="shim disconnected" id=908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731 namespace=k8s.io Dec 13 04:53:03.231413 containerd[1627]: time="2024-12-13T04:53:03.231249525Z" level=warning msg="cleaning up after shim disconnected" id=908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731 namespace=k8s.io Dec 13 04:53:03.231413 containerd[1627]: time="2024-12-13T04:53:03.231268388Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 04:53:03.911893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731-rootfs.mount: Deactivated successfully. Dec 13 04:53:04.047576 containerd[1627]: time="2024-12-13T04:53:04.046566923Z" level=info msg="CreateContainer within sandbox \"a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 04:53:04.065965 containerd[1627]: time="2024-12-13T04:53:04.065908385Z" level=info msg="CreateContainer within sandbox \"a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b\"" Dec 13 04:53:04.067011 containerd[1627]: time="2024-12-13T04:53:04.066967286Z" level=info msg="StartContainer for \"50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b\"" Dec 13 04:53:04.197582 containerd[1627]: time="2024-12-13T04:53:04.197384993Z" level=info msg="StartContainer for \"50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b\" returns successfully" Dec 13 04:53:04.222581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b-rootfs.mount: Deactivated successfully. 
Dec 13 04:53:04.224605 containerd[1627]: time="2024-12-13T04:53:04.224134460Z" level=info msg="shim disconnected" id=50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b namespace=k8s.io Dec 13 04:53:04.224605 containerd[1627]: time="2024-12-13T04:53:04.224398272Z" level=warning msg="cleaning up after shim disconnected" id=50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b namespace=k8s.io Dec 13 04:53:04.224605 containerd[1627]: time="2024-12-13T04:53:04.224425102Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 04:53:05.056602 containerd[1627]: time="2024-12-13T04:53:05.056511979Z" level=info msg="CreateContainer within sandbox \"a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 04:53:05.085901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount695636077.mount: Deactivated successfully. Dec 13 04:53:05.096002 containerd[1627]: time="2024-12-13T04:53:05.089606348Z" level=info msg="CreateContainer within sandbox \"a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e\"" Dec 13 04:53:05.096002 containerd[1627]: time="2024-12-13T04:53:05.090758685Z" level=info msg="StartContainer for \"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e\"" Dec 13 04:53:05.199409 containerd[1627]: time="2024-12-13T04:53:05.199324737Z" level=info msg="StartContainer for \"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e\" returns successfully" Dec 13 04:53:05.463601 kubelet[2926]: I1213 04:53:05.459363 2926 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 04:53:05.504750 kubelet[2926]: I1213 04:53:05.504670 2926 topology_manager.go:215] "Topology Admit Handler" podUID="39db8077-755a-4079-a32b-2ff3df9cf9f8" podNamespace="kube-system" 
podName="coredns-76f75df574-x8qcb" Dec 13 04:53:05.510720 kubelet[2926]: I1213 04:53:05.508625 2926 topology_manager.go:215] "Topology Admit Handler" podUID="bec48ccb-e71d-4440-9616-2058f9a31530" podNamespace="kube-system" podName="coredns-76f75df574-rd2wh" Dec 13 04:53:05.524924 kubelet[2926]: I1213 04:53:05.524880 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec48ccb-e71d-4440-9616-2058f9a31530-config-volume\") pod \"coredns-76f75df574-rd2wh\" (UID: \"bec48ccb-e71d-4440-9616-2058f9a31530\") " pod="kube-system/coredns-76f75df574-rd2wh" Dec 13 04:53:05.525819 kubelet[2926]: I1213 04:53:05.525712 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjskl\" (UniqueName: \"kubernetes.io/projected/39db8077-755a-4079-a32b-2ff3df9cf9f8-kube-api-access-vjskl\") pod \"coredns-76f75df574-x8qcb\" (UID: \"39db8077-755a-4079-a32b-2ff3df9cf9f8\") " pod="kube-system/coredns-76f75df574-x8qcb" Dec 13 04:53:05.526254 kubelet[2926]: I1213 04:53:05.526134 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkm7t\" (UniqueName: \"kubernetes.io/projected/bec48ccb-e71d-4440-9616-2058f9a31530-kube-api-access-kkm7t\") pod \"coredns-76f75df574-rd2wh\" (UID: \"bec48ccb-e71d-4440-9616-2058f9a31530\") " pod="kube-system/coredns-76f75df574-rd2wh" Dec 13 04:53:05.526780 kubelet[2926]: I1213 04:53:05.526503 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39db8077-755a-4079-a32b-2ff3df9cf9f8-config-volume\") pod \"coredns-76f75df574-x8qcb\" (UID: \"39db8077-755a-4079-a32b-2ff3df9cf9f8\") " pod="kube-system/coredns-76f75df574-x8qcb" Dec 13 04:53:05.825305 containerd[1627]: time="2024-12-13T04:53:05.823939815Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-x8qcb,Uid:39db8077-755a-4079-a32b-2ff3df9cf9f8,Namespace:kube-system,Attempt:0,}" Dec 13 04:53:05.826391 containerd[1627]: time="2024-12-13T04:53:05.826359097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rd2wh,Uid:bec48ccb-e71d-4440-9616-2058f9a31530,Namespace:kube-system,Attempt:0,}" Dec 13 04:53:06.092420 systemd[1]: run-containerd-runc-k8s.io-56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e-runc.vjSz1H.mount: Deactivated successfully. Dec 13 04:53:07.776266 systemd-networkd[1257]: cilium_host: Link UP Dec 13 04:53:07.777238 systemd-networkd[1257]: cilium_net: Link UP Dec 13 04:53:07.779983 systemd-networkd[1257]: cilium_net: Gained carrier Dec 13 04:53:07.781273 systemd-networkd[1257]: cilium_host: Gained carrier Dec 13 04:53:07.939331 systemd-networkd[1257]: cilium_vxlan: Link UP Dec 13 04:53:07.939344 systemd-networkd[1257]: cilium_vxlan: Gained carrier Dec 13 04:53:08.232962 systemd-networkd[1257]: cilium_net: Gained IPv6LL Dec 13 04:53:08.569813 kernel: NET: Registered PF_ALG protocol family Dec 13 04:53:08.641215 systemd-networkd[1257]: cilium_host: Gained IPv6LL Dec 13 04:53:09.601128 systemd-networkd[1257]: cilium_vxlan: Gained IPv6LL Dec 13 04:53:09.726699 systemd-networkd[1257]: lxc_health: Link UP Dec 13 04:53:09.734867 systemd-networkd[1257]: lxc_health: Gained carrier Dec 13 04:53:09.940649 systemd-networkd[1257]: lxc9ceb7d2ed359: Link UP Dec 13 04:53:09.950388 kernel: eth0: renamed from tmp8a69b Dec 13 04:53:09.956928 systemd-networkd[1257]: lxc9ceb7d2ed359: Gained carrier Dec 13 04:53:09.988425 systemd-networkd[1257]: lxcd9d7e4746287: Link UP Dec 13 04:53:09.997093 kernel: eth0: renamed from tmp1a82a Dec 13 04:53:10.003701 systemd-networkd[1257]: lxcd9d7e4746287: Gained carrier Dec 13 04:53:10.781721 kubelet[2926]: I1213 04:53:10.781660 2926 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-h8j2c" 
podStartSLOduration=11.866598595 podStartE2EDuration="25.78154016s" podCreationTimestamp="2024-12-13 04:52:45 +0000 UTC" firstStartedPulling="2024-12-13 04:52:46.912078761 +0000 UTC m=+13.467397264" lastFinishedPulling="2024-12-13 04:53:00.827020331 +0000 UTC m=+27.382338829" observedRunningTime="2024-12-13 04:53:06.083530122 +0000 UTC m=+32.638848649" watchObservedRunningTime="2024-12-13 04:53:10.78154016 +0000 UTC m=+37.336858665" Dec 13 04:53:10.817067 systemd-networkd[1257]: lxc_health: Gained IPv6LL Dec 13 04:53:11.585306 systemd-networkd[1257]: lxcd9d7e4746287: Gained IPv6LL Dec 13 04:53:11.777093 systemd-networkd[1257]: lxc9ceb7d2ed359: Gained IPv6LL Dec 13 04:53:15.908949 containerd[1627]: time="2024-12-13T04:53:15.908385459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:53:15.908949 containerd[1627]: time="2024-12-13T04:53:15.908507364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:53:15.908949 containerd[1627]: time="2024-12-13T04:53:15.908556088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:53:15.913836 containerd[1627]: time="2024-12-13T04:53:15.910297364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:53:15.962092 containerd[1627]: time="2024-12-13T04:53:15.960633276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:53:15.962965 containerd[1627]: time="2024-12-13T04:53:15.962909985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:53:15.964188 containerd[1627]: time="2024-12-13T04:53:15.964120850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:53:15.965953 containerd[1627]: time="2024-12-13T04:53:15.965890070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:53:16.112274 containerd[1627]: time="2024-12-13T04:53:16.112191301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rd2wh,Uid:bec48ccb-e71d-4440-9616-2058f9a31530,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a69bf2b4af4083441ba3fa4b068ef720050cf0050526df447c09901809bb882\"" Dec 13 04:53:16.134076 containerd[1627]: time="2024-12-13T04:53:16.132805567Z" level=info msg="CreateContainer within sandbox \"8a69bf2b4af4083441ba3fa4b068ef720050cf0050526df447c09901809bb882\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 04:53:16.136100 containerd[1627]: time="2024-12-13T04:53:16.136049108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x8qcb,Uid:39db8077-755a-4079-a32b-2ff3df9cf9f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a82aab5bf58f37a6535af1ae20e636c43cd9482d6a3e1ec5a331c2c76dc20d9\"" Dec 13 04:53:16.143707 containerd[1627]: time="2024-12-13T04:53:16.143669377Z" level=info msg="CreateContainer within sandbox \"1a82aab5bf58f37a6535af1ae20e636c43cd9482d6a3e1ec5a331c2c76dc20d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 04:53:16.180165 containerd[1627]: time="2024-12-13T04:53:16.179983322Z" level=info msg="CreateContainer within sandbox \"1a82aab5bf58f37a6535af1ae20e636c43cd9482d6a3e1ec5a331c2c76dc20d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00bde43ac77069f747de5c46c44eb4b100e24137f8e8596a3d52f97734a11b2c\"" Dec 13 04:53:16.181572 
containerd[1627]: time="2024-12-13T04:53:16.181513465Z" level=info msg="CreateContainer within sandbox \"8a69bf2b4af4083441ba3fa4b068ef720050cf0050526df447c09901809bb882\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90d632505ac308743489dbfc4590a4f6f514fcc4a086336dd28b80e6460e9233\"" Dec 13 04:53:16.182631 containerd[1627]: time="2024-12-13T04:53:16.182598105Z" level=info msg="StartContainer for \"00bde43ac77069f747de5c46c44eb4b100e24137f8e8596a3d52f97734a11b2c\"" Dec 13 04:53:16.184538 containerd[1627]: time="2024-12-13T04:53:16.184504241Z" level=info msg="StartContainer for \"90d632505ac308743489dbfc4590a4f6f514fcc4a086336dd28b80e6460e9233\"" Dec 13 04:53:16.294649 containerd[1627]: time="2024-12-13T04:53:16.294537190Z" level=info msg="StartContainer for \"90d632505ac308743489dbfc4590a4f6f514fcc4a086336dd28b80e6460e9233\" returns successfully" Dec 13 04:53:16.304867 containerd[1627]: time="2024-12-13T04:53:16.304804085Z" level=info msg="StartContainer for \"00bde43ac77069f747de5c46c44eb4b100e24137f8e8596a3d52f97734a11b2c\" returns successfully" Dec 13 04:53:16.940185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559612717.mount: Deactivated successfully. Dec 13 04:53:17.136676 kubelet[2926]: I1213 04:53:17.134903 2926 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-x8qcb" podStartSLOduration=31.134757177 podStartE2EDuration="31.134757177s" podCreationTimestamp="2024-12-13 04:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:53:17.117983805 +0000 UTC m=+43.673302329" watchObservedRunningTime="2024-12-13 04:53:17.134757177 +0000 UTC m=+43.690075690" Dec 13 04:53:52.248324 systemd[1]: Started sshd@9-10.244.18.230:22-147.75.109.163:37518.service - OpenSSH per-connection server daemon (147.75.109.163:37518). 
Dec 13 04:53:53.169038 sshd[4299]: Accepted publickey for core from 147.75.109.163 port 37518 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:53:53.171273 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:53:53.185344 systemd-logind[1609]: New session 12 of user core. Dec 13 04:53:53.202042 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 04:53:54.308122 sshd[4299]: pam_unix(sshd:session): session closed for user core Dec 13 04:53:54.312260 systemd[1]: sshd@9-10.244.18.230:22-147.75.109.163:37518.service: Deactivated successfully. Dec 13 04:53:54.317600 systemd-logind[1609]: Session 12 logged out. Waiting for processes to exit. Dec 13 04:53:54.318665 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 04:53:54.321417 systemd-logind[1609]: Removed session 12. Dec 13 04:53:59.461135 systemd[1]: Started sshd@10-10.244.18.230:22-147.75.109.163:54276.service - OpenSSH per-connection server daemon (147.75.109.163:54276). Dec 13 04:54:00.361740 sshd[4314]: Accepted publickey for core from 147.75.109.163 port 54276 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:54:00.364441 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:54:00.375737 systemd-logind[1609]: New session 13 of user core. Dec 13 04:54:00.383341 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 04:54:01.122388 sshd[4314]: pam_unix(sshd:session): session closed for user core Dec 13 04:54:01.133943 systemd[1]: sshd@10-10.244.18.230:22-147.75.109.163:54276.service: Deactivated successfully. Dec 13 04:54:01.139016 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 04:54:01.141004 systemd-logind[1609]: Session 13 logged out. Waiting for processes to exit. Dec 13 04:54:01.143682 systemd-logind[1609]: Removed session 13. 
Dec 13 04:54:06.275211 systemd[1]: Started sshd@11-10.244.18.230:22-147.75.109.163:32854.service - OpenSSH per-connection server daemon (147.75.109.163:32854). Dec 13 04:54:07.170585 sshd[4329]: Accepted publickey for core from 147.75.109.163 port 32854 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:54:07.173122 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:54:07.181411 systemd-logind[1609]: New session 14 of user core. Dec 13 04:54:07.188608 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 04:54:07.912432 sshd[4329]: pam_unix(sshd:session): session closed for user core Dec 13 04:54:07.917739 systemd[1]: sshd@11-10.244.18.230:22-147.75.109.163:32854.service: Deactivated successfully. Dec 13 04:54:07.922486 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 04:54:07.924514 systemd-logind[1609]: Session 14 logged out. Waiting for processes to exit. Dec 13 04:54:07.926186 systemd-logind[1609]: Removed session 14. Dec 13 04:54:13.063230 systemd[1]: Started sshd@12-10.244.18.230:22-147.75.109.163:32870.service - OpenSSH per-connection server daemon (147.75.109.163:32870). Dec 13 04:54:13.989448 sshd[4345]: Accepted publickey for core from 147.75.109.163 port 32870 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:54:13.992893 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:54:14.011697 systemd-logind[1609]: New session 15 of user core. Dec 13 04:54:14.015513 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 04:54:14.736312 sshd[4345]: pam_unix(sshd:session): session closed for user core Dec 13 04:54:14.743707 systemd[1]: sshd@12-10.244.18.230:22-147.75.109.163:32870.service: Deactivated successfully. Dec 13 04:54:14.748959 systemd-logind[1609]: Session 15 logged out. Waiting for processes to exit. 
Dec 13 04:54:14.749222 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 04:54:14.752069 systemd-logind[1609]: Removed session 15. Dec 13 04:54:14.888470 systemd[1]: Started sshd@13-10.244.18.230:22-147.75.109.163:32872.service - OpenSSH per-connection server daemon (147.75.109.163:32872). Dec 13 04:54:15.773310 sshd[4361]: Accepted publickey for core from 147.75.109.163 port 32872 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:54:15.775886 sshd[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:54:15.784711 systemd-logind[1609]: New session 16 of user core. Dec 13 04:54:15.789511 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 04:54:16.606167 sshd[4361]: pam_unix(sshd:session): session closed for user core Dec 13 04:54:16.622226 systemd[1]: sshd@13-10.244.18.230:22-147.75.109.163:32872.service: Deactivated successfully. Dec 13 04:54:16.632398 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 04:54:16.634265 systemd-logind[1609]: Session 16 logged out. Waiting for processes to exit. Dec 13 04:54:16.636111 systemd-logind[1609]: Removed session 16. Dec 13 04:54:16.754263 systemd[1]: Started sshd@14-10.244.18.230:22-147.75.109.163:47426.service - OpenSSH per-connection server daemon (147.75.109.163:47426). Dec 13 04:54:17.658924 sshd[4373]: Accepted publickey for core from 147.75.109.163 port 47426 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:54:17.661536 sshd[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:54:17.670599 systemd-logind[1609]: New session 17 of user core. Dec 13 04:54:17.677459 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 04:54:18.412448 sshd[4373]: pam_unix(sshd:session): session closed for user core Dec 13 04:54:18.416656 systemd[1]: sshd@14-10.244.18.230:22-147.75.109.163:47426.service: Deactivated successfully. 
Dec 13 04:54:18.421681 systemd-logind[1609]: Session 17 logged out. Waiting for processes to exit.
Dec 13 04:54:18.422649 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 04:54:18.424624 systemd-logind[1609]: Removed session 17.
Dec 13 04:54:23.563153 systemd[1]: Started sshd@15-10.244.18.230:22-147.75.109.163:47432.service - OpenSSH per-connection server daemon (147.75.109.163:47432).
Dec 13 04:54:24.481702 sshd[4388]: Accepted publickey for core from 147.75.109.163 port 47432 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:54:24.483892 sshd[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:54:24.492737 systemd-logind[1609]: New session 18 of user core.
Dec 13 04:54:24.500372 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 04:54:25.203147 sshd[4388]: pam_unix(sshd:session): session closed for user core
Dec 13 04:54:25.208154 systemd[1]: sshd@15-10.244.18.230:22-147.75.109.163:47432.service: Deactivated successfully.
Dec 13 04:54:25.220581 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 04:54:25.224976 systemd-logind[1609]: Session 18 logged out. Waiting for processes to exit.
Dec 13 04:54:25.226604 systemd-logind[1609]: Removed session 18.
Dec 13 04:54:30.357286 systemd[1]: Started sshd@16-10.244.18.230:22-147.75.109.163:53996.service - OpenSSH per-connection server daemon (147.75.109.163:53996).
Dec 13 04:54:31.244934 sshd[4402]: Accepted publickey for core from 147.75.109.163 port 53996 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:54:31.247393 sshd[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:54:31.255177 systemd-logind[1609]: New session 19 of user core.
Dec 13 04:54:31.263343 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 04:54:31.958075 sshd[4402]: pam_unix(sshd:session): session closed for user core
Dec 13 04:54:31.963428 systemd[1]: sshd@16-10.244.18.230:22-147.75.109.163:53996.service: Deactivated successfully.
Dec 13 04:54:31.969142 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 04:54:31.970803 systemd-logind[1609]: Session 19 logged out. Waiting for processes to exit.
Dec 13 04:54:31.972577 systemd-logind[1609]: Removed session 19.
Dec 13 04:54:32.107134 systemd[1]: Started sshd@17-10.244.18.230:22-147.75.109.163:54012.service - OpenSSH per-connection server daemon (147.75.109.163:54012).
Dec 13 04:54:33.004809 sshd[4416]: Accepted publickey for core from 147.75.109.163 port 54012 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:54:33.007045 sshd[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:54:33.015924 systemd-logind[1609]: New session 20 of user core.
Dec 13 04:54:33.021314 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 04:54:33.987052 sshd[4416]: pam_unix(sshd:session): session closed for user core
Dec 13 04:54:33.994646 systemd[1]: sshd@17-10.244.18.230:22-147.75.109.163:54012.service: Deactivated successfully.
Dec 13 04:54:33.999710 systemd-logind[1609]: Session 20 logged out. Waiting for processes to exit.
Dec 13 04:54:34.000181 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 04:54:34.003179 systemd-logind[1609]: Removed session 20.
Dec 13 04:54:34.137902 systemd[1]: Started sshd@18-10.244.18.230:22-147.75.109.163:54014.service - OpenSSH per-connection server daemon (147.75.109.163:54014).
Dec 13 04:54:35.049451 sshd[4430]: Accepted publickey for core from 147.75.109.163 port 54014 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:54:35.052795 sshd[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:54:35.060279 systemd-logind[1609]: New session 21 of user core.
Dec 13 04:54:35.066003 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 04:54:37.943784 sshd[4430]: pam_unix(sshd:session): session closed for user core
Dec 13 04:54:37.956223 systemd[1]: sshd@18-10.244.18.230:22-147.75.109.163:54014.service: Deactivated successfully.
Dec 13 04:54:37.962569 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 04:54:37.965748 systemd-logind[1609]: Session 21 logged out. Waiting for processes to exit.
Dec 13 04:54:37.967687 systemd-logind[1609]: Removed session 21.
Dec 13 04:54:38.093076 systemd[1]: Started sshd@19-10.244.18.230:22-147.75.109.163:54172.service - OpenSSH per-connection server daemon (147.75.109.163:54172).
Dec 13 04:54:38.986555 sshd[4449]: Accepted publickey for core from 147.75.109.163 port 54172 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:54:38.988802 sshd[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:54:38.997373 systemd-logind[1609]: New session 22 of user core.
Dec 13 04:54:39.006808 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 04:54:39.893728 sshd[4449]: pam_unix(sshd:session): session closed for user core
Dec 13 04:54:39.899607 systemd[1]: sshd@19-10.244.18.230:22-147.75.109.163:54172.service: Deactivated successfully.
Dec 13 04:54:39.903669 systemd-logind[1609]: Session 22 logged out. Waiting for processes to exit.
Dec 13 04:54:39.904627 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 04:54:39.907327 systemd-logind[1609]: Removed session 22.
Dec 13 04:54:40.045254 systemd[1]: Started sshd@20-10.244.18.230:22-147.75.109.163:54180.service - OpenSSH per-connection server daemon (147.75.109.163:54180).
Dec 13 04:54:40.942257 sshd[4461]: Accepted publickey for core from 147.75.109.163 port 54180 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:54:40.946352 sshd[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:54:40.954960 systemd-logind[1609]: New session 23 of user core.
Dec 13 04:54:40.962347 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 04:54:41.668139 sshd[4461]: pam_unix(sshd:session): session closed for user core
Dec 13 04:54:41.673410 systemd[1]: sshd@20-10.244.18.230:22-147.75.109.163:54180.service: Deactivated successfully.
Dec 13 04:54:41.678188 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 04:54:41.679633 systemd-logind[1609]: Session 23 logged out. Waiting for processes to exit.
Dec 13 04:54:41.681190 systemd-logind[1609]: Removed session 23.
Dec 13 04:54:46.824172 systemd[1]: Started sshd@21-10.244.18.230:22-147.75.109.163:52710.service - OpenSSH per-connection server daemon (147.75.109.163:52710).
Dec 13 04:54:47.720084 sshd[4478]: Accepted publickey for core from 147.75.109.163 port 52710 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:54:47.722473 sshd[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:54:47.731182 systemd-logind[1609]: New session 24 of user core.
Dec 13 04:54:47.737699 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 04:54:48.428324 sshd[4478]: pam_unix(sshd:session): session closed for user core
Dec 13 04:54:48.437257 systemd[1]: sshd@21-10.244.18.230:22-147.75.109.163:52710.service: Deactivated successfully.
Dec 13 04:54:48.445638 systemd-logind[1609]: Session 24 logged out. Waiting for processes to exit.
Dec 13 04:54:48.447951 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 04:54:48.454893 systemd-logind[1609]: Removed session 24.
Dec 13 04:54:53.577202 systemd[1]: Started sshd@22-10.244.18.230:22-147.75.109.163:52716.service - OpenSSH per-connection server daemon (147.75.109.163:52716).
Dec 13 04:54:54.472245 sshd[4494]: Accepted publickey for core from 147.75.109.163 port 52716 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:54:54.474933 sshd[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:54:54.484526 systemd-logind[1609]: New session 25 of user core.
Dec 13 04:54:54.489584 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 04:54:55.188221 sshd[4494]: pam_unix(sshd:session): session closed for user core
Dec 13 04:54:55.196023 systemd-logind[1609]: Session 25 logged out. Waiting for processes to exit.
Dec 13 04:54:55.196797 systemd[1]: sshd@22-10.244.18.230:22-147.75.109.163:52716.service: Deactivated successfully.
Dec 13 04:54:55.203208 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 04:54:55.205130 systemd-logind[1609]: Removed session 25.
Dec 13 04:55:00.348290 systemd[1]: Started sshd@23-10.244.18.230:22-147.75.109.163:38232.service - OpenSSH per-connection server daemon (147.75.109.163:38232).
Dec 13 04:55:01.236625 sshd[4508]: Accepted publickey for core from 147.75.109.163 port 38232 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:55:01.239738 sshd[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:55:01.248994 systemd-logind[1609]: New session 26 of user core.
Dec 13 04:55:01.258901 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 04:55:01.976420 sshd[4508]: pam_unix(sshd:session): session closed for user core
Dec 13 04:55:01.980409 systemd[1]: sshd@23-10.244.18.230:22-147.75.109.163:38232.service: Deactivated successfully.
Dec 13 04:55:01.987851 systemd-logind[1609]: Session 26 logged out. Waiting for processes to exit.
Dec 13 04:55:01.989044 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 04:55:01.990702 systemd-logind[1609]: Removed session 26.
Dec 13 04:55:02.124154 systemd[1]: Started sshd@24-10.244.18.230:22-147.75.109.163:38242.service - OpenSSH per-connection server daemon (147.75.109.163:38242).
Dec 13 04:55:03.025606 sshd[4521]: Accepted publickey for core from 147.75.109.163 port 38242 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:55:03.027893 sshd[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:55:03.034984 systemd-logind[1609]: New session 27 of user core.
Dec 13 04:55:03.042164 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 04:55:05.656384 kubelet[2926]: I1213 04:55:05.654834 2926 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rd2wh" podStartSLOduration=139.654665118 podStartE2EDuration="2m19.654665118s" podCreationTimestamp="2024-12-13 04:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:53:17.154450761 +0000 UTC m=+43.709769286" watchObservedRunningTime="2024-12-13 04:55:05.654665118 +0000 UTC m=+152.209983623"
Dec 13 04:55:05.745568 systemd[1]: run-containerd-runc-k8s.io-56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e-runc.1KjYPM.mount: Deactivated successfully.
Dec 13 04:55:05.773589 containerd[1627]: time="2024-12-13T04:55:05.773452554Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 04:55:05.807759 containerd[1627]: time="2024-12-13T04:55:05.807349231Z" level=info msg="StopContainer for \"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e\" with timeout 2 (s)"
Dec 13 04:55:05.807759 containerd[1627]: time="2024-12-13T04:55:05.807698265Z" level=info msg="StopContainer for \"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7\" with timeout 30 (s)"
Dec 13 04:55:05.811439 containerd[1627]: time="2024-12-13T04:55:05.811386605Z" level=info msg="Stop container \"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7\" with signal terminated"
Dec 13 04:55:05.812506 containerd[1627]: time="2024-12-13T04:55:05.812473063Z" level=info msg="Stop container \"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e\" with signal terminated"
Dec 13 04:55:05.841703 systemd-networkd[1257]: lxc_health: Link DOWN
Dec 13 04:55:05.841717 systemd-networkd[1257]: lxc_health: Lost carrier
Dec 13 04:55:05.900780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7-rootfs.mount: Deactivated successfully.
Dec 13 04:55:05.909663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e-rootfs.mount: Deactivated successfully.
Dec 13 04:55:05.912950 containerd[1627]: time="2024-12-13T04:55:05.912632712Z" level=info msg="shim disconnected" id=6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7 namespace=k8s.io
Dec 13 04:55:05.913191 containerd[1627]: time="2024-12-13T04:55:05.912907909Z" level=warning msg="cleaning up after shim disconnected" id=6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7 namespace=k8s.io
Dec 13 04:55:05.913191 containerd[1627]: time="2024-12-13T04:55:05.913135936Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 04:55:05.917139 containerd[1627]: time="2024-12-13T04:55:05.917074001Z" level=info msg="shim disconnected" id=56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e namespace=k8s.io
Dec 13 04:55:05.917440 containerd[1627]: time="2024-12-13T04:55:05.917136712Z" level=warning msg="cleaning up after shim disconnected" id=56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e namespace=k8s.io
Dec 13 04:55:05.917440 containerd[1627]: time="2024-12-13T04:55:05.917157277Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 04:55:05.942733 containerd[1627]: time="2024-12-13T04:55:05.942644855Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:55:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 04:55:05.944463 containerd[1627]: time="2024-12-13T04:55:05.944426061Z" level=info msg="StopContainer for \"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7\" returns successfully"
Dec 13 04:55:05.946736 containerd[1627]: time="2024-12-13T04:55:05.946173805Z" level=info msg="StopContainer for \"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e\" returns successfully"
Dec 13 04:55:05.946736 containerd[1627]: time="2024-12-13T04:55:05.946522156Z" level=info msg="StopPodSandbox for \"13882a88b4b44d1850edbe052b4668bbb26d87a0f28f2bb7e3de417c1874b806\""
Dec 13 04:55:05.946736 containerd[1627]: time="2024-12-13T04:55:05.946582078Z" level=info msg="Container to stop \"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:55:05.949279 containerd[1627]: time="2024-12-13T04:55:05.949232520Z" level=info msg="StopPodSandbox for \"a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e\""
Dec 13 04:55:05.949437 containerd[1627]: time="2024-12-13T04:55:05.949407002Z" level=info msg="Container to stop \"1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:55:05.949573 containerd[1627]: time="2024-12-13T04:55:05.949531804Z" level=info msg="Container to stop \"908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:55:05.949724 containerd[1627]: time="2024-12-13T04:55:05.949697497Z" level=info msg="Container to stop \"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:55:05.949877 containerd[1627]: time="2024-12-13T04:55:05.949850086Z" level=info msg="Container to stop \"7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:55:05.950352 containerd[1627]: time="2024-12-13T04:55:05.949963807Z" level=info msg="Container to stop \"50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:55:05.951075 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13882a88b4b44d1850edbe052b4668bbb26d87a0f28f2bb7e3de417c1874b806-shm.mount: Deactivated successfully.
Dec 13 04:55:06.004297 containerd[1627]: time="2024-12-13T04:55:06.003994018Z" level=info msg="shim disconnected" id=a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e namespace=k8s.io
Dec 13 04:55:06.004297 containerd[1627]: time="2024-12-13T04:55:06.004095020Z" level=warning msg="cleaning up after shim disconnected" id=a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e namespace=k8s.io
Dec 13 04:55:06.004297 containerd[1627]: time="2024-12-13T04:55:06.004111991Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 04:55:06.030137 containerd[1627]: time="2024-12-13T04:55:06.029923425Z" level=info msg="shim disconnected" id=13882a88b4b44d1850edbe052b4668bbb26d87a0f28f2bb7e3de417c1874b806 namespace=k8s.io
Dec 13 04:55:06.030137 containerd[1627]: time="2024-12-13T04:55:06.030010873Z" level=warning msg="cleaning up after shim disconnected" id=13882a88b4b44d1850edbe052b4668bbb26d87a0f28f2bb7e3de417c1874b806 namespace=k8s.io
Dec 13 04:55:06.030137 containerd[1627]: time="2024-12-13T04:55:06.030028384Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 04:55:06.037786 containerd[1627]: time="2024-12-13T04:55:06.036889091Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:55:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 04:55:06.038596 containerd[1627]: time="2024-12-13T04:55:06.038562958Z" level=info msg="TearDown network for sandbox \"a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e\" successfully"
Dec 13 04:55:06.038721 containerd[1627]: time="2024-12-13T04:55:06.038695190Z" level=info msg="StopPodSandbox for \"a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e\" returns successfully"
Dec 13 04:55:06.070188 containerd[1627]: time="2024-12-13T04:55:06.068826434Z" level=info msg="TearDown network for sandbox \"13882a88b4b44d1850edbe052b4668bbb26d87a0f28f2bb7e3de417c1874b806\" successfully"
Dec 13 04:55:06.070188 containerd[1627]: time="2024-12-13T04:55:06.068887644Z" level=info msg="StopPodSandbox for \"13882a88b4b44d1850edbe052b4668bbb26d87a0f28f2bb7e3de417c1874b806\" returns successfully"
Dec 13 04:55:06.140434 kubelet[2926]: I1213 04:55:06.140150 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cilium-run\") pod \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") "
Dec 13 04:55:06.140434 kubelet[2926]: I1213 04:55:06.140250 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptsdt\" (UniqueName: \"kubernetes.io/projected/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-kube-api-access-ptsdt\") pod \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") "
Dec 13 04:55:06.140434 kubelet[2926]: I1213 04:55:06.140288 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cilium-cgroup\") pod \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") "
Dec 13 04:55:06.140434 kubelet[2926]: I1213 04:55:06.140321 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-bpf-maps\") pod \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") "
Dec 13 04:55:06.140434 kubelet[2926]: I1213 04:55:06.140367 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-clustermesh-secrets\") pod \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") "
Dec 13 04:55:06.140434 kubelet[2926]: I1213 04:55:06.140398 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-xtables-lock\") pod \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") "
Dec 13 04:55:06.140958 kubelet[2926]: I1213 04:55:06.140425 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-host-proc-sys-kernel\") pod \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") "
Dec 13 04:55:06.140958 kubelet[2926]: I1213 04:55:06.140456 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-hubble-tls\") pod \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") "
Dec 13 04:55:06.140958 kubelet[2926]: I1213 04:55:06.140491 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-hostproc\") pod \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") "
Dec 13 04:55:06.140958 kubelet[2926]: I1213 04:55:06.140520 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-lib-modules\") pod \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") "
Dec 13 04:55:06.140958 kubelet[2926]: I1213 04:55:06.140608 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cni-path\") pod \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") "
Dec 13 04:55:06.140958 kubelet[2926]: I1213 04:55:06.140653 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-host-proc-sys-net\") pod \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") "
Dec 13 04:55:06.141248 kubelet[2926]: I1213 04:55:06.140688 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7kwx\" (UniqueName: \"kubernetes.io/projected/f8759128-e5eb-4636-8732-0976cf9da43d-kube-api-access-x7kwx\") pod \"f8759128-e5eb-4636-8732-0976cf9da43d\" (UID: \"f8759128-e5eb-4636-8732-0976cf9da43d\") "
Dec 13 04:55:06.141248 kubelet[2926]: I1213 04:55:06.140723 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cilium-config-path\") pod \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") "
Dec 13 04:55:06.141248 kubelet[2926]: I1213 04:55:06.140750 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-etc-cni-netd\") pod \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\" (UID: \"ca7bef37-ff4a-44c8-ab45-a3f985966c6b\") "
Dec 13 04:55:06.141248 kubelet[2926]: I1213 04:55:06.140825 2926 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8759128-e5eb-4636-8732-0976cf9da43d-cilium-config-path\") pod \"f8759128-e5eb-4636-8732-0976cf9da43d\" (UID: \"f8759128-e5eb-4636-8732-0976cf9da43d\") "
Dec 13 04:55:06.143446 kubelet[2926]: I1213 04:55:06.140312 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ca7bef37-ff4a-44c8-ab45-a3f985966c6b" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:55:06.147429 kubelet[2926]: I1213 04:55:06.146996 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8759128-e5eb-4636-8732-0976cf9da43d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f8759128-e5eb-4636-8732-0976cf9da43d" (UID: "f8759128-e5eb-4636-8732-0976cf9da43d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 04:55:06.147429 kubelet[2926]: I1213 04:55:06.147098 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-hostproc" (OuterVolumeSpecName: "hostproc") pod "ca7bef37-ff4a-44c8-ab45-a3f985966c6b" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:55:06.147429 kubelet[2926]: I1213 04:55:06.147169 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ca7bef37-ff4a-44c8-ab45-a3f985966c6b" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:55:06.147429 kubelet[2926]: I1213 04:55:06.147242 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cni-path" (OuterVolumeSpecName: "cni-path") pod "ca7bef37-ff4a-44c8-ab45-a3f985966c6b" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:55:06.147429 kubelet[2926]: I1213 04:55:06.147275 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ca7bef37-ff4a-44c8-ab45-a3f985966c6b" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:55:06.158510 kubelet[2926]: I1213 04:55:06.158415 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ca7bef37-ff4a-44c8-ab45-a3f985966c6b" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:55:06.158510 kubelet[2926]: I1213 04:55:06.158486 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8759128-e5eb-4636-8732-0976cf9da43d-kube-api-access-x7kwx" (OuterVolumeSpecName: "kube-api-access-x7kwx") pod "f8759128-e5eb-4636-8732-0976cf9da43d" (UID: "f8759128-e5eb-4636-8732-0976cf9da43d"). InnerVolumeSpecName "kube-api-access-x7kwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:55:06.161934 kubelet[2926]: I1213 04:55:06.161720 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-kube-api-access-ptsdt" (OuterVolumeSpecName: "kube-api-access-ptsdt") pod "ca7bef37-ff4a-44c8-ab45-a3f985966c6b" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b"). InnerVolumeSpecName "kube-api-access-ptsdt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:55:06.161934 kubelet[2926]: I1213 04:55:06.161834 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ca7bef37-ff4a-44c8-ab45-a3f985966c6b" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:55:06.161934 kubelet[2926]: I1213 04:55:06.161871 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ca7bef37-ff4a-44c8-ab45-a3f985966c6b" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:55:06.163324 kubelet[2926]: I1213 04:55:06.163155 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ca7bef37-ff4a-44c8-ab45-a3f985966c6b" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 04:55:06.163324 kubelet[2926]: I1213 04:55:06.163209 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ca7bef37-ff4a-44c8-ab45-a3f985966c6b" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:55:06.163324 kubelet[2926]: I1213 04:55:06.163249 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ca7bef37-ff4a-44c8-ab45-a3f985966c6b" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:55:06.163324 kubelet[2926]: I1213 04:55:06.163280 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ca7bef37-ff4a-44c8-ab45-a3f985966c6b" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:55:06.166828 kubelet[2926]: I1213 04:55:06.166736 2926 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ca7bef37-ff4a-44c8-ab45-a3f985966c6b" (UID: "ca7bef37-ff4a-44c8-ab45-a3f985966c6b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 04:55:06.242077 kubelet[2926]: I1213 04:55:06.241996 2926 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cilium-run\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242077 kubelet[2926]: I1213 04:55:06.242066 2926 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ptsdt\" (UniqueName: \"kubernetes.io/projected/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-kube-api-access-ptsdt\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242367 kubelet[2926]: I1213 04:55:06.242098 2926 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cilium-cgroup\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242367 kubelet[2926]: I1213 04:55:06.242126 2926 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-bpf-maps\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242367 kubelet[2926]: I1213 04:55:06.242145 2926 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-clustermesh-secrets\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242367 kubelet[2926]: I1213 04:55:06.242169 2926 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-xtables-lock\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242367 kubelet[2926]: I1213 04:55:06.242191 2926 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-host-proc-sys-kernel\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242367 kubelet[2926]: I1213 04:55:06.242208 2926 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-hubble-tls\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242367 kubelet[2926]: I1213 04:55:06.242234 2926 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-hostproc\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242367 kubelet[2926]: I1213 04:55:06.242251 2926 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-lib-modules\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242981 kubelet[2926]: I1213 04:55:06.242266 2926 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cni-path\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242981 kubelet[2926]: I1213 04:55:06.242283 2926 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-host-proc-sys-net\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242981 kubelet[2926]: I1213 04:55:06.242300 2926 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x7kwx\" (UniqueName: \"kubernetes.io/projected/f8759128-e5eb-4636-8732-0976cf9da43d-kube-api-access-x7kwx\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242981 kubelet[2926]: I1213 04:55:06.242317 2926 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-cilium-config-path\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242981 kubelet[2926]: I1213 04:55:06.242334 2926 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca7bef37-ff4a-44c8-ab45-a3f985966c6b-etc-cni-netd\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.242981 kubelet[2926]: I1213 04:55:06.242365 2926 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8759128-e5eb-4636-8732-0976cf9da43d-cilium-config-path\") on node \"srv-wy7pj.gb1.brightbox.com\" DevicePath \"\""
Dec 13 04:55:06.429917 kubelet[2926]: I1213 04:55:06.427449 2926 scope.go:117] "RemoveContainer" containerID="6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7"
Dec 13 04:55:06.434693 containerd[1627]: time="2024-12-13T04:55:06.434610758Z" level=info msg="RemoveContainer for \"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7\""
Dec 13 04:55:06.440511 containerd[1627]: time="2024-12-13T04:55:06.440444837Z" level=info msg="RemoveContainer for \"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7\" returns successfully"
Dec 13 04:55:06.442445 kubelet[2926]: I1213 04:55:06.442399 2926 scope.go:117] "RemoveContainer" containerID="6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7"
Dec 13 04:55:06.459337 containerd[1627]: time="2024-12-13T04:55:06.443210238Z" level=error msg="ContainerStatus for \"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7\": not found"
Dec 13 04:55:06.469797 kubelet[2926]: E1213 04:55:06.468863 2926 remote_runtime.go:432] "ContainerStatus from runtime service failed"
err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7\": not found" containerID="6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7" Dec 13 04:55:06.470738 kubelet[2926]: I1213 04:55:06.470688 2926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7"} err="failed to get container status \"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c003a4a36623d9c1cc7d8ca76bc74abfce45d30a776aa2a9c3bcb7fcea53ee7\": not found" Dec 13 04:55:06.470988 kubelet[2926]: I1213 04:55:06.470760 2926 scope.go:117] "RemoveContainer" containerID="56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e" Dec 13 04:55:06.475643 containerd[1627]: time="2024-12-13T04:55:06.475118466Z" level=info msg="RemoveContainer for \"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e\"" Dec 13 04:55:06.480163 containerd[1627]: time="2024-12-13T04:55:06.480103703Z" level=info msg="RemoveContainer for \"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e\" returns successfully" Dec 13 04:55:06.481947 kubelet[2926]: I1213 04:55:06.481701 2926 scope.go:117] "RemoveContainer" containerID="50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b" Dec 13 04:55:06.488238 containerd[1627]: time="2024-12-13T04:55:06.488182410Z" level=info msg="RemoveContainer for \"50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b\"" Dec 13 04:55:06.495041 containerd[1627]: time="2024-12-13T04:55:06.494981288Z" level=info msg="RemoveContainer for \"50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b\" returns successfully" Dec 13 04:55:06.495493 kubelet[2926]: I1213 04:55:06.495339 2926 scope.go:117] "RemoveContainer" 
containerID="908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731" Dec 13 04:55:06.503352 containerd[1627]: time="2024-12-13T04:55:06.502804043Z" level=info msg="RemoveContainer for \"908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731\"" Dec 13 04:55:06.509522 containerd[1627]: time="2024-12-13T04:55:06.509455999Z" level=info msg="RemoveContainer for \"908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731\" returns successfully" Dec 13 04:55:06.510981 kubelet[2926]: I1213 04:55:06.510943 2926 scope.go:117] "RemoveContainer" containerID="1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21" Dec 13 04:55:06.514832 containerd[1627]: time="2024-12-13T04:55:06.514692443Z" level=info msg="RemoveContainer for \"1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21\"" Dec 13 04:55:06.518793 containerd[1627]: time="2024-12-13T04:55:06.518682766Z" level=info msg="RemoveContainer for \"1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21\" returns successfully" Dec 13 04:55:06.519438 kubelet[2926]: I1213 04:55:06.519405 2926 scope.go:117] "RemoveContainer" containerID="7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950" Dec 13 04:55:06.521426 containerd[1627]: time="2024-12-13T04:55:06.521240755Z" level=info msg="RemoveContainer for \"7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950\"" Dec 13 04:55:06.524817 containerd[1627]: time="2024-12-13T04:55:06.524729135Z" level=info msg="RemoveContainer for \"7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950\" returns successfully" Dec 13 04:55:06.525195 kubelet[2926]: I1213 04:55:06.525064 2926 scope.go:117] "RemoveContainer" containerID="56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e" Dec 13 04:55:06.525818 containerd[1627]: time="2024-12-13T04:55:06.525521681Z" level=error msg="ContainerStatus for \"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e\" failed" error="rpc error: code = 
NotFound desc = an error occurred when try to find container \"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e\": not found" Dec 13 04:55:06.525945 kubelet[2926]: E1213 04:55:06.525817 2926 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e\": not found" containerID="56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e" Dec 13 04:55:06.525945 kubelet[2926]: I1213 04:55:06.525866 2926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e"} err="failed to get container status \"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e\": rpc error: code = NotFound desc = an error occurred when try to find container \"56c04aeba416f83499dd0992812af9222d4b6300f28fcbfdeb878a09869a6d5e\": not found" Dec 13 04:55:06.525945 kubelet[2926]: I1213 04:55:06.525887 2926 scope.go:117] "RemoveContainer" containerID="50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b" Dec 13 04:55:06.526562 containerd[1627]: time="2024-12-13T04:55:06.526350907Z" level=error msg="ContainerStatus for \"50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b\": not found" Dec 13 04:55:06.528140 kubelet[2926]: E1213 04:55:06.526732 2926 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b\": not found" containerID="50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b" Dec 13 04:55:06.528140 kubelet[2926]: I1213 04:55:06.526849 2926 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b"} err="failed to get container status \"50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b\": rpc error: code = NotFound desc = an error occurred when try to find container \"50f301f96c89cfcab7a20dd5130415c2fabbc3f274d1206c8a3041f6c711906b\": not found" Dec 13 04:55:06.528140 kubelet[2926]: I1213 04:55:06.526898 2926 scope.go:117] "RemoveContainer" containerID="908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731" Dec 13 04:55:06.528140 kubelet[2926]: E1213 04:55:06.527620 2926 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731\": not found" containerID="908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731" Dec 13 04:55:06.528140 kubelet[2926]: I1213 04:55:06.527657 2926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731"} err="failed to get container status \"908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731\": rpc error: code = NotFound desc = an error occurred when try to find container \"908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731\": not found" Dec 13 04:55:06.528140 kubelet[2926]: I1213 04:55:06.527706 2926 scope.go:117] "RemoveContainer" containerID="1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21" Dec 13 04:55:06.529984 containerd[1627]: time="2024-12-13T04:55:06.527314037Z" level=error msg="ContainerStatus for \"908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"908da26d44b670b1fb169080c668c834caee9317e882e1cbc63c3cddd8e87731\": not found" Dec 13 04:55:06.529984 containerd[1627]: time="2024-12-13T04:55:06.528143023Z" level=error msg="ContainerStatus for \"1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21\": not found" Dec 13 04:55:06.529984 containerd[1627]: time="2024-12-13T04:55:06.529140795Z" level=error msg="ContainerStatus for \"7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950\": not found" Dec 13 04:55:06.530140 kubelet[2926]: E1213 04:55:06.528412 2926 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21\": not found" containerID="1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21" Dec 13 04:55:06.530140 kubelet[2926]: I1213 04:55:06.528484 2926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21"} err="failed to get container status \"1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c18ca1c7be5fb18072b12e72cb34ad570e703d12a374fa29b171a14b0a7af21\": not found" Dec 13 04:55:06.530140 kubelet[2926]: I1213 04:55:06.528504 2926 scope.go:117] "RemoveContainer" containerID="7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950" Dec 13 04:55:06.530514 kubelet[2926]: E1213 04:55:06.530378 2926 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950\": not found" containerID="7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950" Dec 13 04:55:06.530514 kubelet[2926]: I1213 04:55:06.530423 2926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950"} err="failed to get container status \"7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b95bafd9f318c59cf52ea950d108b60550c89a83edbc86a9e085673622fb950\": not found" Dec 13 04:55:06.735791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e-rootfs.mount: Deactivated successfully. Dec 13 04:55:06.736070 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a841e2d0d8e0a8ca6536cd79d411a157a0d045d57327c371da328589940d3a6e-shm.mount: Deactivated successfully. Dec 13 04:55:06.736261 systemd[1]: var-lib-kubelet-pods-ca7bef37\x2dff4a\x2d44c8\x2dab45\x2da3f985966c6b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dptsdt.mount: Deactivated successfully. Dec 13 04:55:06.736472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13882a88b4b44d1850edbe052b4668bbb26d87a0f28f2bb7e3de417c1874b806-rootfs.mount: Deactivated successfully. Dec 13 04:55:06.736694 systemd[1]: var-lib-kubelet-pods-f8759128\x2de5eb\x2d4636\x2d8732\x2d0976cf9da43d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx7kwx.mount: Deactivated successfully. Dec 13 04:55:06.738041 systemd[1]: var-lib-kubelet-pods-ca7bef37\x2dff4a\x2d44c8\x2dab45\x2da3f985966c6b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 04:55:06.738227 systemd[1]: var-lib-kubelet-pods-ca7bef37\x2dff4a\x2d44c8\x2dab45\x2da3f985966c6b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 04:55:07.734271 sshd[4521]: pam_unix(sshd:session): session closed for user core Dec 13 04:55:07.743152 systemd[1]: sshd@24-10.244.18.230:22-147.75.109.163:38242.service: Deactivated successfully. Dec 13 04:55:07.747511 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 04:55:07.749502 systemd-logind[1609]: Session 27 logged out. Waiting for processes to exit. Dec 13 04:55:07.752199 systemd-logind[1609]: Removed session 27. Dec 13 04:55:07.800850 kubelet[2926]: I1213 04:55:07.799967 2926 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ca7bef37-ff4a-44c8-ab45-a3f985966c6b" path="/var/lib/kubelet/pods/ca7bef37-ff4a-44c8-ab45-a3f985966c6b/volumes" Dec 13 04:55:07.802397 kubelet[2926]: I1213 04:55:07.802351 2926 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f8759128-e5eb-4636-8732-0976cf9da43d" path="/var/lib/kubelet/pods/f8759128-e5eb-4636-8732-0976cf9da43d/volumes" Dec 13 04:55:07.882137 systemd[1]: Started sshd@25-10.244.18.230:22-147.75.109.163:54106.service - OpenSSH per-connection server daemon (147.75.109.163:54106). Dec 13 04:55:08.792134 sshd[4691]: Accepted publickey for core from 147.75.109.163 port 54106 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 04:55:08.795126 sshd[4691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 04:55:08.807281 systemd-logind[1609]: New session 28 of user core. Dec 13 04:55:08.810173 systemd[1]: Started session-28.scope - Session 28 of User core. 
Dec 13 04:55:08.996123 kubelet[2926]: E1213 04:55:08.996027 2926 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 04:55:10.182511 kubelet[2926]: I1213 04:55:10.181806 2926 topology_manager.go:215] "Topology Admit Handler" podUID="debb1e25-3a83-46ad-b367-061391aae6f8" podNamespace="kube-system" podName="cilium-99djs" Dec 13 04:55:10.191051 kubelet[2926]: E1213 04:55:10.190569 2926 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f8759128-e5eb-4636-8732-0976cf9da43d" containerName="cilium-operator" Dec 13 04:55:10.191051 kubelet[2926]: E1213 04:55:10.190615 2926 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ca7bef37-ff4a-44c8-ab45-a3f985966c6b" containerName="mount-bpf-fs" Dec 13 04:55:10.191051 kubelet[2926]: E1213 04:55:10.190632 2926 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ca7bef37-ff4a-44c8-ab45-a3f985966c6b" containerName="clean-cilium-state" Dec 13 04:55:10.191051 kubelet[2926]: E1213 04:55:10.190644 2926 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ca7bef37-ff4a-44c8-ab45-a3f985966c6b" containerName="cilium-agent" Dec 13 04:55:10.191051 kubelet[2926]: E1213 04:55:10.190656 2926 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ca7bef37-ff4a-44c8-ab45-a3f985966c6b" containerName="mount-cgroup" Dec 13 04:55:10.191051 kubelet[2926]: E1213 04:55:10.190675 2926 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ca7bef37-ff4a-44c8-ab45-a3f985966c6b" containerName="apply-sysctl-overwrites" Dec 13 04:55:10.191051 kubelet[2926]: I1213 04:55:10.190756 2926 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8759128-e5eb-4636-8732-0976cf9da43d" containerName="cilium-operator" Dec 13 04:55:10.191051 kubelet[2926]: I1213 04:55:10.190786 2926 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ca7bef37-ff4a-44c8-ab45-a3f985966c6b" containerName="cilium-agent" Dec 13 04:55:10.276611 kubelet[2926]: I1213 04:55:10.276559 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/debb1e25-3a83-46ad-b367-061391aae6f8-cilium-config-path\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.277750 kubelet[2926]: I1213 04:55:10.277709 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/debb1e25-3a83-46ad-b367-061391aae6f8-cilium-cgroup\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.277876 kubelet[2926]: I1213 04:55:10.277775 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/debb1e25-3a83-46ad-b367-061391aae6f8-clustermesh-secrets\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.277876 kubelet[2926]: I1213 04:55:10.277815 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/debb1e25-3a83-46ad-b367-061391aae6f8-bpf-maps\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.277876 kubelet[2926]: I1213 04:55:10.277857 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/debb1e25-3a83-46ad-b367-061391aae6f8-hostproc\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.278043 kubelet[2926]: I1213 04:55:10.277899 
2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/debb1e25-3a83-46ad-b367-061391aae6f8-etc-cni-netd\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.278043 kubelet[2926]: I1213 04:55:10.277942 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/debb1e25-3a83-46ad-b367-061391aae6f8-lib-modules\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.278043 kubelet[2926]: I1213 04:55:10.277988 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/debb1e25-3a83-46ad-b367-061391aae6f8-host-proc-sys-kernel\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.278884 kubelet[2926]: I1213 04:55:10.278061 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/debb1e25-3a83-46ad-b367-061391aae6f8-cni-path\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.278884 kubelet[2926]: I1213 04:55:10.278100 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/debb1e25-3a83-46ad-b367-061391aae6f8-cilium-ipsec-secrets\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.278884 kubelet[2926]: I1213 04:55:10.278136 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/debb1e25-3a83-46ad-b367-061391aae6f8-xtables-lock\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.278884 kubelet[2926]: I1213 04:55:10.278169 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/debb1e25-3a83-46ad-b367-061391aae6f8-host-proc-sys-net\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.278884 kubelet[2926]: I1213 04:55:10.278199 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/debb1e25-3a83-46ad-b367-061391aae6f8-hubble-tls\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.278884 kubelet[2926]: I1213 04:55:10.278230 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djc2w\" (UniqueName: \"kubernetes.io/projected/debb1e25-3a83-46ad-b367-061391aae6f8-kube-api-access-djc2w\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.279336 kubelet[2926]: I1213 04:55:10.278261 2926 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/debb1e25-3a83-46ad-b367-061391aae6f8-cilium-run\") pod \"cilium-99djs\" (UID: \"debb1e25-3a83-46ad-b367-061391aae6f8\") " pod="kube-system/cilium-99djs" Dec 13 04:55:10.335672 sshd[4691]: pam_unix(sshd:session): session closed for user core Dec 13 04:55:10.375086 systemd[1]: sshd@25-10.244.18.230:22-147.75.109.163:54106.service: Deactivated successfully. Dec 13 04:55:10.389434 systemd[1]: session-28.scope: Deactivated successfully. 
Dec 13 04:55:10.400709 systemd-logind[1609]: Session 28 logged out. Waiting for processes to exit. Dec 13 04:55:10.428831 systemd-logind[1609]: Removed session 28. Dec 13 04:55:10.486284 systemd[1]: Started sshd@26-10.244.18.230:22-147.75.109.163:54120.service - OpenSSH per-connection server daemon (147.75.109.163:54120). Dec 13 04:55:10.552161 containerd[1627]: time="2024-12-13T04:55:10.551988201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-99djs,Uid:debb1e25-3a83-46ad-b367-061391aae6f8,Namespace:kube-system,Attempt:0,}" Dec 13 04:55:10.616011 containerd[1627]: time="2024-12-13T04:55:10.615294775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:55:10.616011 containerd[1627]: time="2024-12-13T04:55:10.615413393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:55:10.616011 containerd[1627]: time="2024-12-13T04:55:10.615441589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:55:10.616011 containerd[1627]: time="2024-12-13T04:55:10.615813391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:55:10.694929 containerd[1627]: time="2024-12-13T04:55:10.693482570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-99djs,Uid:debb1e25-3a83-46ad-b367-061391aae6f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d10458d7f59e06081747cd01774254b60f3d48f7c8929e69dfb13584d34d47df\"" Dec 13 04:55:10.702833 containerd[1627]: time="2024-12-13T04:55:10.702757181Z" level=info msg="CreateContainer within sandbox \"d10458d7f59e06081747cd01774254b60f3d48f7c8929e69dfb13584d34d47df\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:55:10.719156 containerd[1627]: time="2024-12-13T04:55:10.719093797Z" level=info msg="CreateContainer within sandbox \"d10458d7f59e06081747cd01774254b60f3d48f7c8929e69dfb13584d34d47df\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c3bf5bd9ec271402bd1523ae8aeda3c08ea154d3a8dfd46192ae88b7ff1d6769\"" Dec 13 04:55:10.720159 containerd[1627]: time="2024-12-13T04:55:10.720123424Z" level=info msg="StartContainer for \"c3bf5bd9ec271402bd1523ae8aeda3c08ea154d3a8dfd46192ae88b7ff1d6769\"" Dec 13 04:55:10.846551 containerd[1627]: time="2024-12-13T04:55:10.843256435Z" level=info msg="StartContainer for \"c3bf5bd9ec271402bd1523ae8aeda3c08ea154d3a8dfd46192ae88b7ff1d6769\" returns successfully" Dec 13 04:55:10.900552 containerd[1627]: time="2024-12-13T04:55:10.900416035Z" level=info msg="shim disconnected" id=c3bf5bd9ec271402bd1523ae8aeda3c08ea154d3a8dfd46192ae88b7ff1d6769 namespace=k8s.io Dec 13 04:55:10.900552 containerd[1627]: time="2024-12-13T04:55:10.900548699Z" level=warning msg="cleaning up after shim disconnected" id=c3bf5bd9ec271402bd1523ae8aeda3c08ea154d3a8dfd46192ae88b7ff1d6769 namespace=k8s.io Dec 13 04:55:10.900552 containerd[1627]: time="2024-12-13T04:55:10.900571279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 04:55:11.386852 sshd[4709]: Accepted publickey for core from 147.75.109.163 
port 54120 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:55:11.389193 sshd[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:55:11.397426 systemd-logind[1609]: New session 29 of user core.
Dec 13 04:55:11.410416 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 04:55:11.487164 containerd[1627]: time="2024-12-13T04:55:11.487111851Z" level=info msg="CreateContainer within sandbox \"d10458d7f59e06081747cd01774254b60f3d48f7c8929e69dfb13584d34d47df\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 04:55:11.537708 containerd[1627]: time="2024-12-13T04:55:11.536954243Z" level=info msg="CreateContainer within sandbox \"d10458d7f59e06081747cd01774254b60f3d48f7c8929e69dfb13584d34d47df\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d44e9ce1b796b7481db22ea14bebd22bb863e2e6a2939a0465efb29763e1b12b\""
Dec 13 04:55:11.539169 containerd[1627]: time="2024-12-13T04:55:11.539121252Z" level=info msg="StartContainer for \"d44e9ce1b796b7481db22ea14bebd22bb863e2e6a2939a0465efb29763e1b12b\""
Dec 13 04:55:11.626038 containerd[1627]: time="2024-12-13T04:55:11.625982140Z" level=info msg="StartContainer for \"d44e9ce1b796b7481db22ea14bebd22bb863e2e6a2939a0465efb29763e1b12b\" returns successfully"
Dec 13 04:55:11.667465 containerd[1627]: time="2024-12-13T04:55:11.667062878Z" level=info msg="shim disconnected" id=d44e9ce1b796b7481db22ea14bebd22bb863e2e6a2939a0465efb29763e1b12b namespace=k8s.io
Dec 13 04:55:11.667465 containerd[1627]: time="2024-12-13T04:55:11.667166177Z" level=warning msg="cleaning up after shim disconnected" id=d44e9ce1b796b7481db22ea14bebd22bb863e2e6a2939a0465efb29763e1b12b namespace=k8s.io
Dec 13 04:55:11.667465 containerd[1627]: time="2024-12-13T04:55:11.667182579Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 04:55:11.684942 containerd[1627]: time="2024-12-13T04:55:11.684870904Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:55:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 04:55:12.008195 sshd[4709]: pam_unix(sshd:session): session closed for user core
Dec 13 04:55:12.013062 systemd[1]: sshd@26-10.244.18.230:22-147.75.109.163:54120.service: Deactivated successfully.
Dec 13 04:55:12.016621 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 04:55:12.016677 systemd-logind[1609]: Session 29 logged out. Waiting for processes to exit.
Dec 13 04:55:12.020977 systemd-logind[1609]: Removed session 29.
Dec 13 04:55:12.159373 systemd[1]: Started sshd@27-10.244.18.230:22-147.75.109.163:54136.service - OpenSSH per-connection server daemon (147.75.109.163:54136).
Dec 13 04:55:12.413861 systemd[1]: run-containerd-runc-k8s.io-d44e9ce1b796b7481db22ea14bebd22bb863e2e6a2939a0465efb29763e1b12b-runc.qkbJw2.mount: Deactivated successfully.
Dec 13 04:55:12.414633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d44e9ce1b796b7481db22ea14bebd22bb863e2e6a2939a0465efb29763e1b12b-rootfs.mount: Deactivated successfully.
Dec 13 04:55:12.490257 containerd[1627]: time="2024-12-13T04:55:12.489999167Z" level=info msg="CreateContainer within sandbox \"d10458d7f59e06081747cd01774254b60f3d48f7c8929e69dfb13584d34d47df\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 04:55:12.519828 containerd[1627]: time="2024-12-13T04:55:12.519527973Z" level=info msg="CreateContainer within sandbox \"d10458d7f59e06081747cd01774254b60f3d48f7c8929e69dfb13584d34d47df\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c2711b896cea8591807d5a7f28c6e0355b8e1008a63c4f6b51b577856f24e4f9\""
Dec 13 04:55:12.520794 containerd[1627]: time="2024-12-13T04:55:12.520601975Z" level=info msg="StartContainer for \"c2711b896cea8591807d5a7f28c6e0355b8e1008a63c4f6b51b577856f24e4f9\""
Dec 13 04:55:12.637334 containerd[1627]: time="2024-12-13T04:55:12.637264845Z" level=info msg="StartContainer for \"c2711b896cea8591807d5a7f28c6e0355b8e1008a63c4f6b51b577856f24e4f9\" returns successfully"
Dec 13 04:55:12.683334 containerd[1627]: time="2024-12-13T04:55:12.682901997Z" level=info msg="shim disconnected" id=c2711b896cea8591807d5a7f28c6e0355b8e1008a63c4f6b51b577856f24e4f9 namespace=k8s.io
Dec 13 04:55:12.683334 containerd[1627]: time="2024-12-13T04:55:12.683211004Z" level=warning msg="cleaning up after shim disconnected" id=c2711b896cea8591807d5a7f28c6e0355b8e1008a63c4f6b51b577856f24e4f9 namespace=k8s.io
Dec 13 04:55:12.684006 containerd[1627]: time="2024-12-13T04:55:12.683880610Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 04:55:12.704090 containerd[1627]: time="2024-12-13T04:55:12.704017819Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:55:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 04:55:13.057867 sshd[4880]: Accepted publickey for core from 147.75.109.163 port 54136 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 04:55:13.060126 sshd[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 04:55:13.068099 systemd-logind[1609]: New session 30 of user core.
Dec 13 04:55:13.077196 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 13 04:55:13.413824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2711b896cea8591807d5a7f28c6e0355b8e1008a63c4f6b51b577856f24e4f9-rootfs.mount: Deactivated successfully.
Dec 13 04:55:13.497991 containerd[1627]: time="2024-12-13T04:55:13.497250493Z" level=info msg="CreateContainer within sandbox \"d10458d7f59e06081747cd01774254b60f3d48f7c8929e69dfb13584d34d47df\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 04:55:13.523904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1867354858.mount: Deactivated successfully.
Dec 13 04:55:13.529280 containerd[1627]: time="2024-12-13T04:55:13.529156674Z" level=info msg="CreateContainer within sandbox \"d10458d7f59e06081747cd01774254b60f3d48f7c8929e69dfb13584d34d47df\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8a6cf39c4b533eb9d51abf822cdc3f7a8e9edd596e7baaf718abaa98cc1e695f\""
Dec 13 04:55:13.532618 containerd[1627]: time="2024-12-13T04:55:13.531005074Z" level=info msg="StartContainer for \"8a6cf39c4b533eb9d51abf822cdc3f7a8e9edd596e7baaf718abaa98cc1e695f\""
Dec 13 04:55:13.681006 containerd[1627]: time="2024-12-13T04:55:13.680810610Z" level=info msg="StartContainer for \"8a6cf39c4b533eb9d51abf822cdc3f7a8e9edd596e7baaf718abaa98cc1e695f\" returns successfully"
Dec 13 04:55:13.734851 containerd[1627]: time="2024-12-13T04:55:13.731536299Z" level=info msg="shim disconnected" id=8a6cf39c4b533eb9d51abf822cdc3f7a8e9edd596e7baaf718abaa98cc1e695f namespace=k8s.io
Dec 13 04:55:13.734851 containerd[1627]: time="2024-12-13T04:55:13.731616424Z" level=warning msg="cleaning up after shim disconnected" id=8a6cf39c4b533eb9d51abf822cdc3f7a8e9edd596e7baaf718abaa98cc1e695f namespace=k8s.io
Dec 13 04:55:13.734851 containerd[1627]: time="2024-12-13T04:55:13.731634340Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 04:55:13.998203 kubelet[2926]: E1213 04:55:13.997991    2926 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 04:55:14.413934 systemd[1]: run-containerd-runc-k8s.io-8a6cf39c4b533eb9d51abf822cdc3f7a8e9edd596e7baaf718abaa98cc1e695f-runc.xL1zy0.mount: Deactivated successfully.
Dec 13 04:55:14.414186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a6cf39c4b533eb9d51abf822cdc3f7a8e9edd596e7baaf718abaa98cc1e695f-rootfs.mount: Deactivated successfully.
Dec 13 04:55:14.502070 containerd[1627]: time="2024-12-13T04:55:14.501815481Z" level=info msg="CreateContainer within sandbox \"d10458d7f59e06081747cd01774254b60f3d48f7c8929e69dfb13584d34d47df\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 04:55:14.534085 containerd[1627]: time="2024-12-13T04:55:14.530550930Z" level=info msg="CreateContainer within sandbox \"d10458d7f59e06081747cd01774254b60f3d48f7c8929e69dfb13584d34d47df\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc3d21c53f9b475da806c84152196783c3c9c4f951eb44d1dd6b05ff18f73970\""
Dec 13 04:55:14.536527 containerd[1627]: time="2024-12-13T04:55:14.535294612Z" level=info msg="StartContainer for \"bc3d21c53f9b475da806c84152196783c3c9c4f951eb44d1dd6b05ff18f73970\""
Dec 13 04:55:14.660123 containerd[1627]: time="2024-12-13T04:55:14.660006358Z" level=info msg="StartContainer for \"bc3d21c53f9b475da806c84152196783c3c9c4f951eb44d1dd6b05ff18f73970\" returns successfully"
Dec 13 04:55:15.421414 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 04:55:15.554801 kubelet[2926]: I1213 04:55:15.552252    2926 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-99djs" podStartSLOduration=5.552153247 podStartE2EDuration="5.552153247s" podCreationTimestamp="2024-12-13 04:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:55:15.549884414 +0000 UTC m=+162.105202930" watchObservedRunningTime="2024-12-13 04:55:15.552153247 +0000 UTC m=+162.107471758"
Dec 13 04:55:17.290314 kubelet[2926]: I1213 04:55:17.290231    2926 setters.go:568] "Node became not ready" node="srv-wy7pj.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T04:55:17Z","lastTransitionTime":"2024-12-13T04:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 04:55:18.396991 kubelet[2926]: E1213 04:55:18.396929    2926 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42910->127.0.0.1:38493: write tcp 127.0.0.1:42910->127.0.0.1:38493: write: broken pipe
Dec 13 04:55:19.304972 systemd-networkd[1257]: lxc_health: Link UP
Dec 13 04:55:19.322059 systemd-networkd[1257]: lxc_health: Gained carrier
Dec 13 04:55:20.801126 systemd-networkd[1257]: lxc_health: Gained IPv6LL
Dec 13 04:55:25.316392 systemd[1]: run-containerd-runc-k8s.io-bc3d21c53f9b475da806c84152196783c3c9c4f951eb44d1dd6b05ff18f73970-runc.49P42n.mount: Deactivated successfully.
Dec 13 04:55:25.574899 sshd[4880]: pam_unix(sshd:session): session closed for user core
Dec 13 04:55:25.583145 systemd[1]: sshd@27-10.244.18.230:22-147.75.109.163:54136.service: Deactivated successfully.
Dec 13 04:55:25.597276 systemd-logind[1609]: Session 30 logged out. Waiting for processes to exit.
Dec 13 04:55:25.598440 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 04:55:25.603970 systemd-logind[1609]: Removed session 30.