Dec 13 05:19:22.033731 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 05:19:22.033781 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 05:19:22.033795 kernel: BIOS-provided physical RAM map:
Dec 13 05:19:22.033812 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 05:19:22.033822 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 05:19:22.033832 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 05:19:22.033843 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 05:19:22.033854 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 05:19:22.033864 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 05:19:22.033874 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 05:19:22.033885 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 05:19:22.033895 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 05:19:22.033911 kernel: NX (Execute Disable) protection: active
Dec 13 05:19:22.033921 kernel: APIC: Static calls initialized
Dec 13 05:19:22.033934 kernel: SMBIOS 2.8 present.
Dec 13 05:19:22.033945 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 05:19:22.033957 kernel: Hypervisor detected: KVM
Dec 13 05:19:22.033973 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 05:19:22.033984 kernel: kvm-clock: using sched offset of 4462993265 cycles
Dec 13 05:19:22.033996 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 05:19:22.034008 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 05:19:22.034020 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 05:19:22.034032 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 05:19:22.034043 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 05:19:22.034054 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 05:19:22.034066 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 05:19:22.034082 kernel: Using GB pages for direct mapping
Dec 13 05:19:22.034093 kernel: ACPI: Early table checksum verification disabled
Dec 13 05:19:22.036170 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 05:19:22.036188 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:19:22.036201 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:19:22.036213 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:19:22.036224 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 05:19:22.036236 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:19:22.036248 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:19:22.036267 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:19:22.036279 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 05:19:22.036290 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 05:19:22.036302 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 05:19:22.036314 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 05:19:22.036332 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 05:19:22.036344 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 05:19:22.036361 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 05:19:22.036373 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 05:19:22.036385 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 05:19:22.036397 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 05:19:22.036409 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 05:19:22.036421 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 05:19:22.036432 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 05:19:22.036449 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 05:19:22.036461 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 05:19:22.036473 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 05:19:22.036485 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 05:19:22.036497 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 05:19:22.036508 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 05:19:22.036520 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 05:19:22.036532 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 05:19:22.036543 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 05:19:22.036555 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 05:19:22.036572 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 05:19:22.036584 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 05:19:22.036596 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 05:19:22.036608 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 05:19:22.036620 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 05:19:22.036632 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 05:19:22.036644 kernel: Zone ranges:
Dec 13 05:19:22.036656 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 05:19:22.036668 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 05:19:22.036685 kernel: Normal empty
Dec 13 05:19:22.036697 kernel: Movable zone start for each node
Dec 13 05:19:22.036709 kernel: Early memory node ranges
Dec 13 05:19:22.036721 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 05:19:22.036733 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 05:19:22.036744 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 05:19:22.036769 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 05:19:22.036782 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 05:19:22.036794 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 05:19:22.036806 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 05:19:22.036824 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 05:19:22.036836 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 05:19:22.036848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 05:19:22.036860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 05:19:22.036872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 05:19:22.036884 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 05:19:22.036896 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 05:19:22.036908 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 05:19:22.036919 kernel: TSC deadline timer available
Dec 13 05:19:22.036936 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 05:19:22.036948 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 05:19:22.036960 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 05:19:22.036972 kernel: Booting paravirtualized kernel on KVM
Dec 13 05:19:22.036984 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 05:19:22.036996 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Dec 13 05:19:22.037008 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Dec 13 05:19:22.037020 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Dec 13 05:19:22.037032 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 05:19:22.037049 kernel: kvm-guest: PV spinlocks enabled
Dec 13 05:19:22.037061 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 05:19:22.037074 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 05:19:22.037087 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 05:19:22.037110 kernel: random: crng init done
Dec 13 05:19:22.037125 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 05:19:22.037137 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 05:19:22.037149 kernel: Fallback order for Node 0: 0
Dec 13 05:19:22.037167 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 05:19:22.037179 kernel: Policy zone: DMA32
Dec 13 05:19:22.037191 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 05:19:22.037203 kernel: software IO TLB: area num 16.
Dec 13 05:19:22.037215 kernel: Memory: 1901520K/2096616K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 194836K reserved, 0K cma-reserved)
Dec 13 05:19:22.037228 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 05:19:22.037240 kernel: Kernel/User page tables isolation: enabled
Dec 13 05:19:22.037251 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 05:19:22.037263 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 05:19:22.037280 kernel: Dynamic Preempt: voluntary
Dec 13 05:19:22.037292 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 05:19:22.037305 kernel: rcu: RCU event tracing is enabled.
Dec 13 05:19:22.037318 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 05:19:22.037330 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 05:19:22.037355 kernel: Rude variant of Tasks RCU enabled.
Dec 13 05:19:22.037372 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 05:19:22.037385 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 05:19:22.037397 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 05:19:22.037410 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 05:19:22.037422 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 05:19:22.037439 kernel: Console: colour VGA+ 80x25
Dec 13 05:19:22.037452 kernel: printk: console [tty0] enabled
Dec 13 05:19:22.037465 kernel: printk: console [ttyS0] enabled
Dec 13 05:19:22.037477 kernel: ACPI: Core revision 20230628
Dec 13 05:19:22.037490 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 05:19:22.037502 kernel: x2apic enabled
Dec 13 05:19:22.037520 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 05:19:22.037533 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 05:19:22.037546 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 05:19:22.037558 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 05:19:22.037571 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 05:19:22.037584 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 05:19:22.037596 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 05:19:22.037608 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 05:19:22.037620 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 05:19:22.037638 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 05:19:22.037650 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 05:19:22.037663 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 05:19:22.037675 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 05:19:22.037687 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 05:19:22.037700 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 05:19:22.037712 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 05:19:22.037724 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 05:19:22.037737 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 05:19:22.037749 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 05:19:22.037773 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 05:19:22.037791 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 05:19:22.037804 kernel: Freeing SMP alternatives memory: 32K
Dec 13 05:19:22.037816 kernel: pid_max: default: 32768 minimum: 301
Dec 13 05:19:22.037829 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 05:19:22.037841 kernel: landlock: Up and running.
Dec 13 05:19:22.037853 kernel: SELinux: Initializing.
Dec 13 05:19:22.037866 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 05:19:22.037878 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 05:19:22.037891 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 05:19:22.037904 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 05:19:22.037917 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 05:19:22.037935 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 05:19:22.037947 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 05:19:22.037960 kernel: signal: max sigframe size: 1776
Dec 13 05:19:22.037973 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 05:19:22.037985 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 05:19:22.037998 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 05:19:22.038011 kernel: smp: Bringing up secondary CPUs ...
Dec 13 05:19:22.038023 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 05:19:22.038036 kernel: .... node #0, CPUs: #1
Dec 13 05:19:22.038053 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 05:19:22.038066 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 05:19:22.038078 kernel: smpboot: Max logical packages: 16
Dec 13 05:19:22.038091 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 05:19:22.042166 kernel: devtmpfs: initialized
Dec 13 05:19:22.042194 kernel: x86/mm: Memory block size: 128MB
Dec 13 05:19:22.042208 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 05:19:22.042222 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 05:19:22.042235 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 05:19:22.042259 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 05:19:22.042273 kernel: audit: initializing netlink subsys (disabled)
Dec 13 05:19:22.042286 kernel: audit: type=2000 audit(1734067160.165:1): state=initialized audit_enabled=0 res=1
Dec 13 05:19:22.042299 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 05:19:22.042312 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 05:19:22.042325 kernel: cpuidle: using governor menu
Dec 13 05:19:22.042338 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 05:19:22.042350 kernel: dca service started, version 1.12.1
Dec 13 05:19:22.042363 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 05:19:22.042382 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 05:19:22.042395 kernel: PCI: Using configuration type 1 for base access
Dec 13 05:19:22.042408 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 05:19:22.042421 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 05:19:22.042434 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 05:19:22.042447 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 05:19:22.042460 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 05:19:22.042473 kernel: ACPI: Added _OSI(Module Device)
Dec 13 05:19:22.042485 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 05:19:22.042503 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 05:19:22.042516 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 05:19:22.042529 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 05:19:22.042542 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 05:19:22.042555 kernel: ACPI: Interpreter enabled
Dec 13 05:19:22.042567 kernel: ACPI: PM: (supports S0 S5)
Dec 13 05:19:22.042580 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 05:19:22.042593 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 05:19:22.042606 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 05:19:22.042624 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 05:19:22.042637 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 05:19:22.042914 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 05:19:22.043095 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 05:19:22.043294 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 05:19:22.043314 kernel: PCI host bridge to bus 0000:00
Dec 13 05:19:22.043496 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 05:19:22.043657 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 05:19:22.043821 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 05:19:22.043969 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 05:19:22.046161 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 05:19:22.046336 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 05:19:22.046492 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 05:19:22.046684 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 05:19:22.046911 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 05:19:22.047083 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 05:19:22.048316 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 05:19:22.048486 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 05:19:22.048651 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 05:19:22.048847 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 05:19:22.049027 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 05:19:22.051266 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 05:19:22.051448 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 05:19:22.051629 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 05:19:22.051812 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 05:19:22.051994 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 05:19:22.052203 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 05:19:22.052392 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 05:19:22.052556 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 05:19:22.052750 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 05:19:22.052930 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 05:19:22.055132 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 05:19:22.055338 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 05:19:22.055522 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 05:19:22.055691 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 05:19:22.055890 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 05:19:22.056059 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 05:19:22.056269 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 05:19:22.056434 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 05:19:22.056604 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 05:19:22.056794 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 05:19:22.056960 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 05:19:22.059408 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 05:19:22.059602 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 05:19:22.059805 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 05:19:22.059977 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 05:19:22.060231 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 05:19:22.060401 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 05:19:22.060564 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 05:19:22.060739 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 05:19:22.060918 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 05:19:22.061541 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 05:19:22.061738 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 05:19:22.061928 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 05:19:22.062093 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 05:19:22.062287 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 05:19:22.062464 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 05:19:22.062652 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 05:19:22.062852 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 05:19:22.063021 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 05:19:22.063218 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 05:19:22.063401 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 05:19:22.063573 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 05:19:22.063742 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 05:19:22.063922 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 05:19:22.064095 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 05:19:22.064300 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 05:19:22.064475 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 05:19:22.064642 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 05:19:22.064821 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 05:19:22.064987 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 05:19:22.065173 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 05:19:22.065339 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 05:19:22.065512 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 05:19:22.065682 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 05:19:22.065861 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 05:19:22.066028 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 05:19:22.066216 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 05:19:22.066381 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 05:19:22.066545 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 05:19:22.066711 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 05:19:22.066897 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 05:19:22.067066 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 05:19:22.067254 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 05:19:22.067422 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 05:19:22.067588 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 05:19:22.067608 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 05:19:22.067621 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 05:19:22.067634 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 05:19:22.067654 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 05:19:22.067668 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 05:19:22.067680 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 05:19:22.067693 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 05:19:22.067706 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 05:19:22.067719 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 05:19:22.067732 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 05:19:22.067744 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 05:19:22.067771 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 05:19:22.067791 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 05:19:22.067804 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 05:19:22.067817 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 05:19:22.067829 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 05:19:22.067842 kernel: iommu: Default domain type: Translated
Dec 13 05:19:22.067855 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 05:19:22.067868 kernel: PCI: Using ACPI for IRQ routing
Dec 13 05:19:22.067880 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 05:19:22.067893 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 05:19:22.067911 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 05:19:22.068076 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 05:19:22.068339 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 05:19:22.068501 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 05:19:22.068521 kernel: vgaarb: loaded
Dec 13 05:19:22.068534 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 05:19:22.068547 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 05:19:22.068561 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 05:19:22.068581 kernel: pnp: PnP ACPI init
Dec 13 05:19:22.068778 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 05:19:22.068800 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 05:19:22.068813 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 05:19:22.068826 kernel: NET: Registered PF_INET protocol family
Dec 13 05:19:22.068839 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 05:19:22.068852 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 05:19:22.068865 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 05:19:22.068878 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 05:19:22.068898 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 05:19:22.068911 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 05:19:22.068924 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 05:19:22.068936 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 05:19:22.068949 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 05:19:22.068962 kernel: NET: Registered PF_XDP protocol family
Dec 13 05:19:22.069141 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 05:19:22.069346 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 05:19:22.069529 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 05:19:22.069694 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 05:19:22.069871 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 05:19:22.070035 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 05:19:22.070291 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 05:19:22.070453 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 05:19:22.070622 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 05:19:22.070796 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 05:19:22.070957 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 05:19:22.071133 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 05:19:22.071297 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 05:19:22.071457 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 05:19:22.071619 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 05:19:22.071805 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 05:19:22.072003 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 05:19:22.072204 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 05:19:22.072371 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 05:19:22.072534 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 05:19:22.072697 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 05:19:22.072876 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 05:19:22.073040 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 05:19:22.073259 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 05:19:22.073498 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 05:19:22.073801 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 05:19:22.073978 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 05:19:22.074170 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 05:19:22.074335 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 05:19:22.074528 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 05:19:22.074703 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 05:19:22.074888 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 05:19:22.075064 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 05:19:22.075307 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 05:19:22.075472 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 05:19:22.075634 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 05:19:22.075811 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 05:19:22.075988 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 05:19:22.076169 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 05:19:22.076341 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 05:19:22.076502 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 05:19:22.076686 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 05:19:22.076899 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 05:19:22.077077 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 05:19:22.077326 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 05:19:22.077489 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 05:19:22.077652 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 05:19:22.077829 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 05:19:22.077992 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 05:19:22.078171 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 05:19:22.078328 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 05:19:22.078476 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 05:19:22.078634 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 05:19:22.078796 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 05:19:22.078946 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 05:19:22.079116 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 05:19:22.079301 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 05:19:22.079461 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 05:19:22.079629 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 05:19:22.079828 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 05:19:22.080011 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 13 05:19:22.080271 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 05:19:22.080427 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 05:19:22.080590 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 13 05:19:22.080743 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 05:19:22.080911 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 05:19:22.081085 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 05:19:22.081325 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 05:19:22.081481 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 05:19:22.081652 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 13 05:19:22.082226 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 05:19:22.082384 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 05:19:22.082551 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 13 05:19:22.082714 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 05:19:22.082885 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 05:19:22.083054 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 13 05:19:22.083305 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 05:19:22.083460 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 05:19:22.083632 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 13 05:19:22.083799 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 05:19:22.083961 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 05:19:22.083983 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 05:19:22.083997 kernel: PCI: CLS 0 bytes, default 64
Dec 13 05:19:22.084011 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec
13 05:19:22.084024 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 05:19:22.084038 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 05:19:22.084052 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 05:19:22.084066 kernel: Initialise system trusted keyrings Dec 13 05:19:22.084086 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 05:19:22.084113 kernel: Key type asymmetric registered Dec 13 05:19:22.084129 kernel: Asymmetric key parser 'x509' registered Dec 13 05:19:22.084152 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 05:19:22.084166 kernel: io scheduler mq-deadline registered Dec 13 05:19:22.084180 kernel: io scheduler kyber registered Dec 13 05:19:22.084193 kernel: io scheduler bfq registered Dec 13 05:19:22.084361 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 05:19:22.084527 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 05:19:22.084700 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:19:22.084881 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 05:19:22.085046 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 05:19:22.085279 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:19:22.085449 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 05:19:22.085611 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 05:19:22.085796 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:19:22.085962 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 
05:19:22.086141 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 05:19:22.086307 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:19:22.086471 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 05:19:22.086632 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 05:19:22.086817 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:19:22.086987 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 05:19:22.087230 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 05:19:22.087396 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:19:22.087560 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 05:19:22.087721 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 05:19:22.087906 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:19:22.088070 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 05:19:22.088249 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 05:19:22.088412 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:19:22.088433 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 05:19:22.088448 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 05:19:22.088469 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 05:19:22.088483 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 05:19:22.088497 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 05:19:22.088511 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 05:19:22.088525 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 05:19:22.088538 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 05:19:22.088552 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 05:19:22.088720 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 05:19:22.088902 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 05:19:22.089057 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T05:19:21 UTC (1734067161) Dec 13 05:19:22.089264 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 05:19:22.089285 kernel: intel_pstate: CPU model not supported Dec 13 05:19:22.089299 kernel: NET: Registered PF_INET6 protocol family Dec 13 05:19:22.089312 kernel: Segment Routing with IPv6 Dec 13 05:19:22.089325 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 05:19:22.089339 kernel: NET: Registered PF_PACKET protocol family Dec 13 05:19:22.089352 kernel: Key type dns_resolver registered Dec 13 05:19:22.089373 kernel: IPI shorthand broadcast: enabled Dec 13 05:19:22.089387 kernel: sched_clock: Marking stable (1328015811, 232301980)->(1703369508, -143051717) Dec 13 05:19:22.089401 kernel: registered taskstats version 1 Dec 13 05:19:22.089414 kernel: Loading compiled-in X.509 certificates Dec 13 05:19:22.089428 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 05:19:22.089441 kernel: Key type .fscrypt registered Dec 13 05:19:22.089455 kernel: Key type fscrypt-provisioning registered Dec 13 05:19:22.089469 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 05:19:22.089482 kernel: ima: Allocated hash algorithm: sha1
Dec 13 05:19:22.089501 kernel: ima: No architecture policies found
Dec 13 05:19:22.089514 kernel: clk: Disabling unused clocks
Dec 13 05:19:22.089528 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 05:19:22.089541 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 05:19:22.089555 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 05:19:22.089568 kernel: Run /init as init process
Dec 13 05:19:22.089581 kernel: with arguments:
Dec 13 05:19:22.089595 kernel: /init
Dec 13 05:19:22.089608 kernel: with environment:
Dec 13 05:19:22.089626 kernel: HOME=/
Dec 13 05:19:22.089639 kernel: TERM=linux
Dec 13 05:19:22.089652 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 05:19:22.089669 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 05:19:22.089685 systemd[1]: Detected virtualization kvm.
Dec 13 05:19:22.089700 systemd[1]: Detected architecture x86-64.
Dec 13 05:19:22.089713 systemd[1]: Running in initrd.
Dec 13 05:19:22.089727 systemd[1]: No hostname configured, using default hostname.
Dec 13 05:19:22.089746 systemd[1]: Hostname set to .
Dec 13 05:19:22.089777 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 05:19:22.089792 systemd[1]: Queued start job for default target initrd.target.
Dec 13 05:19:22.089806 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 05:19:22.089820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 05:19:22.089835 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 05:19:22.089850 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 05:19:22.089871 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 05:19:22.089886 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 05:19:22.089902 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 05:19:22.089918 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 05:19:22.089932 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 05:19:22.089946 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 05:19:22.089961 systemd[1]: Reached target paths.target - Path Units.
Dec 13 05:19:22.089980 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 05:19:22.089995 systemd[1]: Reached target swap.target - Swaps.
Dec 13 05:19:22.090009 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 05:19:22.090024 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 05:19:22.090039 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 05:19:22.090053 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 05:19:22.090067 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 05:19:22.090082 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 05:19:22.090096 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 05:19:22.090133 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 05:19:22.090147 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 05:19:22.090162 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 05:19:22.090176 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 05:19:22.090190 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 05:19:22.090205 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 05:19:22.090219 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 05:19:22.090233 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 05:19:22.090290 systemd-journald[200]: Collecting audit messages is disabled.
Dec 13 05:19:22.090332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 05:19:22.090347 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 05:19:22.090362 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 05:19:22.090381 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 05:19:22.090397 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 05:19:22.090412 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 05:19:22.090427 systemd-journald[200]: Journal started
Dec 13 05:19:22.090458 systemd-journald[200]: Runtime Journal (/run/log/journal/256cb57081ed460d9fedde7da1bbb8f2) is 4.7M, max 38.0M, 33.2M free.
Dec 13 05:19:22.075176 systemd-modules-load[201]: Inserted module 'overlay'
Dec 13 05:19:22.153560 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 05:19:22.153595 kernel: Bridge firewalling registered
Dec 13 05:19:22.153614 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 05:19:22.116587 systemd-modules-load[201]: Inserted module 'br_netfilter'
Dec 13 05:19:22.154575 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 05:19:22.156012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 05:19:22.169320 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 05:19:22.172301 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 05:19:22.179089 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 05:19:22.188299 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 05:19:22.197712 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 05:19:22.211688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 05:19:22.212860 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 05:19:22.223361 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 05:19:22.225993 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 05:19:22.235270 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 05:19:22.242883 dracut-cmdline[234]: dracut-dracut-053
Dec 13 05:19:22.246889 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 05:19:22.283291 systemd-resolved[237]: Positive Trust Anchors:
Dec 13 05:19:22.283318 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 05:19:22.283362 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 05:19:22.289559 systemd-resolved[237]: Defaulting to hostname 'linux'.
Dec 13 05:19:22.291394 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 05:19:22.292280 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 05:19:22.360822 kernel: SCSI subsystem initialized
Dec 13 05:19:22.372156 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 05:19:22.386142 kernel: iscsi: registered transport (tcp)
Dec 13 05:19:22.413027 kernel: iscsi: registered transport (qla4xxx)
Dec 13 05:19:22.413141 kernel: QLogic iSCSI HBA Driver
Dec 13 05:19:22.470730 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 05:19:22.476316 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 05:19:22.521031 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 05:19:22.521148 kernel: device-mapper: uevent: version 1.0.3
Dec 13 05:19:22.524145 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 05:19:22.572184 kernel: raid6: sse2x4 gen() 13662 MB/s
Dec 13 05:19:22.599960 kernel: raid6: sse2x2 gen() 9187 MB/s
Dec 13 05:19:22.610909 kernel: raid6: sse2x1 gen() 9247 MB/s
Dec 13 05:19:22.611005 kernel: raid6: using algorithm sse2x4 gen() 13662 MB/s
Dec 13 05:19:22.629889 kernel: raid6: .... xor() 7349 MB/s, rmw enabled
Dec 13 05:19:22.630004 kernel: raid6: using ssse3x2 recovery algorithm
Dec 13 05:19:22.657176 kernel: xor: automatically using best checksumming function avx
Dec 13 05:19:22.847159 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 05:19:22.862813 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 05:19:22.875453 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 05:19:22.891318 systemd-udevd[419]: Using default interface naming scheme 'v255'.
Dec 13 05:19:22.899495 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 05:19:22.922421 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 05:19:22.942526 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Dec 13 05:19:22.982643 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 05:19:22.989351 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 05:19:23.094763 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 05:19:23.102365 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 05:19:23.128713 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 05:19:23.130842 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 05:19:23.132711 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 05:19:23.135032 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 05:19:23.144326 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 05:19:23.175369 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 05:19:23.220133 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Dec 13 05:19:23.290819 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Dec 13 05:19:23.291023 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 05:19:23.291046 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 05:19:23.291065 kernel: GPT:17805311 != 125829119
Dec 13 05:19:23.291083 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 05:19:23.291122 kernel: GPT:17805311 != 125829119
Dec 13 05:19:23.291145 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 05:19:23.291174 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 05:19:23.262767 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 05:19:23.262957 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 05:19:23.264043 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 05:19:23.264822 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 05:19:23.264992 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 05:19:23.266261 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 05:19:23.279469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 05:19:23.300121 kernel: ACPI: bus type USB registered
Dec 13 05:19:23.303126 kernel: usbcore: registered new interface driver usbfs
Dec 13 05:19:23.303160 kernel: usbcore: registered new interface driver hub
Dec 13 05:19:23.304595 kernel: usbcore: registered new device driver usb
Dec 13 05:19:23.314128 kernel: AVX version of gcm_enc/dec engaged.
Dec 13 05:19:23.322137 kernel: AES CTR mode by8 optimization enabled
Dec 13 05:19:23.323124 kernel: libata version 3.00 loaded.
Dec 13 05:19:23.340145 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 05:19:23.380183 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 05:19:23.380217 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 05:19:23.380440 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 05:19:23.380696 kernel: scsi host0: ahci
Dec 13 05:19:23.380949 kernel: scsi host1: ahci
Dec 13 05:19:23.381197 kernel: scsi host2: ahci
Dec 13 05:19:23.381503 kernel: scsi host3: ahci
Dec 13 05:19:23.381694 kernel: scsi host4: ahci
Dec 13 05:19:23.381897 kernel: scsi host5: ahci
Dec 13 05:19:23.382083 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Dec 13 05:19:23.382792 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Dec 13 05:19:23.382817 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Dec 13 05:19:23.382836 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Dec 13 05:19:23.382853 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Dec 13 05:19:23.382871 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Dec 13 05:19:23.388069 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (480)
Dec 13 05:19:23.410150 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (483)
Dec 13 05:19:23.425801 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 05:19:23.450624 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 05:19:23.459342 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 05:19:23.465506 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 05:19:23.466347 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 05:19:23.475650 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 05:19:23.482300 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 05:19:23.485281 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 05:19:23.493247 disk-uuid[564]: Primary Header is updated.
Dec 13 05:19:23.493247 disk-uuid[564]: Secondary Entries is updated.
Dec 13 05:19:23.493247 disk-uuid[564]: Secondary Header is updated.
Dec 13 05:19:23.501932 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 05:19:23.509123 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 05:19:23.515126 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 05:19:23.521777 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 05:19:23.693365 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 05:19:23.693453 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 05:19:23.693476 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 05:19:23.694353 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 05:19:23.697127 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 05:19:23.699198 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 05:19:23.719336 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Dec 13 05:19:23.739014 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Dec 13 05:19:23.739281 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 05:19:23.739488 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Dec 13 05:19:23.739701 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Dec 13 05:19:23.739922 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 05:19:23.740136 kernel: hub 1-0:1.0: USB hub found
Dec 13 05:19:23.740366 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 05:19:23.740562 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 05:19:23.740850 kernel: hub 2-0:1.0: USB hub found
Dec 13 05:19:23.741080 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 05:19:23.981809 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 13 05:19:24.116137 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 05:19:24.123667 kernel: usbcore: registered new interface driver usbhid
Dec 13 05:19:24.123759 kernel: usbhid: USB HID core driver
Dec 13 05:19:24.131139 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Dec 13 05:19:24.135131 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Dec 13 05:19:24.519949 disk-uuid[565]: The operation has completed successfully.
Dec 13 05:19:24.521038 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 05:19:24.575182 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 05:19:24.577093 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 05:19:24.595381 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 05:19:24.601907 sh[591]: Success
Dec 13 05:19:24.619160 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Dec 13 05:19:24.688053 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 05:19:24.697408 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 05:19:24.699359 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 05:19:24.730255 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 05:19:24.730320 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 05:19:24.732388 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 05:19:24.734576 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 05:19:24.736242 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 05:19:24.745859 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 05:19:24.747267 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 05:19:24.756346 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 05:19:24.758286 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 05:19:24.777610 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 05:19:24.777673 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 05:19:24.777705 kernel: BTRFS info (device vda6): using free space tree
Dec 13 05:19:24.782123 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 05:19:24.796056 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 05:19:24.798596 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 05:19:24.806172 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 05:19:24.816395 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 05:19:24.919036 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 05:19:24.938763 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 05:19:24.972524 ignition[686]: Ignition 2.19.0
Dec 13 05:19:24.972547 ignition[686]: Stage: fetch-offline
Dec 13 05:19:24.972631 ignition[686]: no configs at "/usr/lib/ignition/base.d"
Dec 13 05:19:24.972650 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 05:19:24.975202 systemd-networkd[774]: lo: Link UP
Dec 13 05:19:24.972830 ignition[686]: parsed url from cmdline: ""
Dec 13 05:19:24.975209 systemd-networkd[774]: lo: Gained carrier
Dec 13 05:19:24.972837 ignition[686]: no config URL provided
Dec 13 05:19:24.976533 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 05:19:24.972847 ignition[686]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 05:19:24.978970 systemd-networkd[774]: Enumeration completed
Dec 13 05:19:24.972862 ignition[686]: no config at "/usr/lib/ignition/user.ign"
Dec 13 05:19:24.979159 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 05:19:24.972872 ignition[686]: failed to fetch config: resource requires networking
Dec 13 05:19:24.980204 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 05:19:24.973163 ignition[686]: Ignition finished successfully
Dec 13 05:19:24.980210 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 05:19:24.981451 systemd[1]: Reached target network.target - Network.
Dec 13 05:19:24.981816 systemd-networkd[774]: eth0: Link UP
Dec 13 05:19:24.981822 systemd-networkd[774]: eth0: Gained carrier
Dec 13 05:19:24.981834 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 05:19:24.991345 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 05:19:25.013509 ignition[781]: Ignition 2.19.0
Dec 13 05:19:25.013536 ignition[781]: Stage: fetch
Dec 13 05:19:25.013895 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Dec 13 05:19:25.014274 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 05:19:25.014417 ignition[781]: parsed url from cmdline: ""
Dec 13 05:19:25.014433 ignition[781]: no config URL provided
Dec 13 05:19:25.014451 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 05:19:25.014470 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Dec 13 05:19:25.014602 ignition[781]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 05:19:25.014660 ignition[781]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 05:19:25.014672 ignition[781]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 05:19:25.018394 ignition[781]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 13 05:19:25.044267 systemd-networkd[774]: eth0: DHCPv4 address 10.244.19.70/30, gateway 10.244.19.69 acquired from 10.244.19.69
Dec 13 05:19:25.219134 ignition[781]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Dec 13 05:19:25.236505 ignition[781]: GET result: OK
Dec 13 05:19:25.236707 ignition[781]: parsing config with SHA512: ec7b1d4641f78bebb18bfc001129f7c3f94d9a872f624adcb1225135641d51e5bc9e40bd1a32569c6284434778fd6a22a643532c4749af16ac7d6a7daf62701d
Dec 13 05:19:25.241960 unknown[781]: fetched base config from "system"
Dec 13 05:19:25.241980 unknown[781]: fetched base config from "system"
Dec 13 05:19:25.242482 ignition[781]: fetch: fetch complete
Dec 13 05:19:25.241990 unknown[781]: fetched user config from "openstack"
Dec 13 05:19:25.242491 ignition[781]: fetch: fetch passed
Dec 13 05:19:25.244271 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 05:19:25.242558 ignition[781]: Ignition finished successfully
Dec 13 05:19:25.260383 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 05:19:25.281866 ignition[788]: Ignition 2.19.0
Dec 13 05:19:25.281888 ignition[788]: Stage: kargs
Dec 13 05:19:25.282188 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Dec 13 05:19:25.282208 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 05:19:25.285267 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 05:19:25.283397 ignition[788]: kargs: kargs passed
Dec 13 05:19:25.283478 ignition[788]: Ignition finished successfully
Dec 13 05:19:25.301368 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 05:19:25.320415 ignition[794]: Ignition 2.19.0
Dec 13 05:19:25.320437 ignition[794]: Stage: disks
Dec 13 05:19:25.320677 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Dec 13 05:19:25.320711 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 05:19:25.323396 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 05:19:25.321900 ignition[794]: disks: disks passed
Dec 13 05:19:25.324768 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 05:19:25.321983 ignition[794]: Ignition finished successfully
Dec 13 05:19:25.326078 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 05:19:25.327578 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 05:19:25.328907 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 05:19:25.330439 systemd[1]: Reached target basic.target - Basic System.
Dec 13 05:19:25.339734 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 05:19:25.357493 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 05:19:25.361039 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 05:19:25.367411 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 05:19:25.483147 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 05:19:25.484362 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 05:19:25.485654 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 05:19:25.496262 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 05:19:25.499269 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 05:19:25.500586 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 05:19:25.506680 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 13 05:19:25.519194 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810)
Dec 13 05:19:25.519237 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 05:19:25.519274 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 05:19:25.519295 kernel: BTRFS info (device vda6): using free space tree
Dec 13 05:19:25.517827 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 05:19:25.522973 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 05:19:25.517887 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 05:19:25.526487 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 05:19:25.527473 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 05:19:25.537378 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 05:19:25.623080 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 05:19:25.631196 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Dec 13 05:19:25.639747 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 05:19:25.648456 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 05:19:25.754014 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 05:19:25.760224 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 05:19:25.762359 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 05:19:25.775699 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 05:19:25.779146 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 05:19:25.804334 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 05:19:25.808256 ignition[928]: INFO : Ignition 2.19.0
Dec 13 05:19:25.809992 ignition[928]: INFO : Stage: mount
Dec 13 05:19:25.809992 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 05:19:25.809992 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 05:19:25.809992 ignition[928]: INFO : mount: mount passed
Dec 13 05:19:25.813970 ignition[928]: INFO : Ignition finished successfully
Dec 13 05:19:25.811092 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 05:19:26.307406 systemd-networkd[774]: eth0: Gained IPv6LL
Dec 13 05:19:27.816410 systemd-networkd[774]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:4d1:24:19ff:fef4:1346/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:4d1:24:19ff:fef4:1346/64 assigned by NDisc.
Dec 13 05:19:27.816427 systemd-networkd[774]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 05:19:32.697168 coreos-metadata[812]: Dec 13 05:19:32.697 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 05:19:32.720974 coreos-metadata[812]: Dec 13 05:19:32.720 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 05:19:32.733074 coreos-metadata[812]: Dec 13 05:19:32.733 INFO Fetch successful
Dec 13 05:19:32.734041 coreos-metadata[812]: Dec 13 05:19:32.733 INFO wrote hostname srv-ch81y.gb1.brightbox.com to /sysroot/etc/hostname
Dec 13 05:19:32.736067 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 05:19:32.736307 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 13 05:19:32.743246 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 05:19:32.757314 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 05:19:32.774150 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (944)
Dec 13 05:19:32.779524 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 05:19:32.779566 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 05:19:32.781406 kernel: BTRFS info (device vda6): using free space tree
Dec 13 05:19:32.788268 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 05:19:32.790432 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 05:19:32.826546 ignition[962]: INFO : Ignition 2.19.0
Dec 13 05:19:32.826546 ignition[962]: INFO : Stage: files
Dec 13 05:19:32.828604 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 05:19:32.828604 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 05:19:32.828604 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 05:19:32.831959 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 05:19:32.831959 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 05:19:32.834349 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 05:19:32.834349 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 05:19:32.834349 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 05:19:32.833848 unknown[962]: wrote ssh authorized keys file for user: core
Dec 13 05:19:32.838655 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 05:19:32.838655 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 05:19:33.032427 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 05:19:34.656672 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 05:19:34.656672 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 05:19:34.656672 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 05:19:35.300518 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 05:19:35.744711 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 05:19:35.744711 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 05:19:35.747960 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 05:19:35.747960 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 05:19:35.747960 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 05:19:35.747960 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 05:19:35.747960 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 05:19:35.747960 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 05:19:35.747960 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 05:19:35.747960 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 05:19:35.747960 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 05:19:35.747960 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 05:19:35.747960 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 05:19:35.747960 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 05:19:35.747960 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 05:19:36.266506 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 05:19:38.471954 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 05:19:38.471954 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 05:19:38.476220 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 05:19:38.476220 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 05:19:38.476220 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 05:19:38.476220 ignition[962]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 05:19:38.476220 ignition[962]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 05:19:38.484839 ignition[962]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 05:19:38.484839 ignition[962]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 05:19:38.484839 ignition[962]: INFO : files: files passed
Dec 13 05:19:38.484839 ignition[962]: INFO : Ignition finished successfully
Dec 13 05:19:38.480602 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 05:19:38.492473 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 05:19:38.502434 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 05:19:38.509794 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 05:19:38.510008 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 05:19:38.522276 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 05:19:38.522276 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 05:19:38.525171 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 05:19:38.525734 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 05:19:38.527885 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 05:19:38.533370 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 05:19:38.584254 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 05:19:38.584449 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 05:19:38.586541 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 05:19:38.587799 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 05:19:38.589541 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 05:19:38.596308 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 05:19:38.616400 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 05:19:38.621817 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 05:19:38.644490 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 05:19:38.645553 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 05:19:38.647241 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 05:19:38.648801 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 05:19:38.649002 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 05:19:38.650831 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 05:19:38.651733 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 05:19:38.654942 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 05:19:38.655871 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 05:19:38.657470 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 05:19:38.659169 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 05:19:38.660711 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 05:19:38.662527 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 05:19:38.664212 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 05:19:38.665869 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 05:19:38.667254 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 05:19:38.667512 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 05:19:38.669317 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 05:19:38.670286 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 05:19:38.672157 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 05:19:38.672591 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 05:19:38.673812 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 05:19:38.673997 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 05:19:38.676222 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 05:19:38.676398 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 05:19:38.677395 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 05:19:38.677568 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 05:19:38.685510 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 05:19:38.692204 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 05:19:38.692539 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 05:19:38.700463 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 05:19:38.701448 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 05:19:38.701869 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 05:19:38.705502 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 05:19:38.705766 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 05:19:38.724444 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 05:19:38.724652 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 05:19:38.731803 ignition[1014]: INFO : Ignition 2.19.0
Dec 13 05:19:38.734126 ignition[1014]: INFO : Stage: umount
Dec 13 05:19:38.734126 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 05:19:38.734126 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 05:19:38.737450 ignition[1014]: INFO : umount: umount passed
Dec 13 05:19:38.737450 ignition[1014]: INFO : Ignition finished successfully
Dec 13 05:19:38.739493 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 05:19:38.740526 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 05:19:38.741899 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 05:19:38.741999 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 05:19:38.745146 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 05:19:38.745229 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 05:19:38.745953 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 05:19:38.746030 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 05:19:38.746838 systemd[1]: Stopped target network.target - Network.
Dec 13 05:19:38.747495 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 05:19:38.747567 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 05:19:38.749224 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 05:19:38.749864 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 05:19:38.750734 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 05:19:38.751625 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 05:19:38.753125 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 05:19:38.754687 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 05:19:38.754761 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 05:19:38.755986 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 05:19:38.756048 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 05:19:38.757515 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 05:19:38.757593 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 05:19:38.759173 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 05:19:38.759265 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 05:19:38.760852 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 05:19:38.763382 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 05:19:38.766970 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 05:19:38.767399 systemd-networkd[774]: eth0: DHCPv6 lease lost
Dec 13 05:19:38.771647 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 05:19:38.771788 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 05:19:38.775707 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 05:19:38.775881 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 05:19:38.779848 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 05:19:38.780070 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 05:19:38.783527 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 05:19:38.783642 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 05:19:38.785202 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 05:19:38.785280 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 05:19:38.793335 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 05:19:38.794698 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 05:19:38.794800 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 05:19:38.797974 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 05:19:38.798059 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 05:19:38.800335 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 05:19:38.800405 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 05:19:38.801930 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 05:19:38.802005 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 05:19:38.803673 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 05:19:38.823916 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 05:19:38.825087 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 05:19:38.827226 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 05:19:38.827376 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 05:19:38.831221 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 05:19:38.831315 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 05:19:38.832947 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 05:19:38.833003 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 05:19:38.834460 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 05:19:38.834551 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 05:19:38.836777 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 05:19:38.836852 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 05:19:38.838264 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 05:19:38.838350 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 05:19:38.845444 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 05:19:38.847229 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 05:19:38.847345 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 05:19:38.848228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 05:19:38.848306 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 05:19:38.860792 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 05:19:38.860977 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 05:19:38.863820 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 05:19:38.870454 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 05:19:38.883878 systemd[1]: Switching root.
Dec 13 05:19:38.920525 systemd-journald[200]: Journal stopped
Dec 13 05:19:40.547898 systemd-journald[200]: Received SIGTERM from PID 1 (systemd).
Dec 13 05:19:40.548045 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 05:19:40.548083 kernel: SELinux: policy capability open_perms=1
Dec 13 05:19:40.548120 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 05:19:40.548154 kernel: SELinux: policy capability always_check_network=0
Dec 13 05:19:40.548174 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 05:19:40.548206 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 05:19:40.548234 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 05:19:40.548254 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 05:19:40.548272 kernel: audit: type=1403 audit(1734067179.281:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 05:19:40.548293 systemd[1]: Successfully loaded SELinux policy in 66.030ms.
Dec 13 05:19:40.548317 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.715ms.
Dec 13 05:19:40.548351 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 05:19:40.548374 systemd[1]: Detected virtualization kvm.
Dec 13 05:19:40.548395 systemd[1]: Detected architecture x86-64.
Dec 13 05:19:40.548415 systemd[1]: Detected first boot.
Dec 13 05:19:40.548442 systemd[1]: Hostname set to .
Dec 13 05:19:40.548473 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 05:19:40.548495 zram_generator::config[1056]: No configuration found.
Dec 13 05:19:40.548523 systemd[1]: Populated /etc with preset unit settings.
Dec 13 05:19:40.548560 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 05:19:40.548583 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 05:19:40.548603 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 05:19:40.548625 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 05:19:40.548645 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 05:19:40.548667 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 05:19:40.548696 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 05:19:40.548723 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 05:19:40.548745 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 05:19:40.548779 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 05:19:40.548801 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 05:19:40.548821 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 05:19:40.548841 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 05:19:40.548861 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 05:19:40.548881 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 05:19:40.548901 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 05:19:40.548922 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 05:19:40.548948 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 05:19:40.548986 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 05:19:40.549008 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 05:19:40.549030 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 05:19:40.549051 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 05:19:40.549072 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 05:19:40.549113 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 05:19:40.549167 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 05:19:40.549204 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 05:19:40.549248 systemd[1]: Reached target swap.target - Swaps.
Dec 13 05:19:40.549271 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 05:19:40.549292 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 05:19:40.549312 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 05:19:40.549344 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 05:19:40.549372 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 05:19:40.549393 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 05:19:40.549420 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 05:19:40.549465 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 05:19:40.549488 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 05:19:40.549516 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:19:40.549537 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 05:19:40.549558 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 05:19:40.549594 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 05:19:40.549618 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 05:19:40.549638 systemd[1]: Reached target machines.target - Containers.
Dec 13 05:19:40.549659 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 05:19:40.549680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 05:19:40.549700 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 05:19:40.549720 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 05:19:40.549741 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 05:19:40.549774 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 05:19:40.549795 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 05:19:40.549823 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 05:19:40.549844 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 05:19:40.549871 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 05:19:40.549892 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 05:19:40.549913 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 05:19:40.549932 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 05:19:40.549977 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 05:19:40.550000 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 05:19:40.550020 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 05:19:40.550040 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 05:19:40.550060 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 05:19:40.550079 kernel: ACPI: bus type drm_connector registered
Dec 13 05:19:40.550140 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 05:19:40.550169 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 05:19:40.550189 systemd[1]: Stopped verity-setup.service.
Dec 13 05:19:40.550210 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:19:40.550245 kernel: fuse: init (API version 7.39)
Dec 13 05:19:40.550266 kernel: loop: module loaded
Dec 13 05:19:40.550285 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 05:19:40.550352 systemd-journald[1149]: Collecting audit messages is disabled.
Dec 13 05:19:40.550407 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 05:19:40.550443 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 05:19:40.550489 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 05:19:40.550514 systemd-journald[1149]: Journal started
Dec 13 05:19:40.550547 systemd-journald[1149]: Runtime Journal (/run/log/journal/256cb57081ed460d9fedde7da1bbb8f2) is 4.7M, max 38.0M, 33.2M free.
Dec 13 05:19:40.099150 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 05:19:40.124759 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 05:19:40.125517 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 05:19:40.553230 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 05:19:40.554909 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 05:19:40.555897 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 05:19:40.557133 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 05:19:40.558442 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 05:19:40.559894 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 05:19:40.560170 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 05:19:40.561848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 05:19:40.562117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 05:19:40.563326 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 05:19:40.563588 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 05:19:40.564727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 05:19:40.564970 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 05:19:40.566475 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 05:19:40.566699 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 05:19:40.568227 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 05:19:40.568517 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 05:19:40.569877 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 05:19:40.571035 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 05:19:40.572388 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 05:19:40.588717 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 05:19:40.596223 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 05:19:40.616266 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 05:19:40.617325 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 05:19:40.617390 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 05:19:40.621842 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 05:19:40.630484 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 05:19:40.635227 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 05:19:40.637859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 05:19:40.646833 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 05:19:40.656396 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 05:19:40.657460 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 05:19:40.665478 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 05:19:40.666414 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 05:19:40.675850 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 05:19:40.687754 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 05:19:40.699418 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 05:19:40.705394 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 05:19:40.711709 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 05:19:40.718373 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 05:19:40.744867 systemd-journald[1149]: Time spent on flushing to /var/log/journal/256cb57081ed460d9fedde7da1bbb8f2 is 119.947ms for 1145 entries.
Dec 13 05:19:40.744867 systemd-journald[1149]: System Journal (/var/log/journal/256cb57081ed460d9fedde7da1bbb8f2) is 8.0M, max 584.8M, 576.8M free.
Dec 13 05:19:40.904583 systemd-journald[1149]: Received client request to flush runtime journal.
Dec 13 05:19:40.904653 kernel: loop0: detected capacity change from 0 to 8
Dec 13 05:19:40.904688 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 05:19:40.904713 kernel: loop1: detected capacity change from 0 to 142488
Dec 13 05:19:40.904737 kernel: loop2: detected capacity change from 0 to 140768
Dec 13 05:19:40.771063 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 05:19:40.776035 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 05:19:40.792494 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 05:19:40.824838 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 05:19:40.892686 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 05:19:40.900518 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 05:19:40.902217 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 05:19:40.913352 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 05:19:40.928911 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 05:19:40.931057 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 05:19:40.950344 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 05:19:40.978038 kernel: loop3: detected capacity change from 0 to 210664
Dec 13 05:19:40.991506 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Dec 13 05:19:40.991536 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Dec 13 05:19:41.006235 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 05:19:41.016159 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 05:19:41.038651 kernel: loop4: detected capacity change from 0 to 8
Dec 13 05:19:41.045143 kernel: loop5: detected capacity change from 0 to 142488
Dec 13 05:19:41.079512 kernel: loop6: detected capacity change from 0 to 140768
Dec 13 05:19:41.106142 kernel: loop7: detected capacity change from 0 to 210664
Dec 13 05:19:41.124977 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Dec 13 05:19:41.125935 (sd-merge)[1214]: Merged extensions into '/usr'.
Dec 13 05:19:41.139600 systemd[1]: Reloading requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 05:19:41.139626 systemd[1]: Reloading...
Dec 13 05:19:41.328133 zram_generator::config[1240]: No configuration found.
Dec 13 05:19:41.457518 ldconfig[1184]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 05:19:41.614570 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 05:19:41.681963 systemd[1]: Reloading finished in 540 ms.
Dec 13 05:19:41.716630 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 05:19:41.725362 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 05:19:41.741857 systemd[1]: Starting ensure-sysext.service...
Dec 13 05:19:41.752379 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 05:19:41.765446 systemd[1]: Reloading requested from client PID 1296 ('systemctl') (unit ensure-sysext.service)...
Dec 13 05:19:41.765489 systemd[1]: Reloading...
Dec 13 05:19:41.808747 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 05:19:41.809357 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 05:19:41.810925 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 05:19:41.813454 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Dec 13 05:19:41.813575 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Dec 13 05:19:41.820535 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 05:19:41.820553 systemd-tmpfiles[1297]: Skipping /boot
Dec 13 05:19:41.840517 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 05:19:41.840538 systemd-tmpfiles[1297]: Skipping /boot
Dec 13 05:19:41.867215 zram_generator::config[1324]: No configuration found.
Dec 13 05:19:42.077174 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 05:19:42.146390 systemd[1]: Reloading finished in 380 ms.
Dec 13 05:19:42.171135 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 05:19:42.179853 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 05:19:42.198614 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 05:19:42.208495 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 05:19:42.213538 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 05:19:42.220461 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 05:19:42.231470 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 05:19:42.236374 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 05:19:42.245701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:19:42.246021 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 05:19:42.257611 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 05:19:42.268742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 05:19:42.275326 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 05:19:42.276371 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 05:19:42.276561 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:19:42.281374 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:19:42.281681 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 05:19:42.281929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 05:19:42.282068 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:19:42.287246 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:19:42.287590 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 05:19:42.303663 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 05:19:42.304697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 05:19:42.304909 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 05:19:42.307961 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 05:19:42.309871 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 05:19:42.310089 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 05:19:42.314053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 05:19:42.314351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 05:19:42.329187 systemd[1]: Finished ensure-sysext.service.
Dec 13 05:19:42.351744 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 05:19:42.356719 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 05:19:42.356995 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 05:19:42.361748 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 05:19:42.365440 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 05:19:42.365723 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 05:19:42.375222 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 05:19:42.375354 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 05:19:42.385945 systemd-udevd[1393]: Using default interface naming scheme 'v255'.
Dec 13 05:19:42.388473 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 05:19:42.396454 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 05:19:42.403151 augenrules[1417]: No rules
Dec 13 05:19:42.405470 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 05:19:42.407166 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 05:19:42.407910 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 05:19:42.430767 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 05:19:42.439080 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 05:19:42.452415 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 05:19:42.515817 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 05:19:42.642889 systemd-networkd[1430]: lo: Link UP
Dec 13 05:19:42.642903 systemd-networkd[1430]: lo: Gained carrier
Dec 13 05:19:42.644145 systemd-networkd[1430]: Enumeration completed
Dec 13 05:19:42.644313 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 05:19:42.656274 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 05:19:42.685150 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1436)
Dec 13 05:19:42.700571 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 05:19:42.701587 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 05:19:42.706374 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 05:19:42.726950 systemd-resolved[1392]: Positive Trust Anchors:
Dec 13 05:19:42.727604 systemd-resolved[1392]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 05:19:42.727772 systemd-resolved[1392]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 05:19:42.738151 systemd-resolved[1392]: Using system hostname 'srv-ch81y.gb1.brightbox.com'.
Dec 13 05:19:42.740162 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1436)
Dec 13 05:19:42.742446 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 05:19:42.743446 systemd[1]: Reached target network.target - Network.
Dec 13 05:19:42.745204 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 05:19:42.770070 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 05:19:42.770091 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 05:19:42.774503 systemd-networkd[1430]: eth0: Link UP
Dec 13 05:19:42.774518 systemd-networkd[1430]: eth0: Gained carrier
Dec 13 05:19:42.774538 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 05:19:42.775171 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1427)
Dec 13 05:19:42.793250 systemd-networkd[1430]: eth0: DHCPv4 address 10.244.19.70/30, gateway 10.244.19.69 acquired from 10.244.19.69
Dec 13 05:19:42.796660 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection.
Dec 13 05:19:42.873216 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 05:19:42.879993 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 05:19:42.889151 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 05:19:42.892126 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 05:19:42.914669 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 05:19:42.922173 kernel: ACPI: button: Power Button [PWRF]
Dec 13 05:19:42.947171 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 05:19:42.956430 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 05:19:42.956741 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 05:19:42.956944 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 05:19:42.981463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 05:19:43.081030 systemd-timesyncd[1414]: Contacted time server 131.111.8.60:123 (0.flatcar.pool.ntp.org).
Dec 13 05:19:43.081195 systemd-timesyncd[1414]: Initial clock synchronization to Fri 2024-12-13 05:19:43.335615 UTC.
Dec 13 05:19:43.186483 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 05:19:43.206170 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 05:19:43.214433 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 05:19:43.241196 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 05:19:43.273772 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 05:19:43.275042 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 05:19:43.275863 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 05:19:43.276795 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 05:19:43.277847 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 05:19:43.279018 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 05:19:43.279937 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 05:19:43.280769 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 05:19:43.281569 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 05:19:43.281619 systemd[1]: Reached target paths.target - Path Units.
Dec 13 05:19:43.282282 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 05:19:43.285447 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 05:19:43.289418 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 05:19:43.295426 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 05:19:43.298399 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 05:19:43.299945 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 05:19:43.300825 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 05:19:43.301533 systemd[1]: Reached target basic.target - Basic System.
Dec 13 05:19:43.302280 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 05:19:43.302343 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 05:19:43.309263 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 05:19:43.315335 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 05:19:43.319386 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 05:19:43.325293 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 05:19:43.328199 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 05:19:43.334515 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 05:19:43.336187 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 05:19:43.339166 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 05:19:43.345270 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 05:19:43.357392 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 05:19:43.373148 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 05:19:43.381846 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 05:19:43.383962 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 05:19:43.385908 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 05:19:43.387673 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 05:19:43.397322 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 05:19:43.399706 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 05:19:43.410346 jq[1477]: false
Dec 13 05:19:43.413807 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 05:19:43.414163 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 05:19:43.415640 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 05:19:43.415893 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 05:19:43.464120 jq[1487]: true
Dec 13 05:19:43.478068 (ntainerd)[1505]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 05:19:43.487536 extend-filesystems[1478]: Found loop4
Dec 13 05:19:43.495577 extend-filesystems[1478]: Found loop5
Dec 13 05:19:43.495577 extend-filesystems[1478]: Found loop6
Dec 13 05:19:43.495577 extend-filesystems[1478]: Found loop7
Dec 13 05:19:43.495577 extend-filesystems[1478]: Found vda
Dec 13 05:19:43.495577 extend-filesystems[1478]: Found vda1
Dec 13 05:19:43.495577 extend-filesystems[1478]: Found vda2
Dec 13 05:19:43.495577 extend-filesystems[1478]: Found vda3
Dec 13 05:19:43.495577 extend-filesystems[1478]: Found usr
Dec 13 05:19:43.495577 extend-filesystems[1478]: Found vda4
Dec 13 05:19:43.495577 extend-filesystems[1478]: Found vda6
Dec 13 05:19:43.495577 extend-filesystems[1478]: Found vda7
Dec 13 05:19:43.495577 extend-filesystems[1478]: Found vda9
Dec 13 05:19:43.495577 extend-filesystems[1478]: Checking size of /dev/vda9
Dec 13 05:19:43.500272 dbus-daemon[1476]: [system] SELinux support is enabled
Dec 13 05:19:43.502524 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 05:19:43.521389 dbus-daemon[1476]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1430 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 05:19:43.549361 update_engine[1486]: I20241213 05:19:43.548661 1486 main.cc:92] Flatcar Update Engine starting
Dec 13 05:19:43.502812 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 05:19:43.555356 jq[1508]: true
Dec 13 05:19:43.537776 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 05:19:43.514053 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 05:19:43.527553 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 05:19:43.527618 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 05:19:43.531723 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 05:19:43.531776 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 05:19:43.553637 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 13 05:19:43.560150 tar[1499]: linux-amd64/helm
Dec 13 05:19:43.579148 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 05:19:43.588499 update_engine[1486]: I20241213 05:19:43.588217 1486 update_check_scheduler.cc:74] Next update check in 4m24s
Dec 13 05:19:43.594410 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 05:19:43.640622 extend-filesystems[1478]: Resized partition /dev/vda9
Dec 13 05:19:43.660129 extend-filesystems[1523]: resize2fs 1.47.1 (20-May-2024)
Dec 13 05:19:43.678156 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Dec 13 05:19:43.778185 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 05:19:43.779132 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1427)
Dec 13 05:19:43.780863 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 13 05:19:43.783898 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1513 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 05:19:43.794757 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 13 05:19:43.829950 systemd-logind[1485]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 05:19:43.866956 sshd_keygen[1506]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 05:19:43.830000 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 05:19:43.837417 systemd-logind[1485]: New seat seat0.
Dec 13 05:19:43.841138 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 05:19:43.876319 polkitd[1534]: Started polkitd version 121
Dec 13 05:19:43.895436 bash[1533]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 05:19:43.896917 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 05:19:43.914852 systemd[1]: Starting sshkeys.service...
Dec 13 05:19:43.923756 polkitd[1534]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 05:19:43.926431 polkitd[1534]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 05:19:43.933529 polkitd[1534]: Finished loading, compiling and executing 2 rules
Dec 13 05:19:43.934749 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 05:19:43.935608 systemd[1]: Started polkit.service - Authorization Manager.
Dec 13 05:19:43.936815 polkitd[1534]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 05:19:43.985761 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 05:19:43.986522 systemd-hostnamed[1513]: Hostname set to (static)
Dec 13 05:19:43.997590 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 05:19:44.000043 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 05:19:44.013408 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 05:19:44.060203 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 05:19:44.060573 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 05:19:44.076988 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 05:19:44.084376 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 13 05:19:44.102764 extend-filesystems[1523]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 05:19:44.102764 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 05:19:44.102764 extend-filesystems[1523]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 05:19:44.110262 extend-filesystems[1478]: Resized filesystem in /dev/vda9
Dec 13 05:19:44.105498 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 05:19:44.106566 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 05:19:44.116452 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 05:19:44.116757 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 05:19:44.125805 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 05:19:44.136855 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 05:19:44.139045 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 05:19:44.142940 containerd[1505]: time="2024-12-13T05:19:44.142816750Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 05:19:44.176983 containerd[1505]: time="2024-12-13T05:19:44.176516671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 05:19:44.180209 containerd[1505]: time="2024-12-13T05:19:44.179318247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 05:19:44.180209 containerd[1505]: time="2024-12-13T05:19:44.179367443Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 05:19:44.180209 containerd[1505]: time="2024-12-13T05:19:44.179395813Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 05:19:44.180209 containerd[1505]: time="2024-12-13T05:19:44.179691766Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 05:19:44.180209 containerd[1505]: time="2024-12-13T05:19:44.179730299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 05:19:44.180209 containerd[1505]: time="2024-12-13T05:19:44.179849974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 05:19:44.180209 containerd[1505]: time="2024-12-13T05:19:44.179873939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 05:19:44.180209 containerd[1505]: time="2024-12-13T05:19:44.180153579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 05:19:44.180209 containerd[1505]: time="2024-12-13T05:19:44.180181367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 05:19:44.180209 containerd[1505]: time="2024-12-13T05:19:44.180204659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 05:19:44.180209 containerd[1505]: time="2024-12-13T05:19:44.180221716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 05:19:44.180662 containerd[1505]: time="2024-12-13T05:19:44.180365963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 05:19:44.180775 containerd[1505]: time="2024-12-13T05:19:44.180741493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 05:19:44.180940 containerd[1505]: time="2024-12-13T05:19:44.180907407Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 05:19:44.181005 containerd[1505]: time="2024-12-13T05:19:44.180940426Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 05:19:44.181376 containerd[1505]: time="2024-12-13T05:19:44.181095684Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 05:19:44.181376 containerd[1505]: time="2024-12-13T05:19:44.181224163Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 05:19:44.188383 containerd[1505]: time="2024-12-13T05:19:44.188053949Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 05:19:44.188383 containerd[1505]: time="2024-12-13T05:19:44.188181161Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 05:19:44.188383 containerd[1505]: time="2024-12-13T05:19:44.188228071Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 05:19:44.188383 containerd[1505]: time="2024-12-13T05:19:44.188255242Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 05:19:44.188383 containerd[1505]: time="2024-12-13T05:19:44.188287252Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 05:19:44.188710 containerd[1505]: time="2024-12-13T05:19:44.188542511Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 05:19:44.189389 containerd[1505]: time="2024-12-13T05:19:44.188958325Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 05:19:44.189389 containerd[1505]: time="2024-12-13T05:19:44.189182114Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 05:19:44.189389 containerd[1505]: time="2024-12-13T05:19:44.189211042Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 05:19:44.189389 containerd[1505]: time="2024-12-13T05:19:44.189246293Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 05:19:44.189389 containerd[1505]: time="2024-12-13T05:19:44.189275670Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 05:19:44.189389 containerd[1505]: time="2024-12-13T05:19:44.189298229Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 05:19:44.189389 containerd[1505]: time="2024-12-13T05:19:44.189318521Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 05:19:44.189389 containerd[1505]: time="2024-12-13T05:19:44.189340457Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 05:19:44.189389 containerd[1505]: time="2024-12-13T05:19:44.189365717Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 05:19:44.189389 containerd[1505]: time="2024-12-13T05:19:44.189387311Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 05:19:44.190026 containerd[1505]: time="2024-12-13T05:19:44.189406655Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 05:19:44.190026 containerd[1505]: time="2024-12-13T05:19:44.189426893Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 05:19:44.190026 containerd[1505]: time="2024-12-13T05:19:44.189474942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190026 containerd[1505]: time="2024-12-13T05:19:44.189499675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190026 containerd[1505]: time="2024-12-13T05:19:44.189525938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190026 containerd[1505]: time="2024-12-13T05:19:44.189550786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190026 containerd[1505]: time="2024-12-13T05:19:44.189570783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190026 containerd[1505]: time="2024-12-13T05:19:44.189590723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190026 containerd[1505]: time="2024-12-13T05:19:44.189609324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190026 containerd[1505]: time="2024-12-13T05:19:44.189638144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190026 containerd[1505]: time="2024-12-13T05:19:44.189661911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190026 containerd[1505]: time="2024-12-13T05:19:44.189684565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190026 containerd[1505]: time="2024-12-13T05:19:44.189705202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190026 containerd[1505]: time="2024-12-13T05:19:44.189724739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190552 containerd[1505]: time="2024-12-13T05:19:44.189746368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190552 containerd[1505]: time="2024-12-13T05:19:44.189770136Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 05:19:44.190552 containerd[1505]: time="2024-12-13T05:19:44.189809893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190552 containerd[1505]: time="2024-12-13T05:19:44.189832740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190552 containerd[1505]: time="2024-12-13T05:19:44.189850698Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 05:19:44.190552 containerd[1505]: time="2024-12-13T05:19:44.189928094Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 05:19:44.190552 containerd[1505]: time="2024-12-13T05:19:44.189968507Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 05:19:44.190552 containerd[1505]: time="2024-12-13T05:19:44.189989453Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 05:19:44.190552 containerd[1505]: time="2024-12-13T05:19:44.190009067Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 05:19:44.190552 containerd[1505]: time="2024-12-13T05:19:44.190025733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190552 containerd[1505]: time="2024-12-13T05:19:44.190050786Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 05:19:44.190552 containerd[1505]: time="2024-12-13T05:19:44.190073157Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 05:19:44.190552 containerd[1505]: time="2024-12-13T05:19:44.190098434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 05:19:44.190982 containerd[1505]: time="2024-12-13T05:19:44.190634516Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 05:19:44.190982 containerd[1505]: time="2024-12-13T05:19:44.190755097Z" level=info msg="Connect containerd service"
Dec 13 05:19:44.190982 containerd[1505]: time="2024-12-13T05:19:44.190822903Z" level=info msg="using legacy CRI server"
Dec 13 05:19:44.190982 containerd[1505]: time="2024-12-13T05:19:44.190839512Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 05:19:44.193648 containerd[1505]: time="2024-12-13T05:19:44.190999764Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 05:19:44.193648 containerd[1505]: time="2024-12-13T05:19:44.192179109Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 05:19:44.193648 containerd[1505]: time="2024-12-13T05:19:44.192832299Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 05:19:44.193648 containerd[1505]: time="2024-12-13T05:19:44.192929272Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 05:19:44.193648 containerd[1505]: time="2024-12-13T05:19:44.193002956Z" level=info msg="Start subscribing containerd event"
Dec 13 05:19:44.193648 containerd[1505]: time="2024-12-13T05:19:44.193115472Z" level=info msg="Start recovering state"
Dec 13 05:19:44.196161 containerd[1505]: time="2024-12-13T05:19:44.195014150Z" level=info msg="Start event monitor"
Dec 13 05:19:44.196161 containerd[1505]: time="2024-12-13T05:19:44.196079809Z" level=info msg="Start snapshots syncer"
Dec 13 05:19:44.196161 containerd[1505]: time="2024-12-13T05:19:44.196113566Z" level=info msg="Start cni network conf syncer for default"
Dec 13 05:19:44.196327 containerd[1505]: time="2024-12-13T05:19:44.196165473Z" level=info msg="Start streaming server"
Dec 13 05:19:44.200550 containerd[1505]: time="2024-12-13T05:19:44.198915199Z" level=info msg="containerd successfully booted in 0.057999s"
Dec 13 05:19:44.199044 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 05:19:44.492356 tar[1499]: linux-amd64/LICENSE
Dec 13 05:19:44.492713 tar[1499]: linux-amd64/README.md
Dec 13 05:19:44.514815 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 05:19:44.547720 systemd-networkd[1430]: eth0: Gained IPv6LL
Dec 13 05:19:44.557359 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 05:19:44.559528 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 05:19:44.573492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 05:19:44.577999 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 05:19:44.613875 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 05:19:44.826032 systemd-networkd[1430]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:4d1:24:19ff:fef4:1346/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:4d1:24:19ff:fef4:1346/64 assigned by NDisc.
Dec 13 05:19:44.826046 systemd-networkd[1430]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 05:19:45.558549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 05:19:45.561113 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 05:19:46.253705 kubelet[1601]: E1213 05:19:46.253598 1601 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 05:19:46.256499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 05:19:46.256757 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 05:19:46.257293 systemd[1]: kubelet.service: Consumed 1.091s CPU time.
Dec 13 05:19:46.757336 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 05:19:46.762587 systemd[1]: Started sshd@0-10.244.19.70:22-147.75.109.163:33964.service - OpenSSH per-connection server daemon (147.75.109.163:33964).
Dec 13 05:19:47.691811 sshd[1611]: Accepted publickey for core from 147.75.109.163 port 33964 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:19:47.695409 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:19:47.713008 systemd-logind[1485]: New session 1 of user core.
Dec 13 05:19:47.716350 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 05:19:47.727883 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 05:19:47.753561 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 05:19:47.762970 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 05:19:47.786503 (systemd)[1616]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 05:19:47.933784 systemd[1616]: Queued start job for default target default.target.
Dec 13 05:19:47.946235 systemd[1616]: Created slice app.slice - User Application Slice.
Dec 13 05:19:47.946287 systemd[1616]: Reached target paths.target - Paths.
Dec 13 05:19:47.946312 systemd[1616]: Reached target timers.target - Timers.
Dec 13 05:19:47.948522 systemd[1616]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 05:19:47.971431 systemd[1616]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 05:19:47.971671 systemd[1616]: Reached target sockets.target - Sockets.
Dec 13 05:19:47.971698 systemd[1616]: Reached target basic.target - Basic System.
Dec 13 05:19:47.971779 systemd[1616]: Reached target default.target - Main User Target.
Dec 13 05:19:47.971850 systemd[1616]: Startup finished in 174ms.
Dec 13 05:19:47.972182 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 05:19:47.982679 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 05:19:48.636612 systemd[1]: Started sshd@1-10.244.19.70:22-147.75.109.163:33966.service - OpenSSH per-connection server daemon (147.75.109.163:33966).
Dec 13 05:19:49.208247 login[1576]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 05:19:49.214231 login[1575]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 05:19:49.217189 systemd-logind[1485]: New session 2 of user core.
Dec 13 05:19:49.226639 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 05:19:49.233199 systemd-logind[1485]: New session 3 of user core.
Dec 13 05:19:49.238774 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 05:19:49.537189 sshd[1627]: Accepted publickey for core from 147.75.109.163 port 33966 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:19:49.539312 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:19:49.548085 systemd-logind[1485]: New session 4 of user core.
Dec 13 05:19:49.558597 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 05:19:50.168400 sshd[1627]: pam_unix(sshd:session): session closed for user core
Dec 13 05:19:50.174878 systemd[1]: sshd@1-10.244.19.70:22-147.75.109.163:33966.service: Deactivated successfully.
Dec 13 05:19:50.177289 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 05:19:50.178261 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit.
Dec 13 05:19:50.180276 systemd-logind[1485]: Removed session 4.
Dec 13 05:19:50.333691 systemd[1]: Started sshd@2-10.244.19.70:22-147.75.109.163:33972.service - OpenSSH per-connection server daemon (147.75.109.163:33972).
Dec 13 05:19:50.460254 coreos-metadata[1475]: Dec 13 05:19:50.459 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 05:19:50.486719 coreos-metadata[1475]: Dec 13 05:19:50.486 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Dec 13 05:19:50.493292 coreos-metadata[1475]: Dec 13 05:19:50.493 INFO Fetch failed with 404: resource not found
Dec 13 05:19:50.493292 coreos-metadata[1475]: Dec 13 05:19:50.493 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 05:19:50.494019 coreos-metadata[1475]: Dec 13 05:19:50.493 INFO Fetch successful
Dec 13 05:19:50.494137 coreos-metadata[1475]: Dec 13 05:19:50.494 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 13 05:19:50.507289 coreos-metadata[1475]: Dec 13 05:19:50.507 INFO Fetch successful
Dec 13 05:19:50.507289 coreos-metadata[1475]: Dec 13 05:19:50.507 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 13 05:19:50.523143 coreos-metadata[1475]: Dec 13 05:19:50.522 INFO Fetch successful
Dec 13 05:19:50.523143 coreos-metadata[1475]: Dec 13 05:19:50.523 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 13 05:19:50.541795 coreos-metadata[1475]: Dec 13 05:19:50.541 INFO Fetch successful
Dec 13 05:19:50.541997 coreos-metadata[1475]: Dec 13 05:19:50.541 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 13 05:19:50.559311 coreos-metadata[1475]: Dec 13 05:19:50.559 INFO Fetch successful
Dec 13 05:19:50.589280 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 05:19:50.590663 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 05:19:51.152153 coreos-metadata[1559]: Dec 13 05:19:51.152 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 05:19:51.174497 coreos-metadata[1559]: Dec 13 05:19:51.174 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 05:19:51.200211 coreos-metadata[1559]: Dec 13 05:19:51.200 INFO Fetch successful
Dec 13 05:19:51.200431 coreos-metadata[1559]: Dec 13 05:19:51.200 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 05:19:51.233529 sshd[1661]: Accepted publickey for core from 147.75.109.163 port 33972 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:19:51.235852 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:19:51.246313 systemd-logind[1485]: New session 5 of user core.
Dec 13 05:19:51.254637 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 05:19:51.257064 coreos-metadata[1559]: Dec 13 05:19:51.256 INFO Fetch successful
Dec 13 05:19:51.258442 unknown[1559]: wrote ssh authorized keys file for user: core
Dec 13 05:19:51.285170 update-ssh-keys[1673]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 05:19:51.287502 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 05:19:51.291618 systemd[1]: Finished sshkeys.service.
Dec 13 05:19:51.293084 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 05:19:51.293378 systemd[1]: Startup finished in 1.504s (kernel) + 17.515s (initrd) + 12.075s (userspace) = 31.095s.
Dec 13 05:19:51.863504 sshd[1661]: pam_unix(sshd:session): session closed for user core
Dec 13 05:19:51.868399 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit.
Dec 13 05:19:51.869543 systemd[1]: sshd@2-10.244.19.70:22-147.75.109.163:33972.service: Deactivated successfully.
Dec 13 05:19:51.871883 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 05:19:51.873685 systemd-logind[1485]: Removed session 5.
Dec 13 05:19:52.631655 systemd[1]: Started sshd@3-10.244.19.70:22-46.19.143.66:53724.service - OpenSSH per-connection server daemon (46.19.143.66:53724).
Dec 13 05:19:54.126747 sshd[1681]: Connection closed by authenticating user root 46.19.143.66 port 53724 [preauth]
Dec 13 05:19:54.129120 systemd[1]: sshd@3-10.244.19.70:22-46.19.143.66:53724.service: Deactivated successfully.
Dec 13 05:19:54.221506 systemd[1]: Started sshd@4-10.244.19.70:22-46.19.143.66:33210.service - OpenSSH per-connection server daemon (46.19.143.66:33210).
Dec 13 05:19:55.144864 sshd[1686]: Connection closed by authenticating user root 46.19.143.66 port 33210 [preauth]
Dec 13 05:19:55.148601 systemd[1]: sshd@4-10.244.19.70:22-46.19.143.66:33210.service: Deactivated successfully.
Dec 13 05:19:55.216060 systemd[1]: Started sshd@5-10.244.19.70:22-46.19.143.66:33214.service - OpenSSH per-connection server daemon (46.19.143.66:33214).
Dec 13 05:19:56.134358 sshd[1691]: Connection closed by authenticating user root 46.19.143.66 port 33214 [preauth]
Dec 13 05:19:56.138170 systemd[1]: sshd@5-10.244.19.70:22-46.19.143.66:33214.service: Deactivated successfully.
Dec 13 05:19:56.235492 systemd[1]: Started sshd@6-10.244.19.70:22-46.19.143.66:33218.service - OpenSSH per-connection server daemon (46.19.143.66:33218).
Dec 13 05:19:56.507591 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 05:19:56.515509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 05:19:56.706569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 05:19:56.706833 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 05:19:56.773379 kubelet[1706]: E1213 05:19:56.772934 1706 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 05:19:56.777971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 05:19:56.778263 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 05:19:57.058715 sshd[1696]: Connection closed by authenticating user root 46.19.143.66 port 33218 [preauth]
Dec 13 05:19:57.061575 systemd[1]: sshd@6-10.244.19.70:22-46.19.143.66:33218.service: Deactivated successfully.
Dec 13 05:19:57.152911 systemd[1]: Started sshd@7-10.244.19.70:22-46.19.143.66:33230.service - OpenSSH per-connection server daemon (46.19.143.66:33230).
Dec 13 05:19:58.265327 sshd[1716]: Connection closed by authenticating user root 46.19.143.66 port 33230 [preauth]
Dec 13 05:19:58.268957 systemd[1]: sshd@7-10.244.19.70:22-46.19.143.66:33230.service: Deactivated successfully.
Dec 13 05:19:58.316595 systemd[1]: Started sshd@8-10.244.19.70:22-46.19.143.66:33236.service - OpenSSH per-connection server daemon (46.19.143.66:33236).
Dec 13 05:19:59.947580 sshd[1721]: Connection closed by authenticating user root 46.19.143.66 port 33236 [preauth]
Dec 13 05:19:59.950088 systemd[1]: sshd@8-10.244.19.70:22-46.19.143.66:33236.service: Deactivated successfully.
Dec 13 05:20:00.028622 systemd[1]: Started sshd@9-10.244.19.70:22-46.19.143.66:33256.service - OpenSSH per-connection server daemon (46.19.143.66:33256).
Dec 13 05:20:02.091153 systemd[1]: Started sshd@10-10.244.19.70:22-147.75.109.163:41066.service - OpenSSH per-connection server daemon (147.75.109.163:41066).
Dec 13 05:20:02.985164 sshd[1729]: Accepted publickey for core from 147.75.109.163 port 41066 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:20:02.988213 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:20:02.996626 systemd-logind[1485]: New session 6 of user core.
Dec 13 05:20:03.004447 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 05:20:03.615700 sshd[1729]: pam_unix(sshd:session): session closed for user core
Dec 13 05:20:03.622893 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit.
Dec 13 05:20:03.624049 systemd[1]: sshd@10-10.244.19.70:22-147.75.109.163:41066.service: Deactivated successfully.
Dec 13 05:20:03.627946 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 05:20:03.630039 systemd-logind[1485]: Removed session 6.
Dec 13 05:20:03.778302 systemd[1]: Started sshd@11-10.244.19.70:22-147.75.109.163:41072.service - OpenSSH per-connection server daemon (147.75.109.163:41072).
Dec 13 05:20:04.677436 sshd[1736]: Accepted publickey for core from 147.75.109.163 port 41072 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:20:04.680740 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:20:04.687807 systemd-logind[1485]: New session 7 of user core.
Dec 13 05:20:04.695513 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 05:20:05.301187 sshd[1736]: pam_unix(sshd:session): session closed for user core
Dec 13 05:20:05.307378 systemd[1]: sshd@11-10.244.19.70:22-147.75.109.163:41072.service: Deactivated successfully.
Dec 13 05:20:05.310777 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 05:20:05.312764 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit.
Dec 13 05:20:05.314571 systemd-logind[1485]: Removed session 7.
Dec 13 05:20:05.470201 systemd[1]: Started sshd@12-10.244.19.70:22-147.75.109.163:41082.service - OpenSSH per-connection server daemon (147.75.109.163:41082).
Dec 13 05:20:06.361962 sshd[1743]: Accepted publickey for core from 147.75.109.163 port 41082 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:20:06.364431 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:20:06.379961 systemd-logind[1485]: New session 8 of user core.
Dec 13 05:20:06.390644 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 05:20:06.809835 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 05:20:06.825567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 05:20:06.989423 sshd[1743]: pam_unix(sshd:session): session closed for user core
Dec 13 05:20:06.990037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 05:20:06.990345 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 05:20:06.996710 systemd[1]: sshd@12-10.244.19.70:22-147.75.109.163:41082.service: Deactivated successfully.
Dec 13 05:20:07.002099 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 05:20:07.003856 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit.
Dec 13 05:20:07.005892 systemd-logind[1485]: Removed session 8.
Dec 13 05:20:07.055376 kubelet[1755]: E1213 05:20:07.055299 1755 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 05:20:07.058793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 05:20:07.059091 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 05:20:07.140709 systemd[1]: Started sshd@13-10.244.19.70:22-147.75.109.163:57274.service - OpenSSH per-connection server daemon (147.75.109.163:57274).
Dec 13 05:20:08.035777 sshd[1766]: Accepted publickey for core from 147.75.109.163 port 57274 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:20:08.038036 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:20:08.046391 systemd-logind[1485]: New session 9 of user core.
Dec 13 05:20:08.053593 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 05:20:08.606560 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 05:20:08.607139 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 05:20:08.627520 sudo[1769]: pam_unix(sudo:session): session closed for user root
Dec 13 05:20:08.773257 sshd[1766]: pam_unix(sshd:session): session closed for user core
Dec 13 05:20:08.778458 systemd[1]: sshd@13-10.244.19.70:22-147.75.109.163:57274.service: Deactivated successfully.
Dec 13 05:20:08.780976 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 05:20:08.782996 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit.
Dec 13 05:20:08.785238 systemd-logind[1485]: Removed session 9.
Dec 13 05:20:08.940749 systemd[1]: Started sshd@14-10.244.19.70:22-147.75.109.163:57290.service - OpenSSH per-connection server daemon (147.75.109.163:57290).
Dec 13 05:20:09.827858 sshd[1774]: Accepted publickey for core from 147.75.109.163 port 57290 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:20:09.830950 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:20:09.842488 systemd-logind[1485]: New session 10 of user core.
Dec 13 05:20:09.849117 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 05:20:10.314186 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 05:20:10.314734 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 05:20:10.322914 sudo[1778]: pam_unix(sudo:session): session closed for user root
Dec 13 05:20:10.334488 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 05:20:10.335063 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 05:20:10.367189 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 05:20:10.371139 auditctl[1781]: No rules
Dec 13 05:20:10.371762 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 05:20:10.372148 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 05:20:10.380803 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 05:20:10.395882 sshd[1726]: Connection closed by authenticating user root 46.19.143.66 port 33256 [preauth]
Dec 13 05:20:10.405276 systemd[1]: sshd@9-10.244.19.70:22-46.19.143.66:33256.service: Deactivated successfully.
Dec 13 05:20:10.459163 augenrules[1801]: No rules
Dec 13 05:20:10.461866 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 05:20:10.466509 sudo[1777]: pam_unix(sudo:session): session closed for user root
Dec 13 05:20:10.501827 systemd[1]: Started sshd@15-10.244.19.70:22-46.19.143.66:55950.service - OpenSSH per-connection server daemon (46.19.143.66:55950).
Dec 13 05:20:10.611625 sshd[1774]: pam_unix(sshd:session): session closed for user core
Dec 13 05:20:10.617544 systemd[1]: sshd@14-10.244.19.70:22-147.75.109.163:57290.service: Deactivated successfully.
Dec 13 05:20:10.622889 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 05:20:10.625614 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit.
Dec 13 05:20:10.627705 systemd-logind[1485]: Removed session 10.
Dec 13 05:20:10.789540 systemd[1]: Started sshd@16-10.244.19.70:22-147.75.109.163:57304.service - OpenSSH per-connection server daemon (147.75.109.163:57304).
Dec 13 05:20:11.119066 sshd[1807]: Connection closed by authenticating user root 46.19.143.66 port 55950 [preauth]
Dec 13 05:20:11.121986 systemd[1]: sshd@15-10.244.19.70:22-46.19.143.66:55950.service: Deactivated successfully.
Dec 13 05:20:11.224735 systemd[1]: Started sshd@17-10.244.19.70:22-46.19.143.66:55954.service - OpenSSH per-connection server daemon (46.19.143.66:55954).
Dec 13 05:20:11.676029 sshd[1812]: Accepted publickey for core from 147.75.109.163 port 57304 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:20:11.678237 sshd[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:20:11.686523 systemd-logind[1485]: New session 11 of user core.
Dec 13 05:20:11.697560 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 05:20:11.930026 sshd[1817]: Connection closed by authenticating user root 46.19.143.66 port 55954 [preauth]
Dec 13 05:20:11.933644 systemd[1]: sshd@17-10.244.19.70:22-46.19.143.66:55954.service: Deactivated successfully.
Dec 13 05:20:12.019647 systemd[1]: Started sshd@18-10.244.19.70:22-46.19.143.66:55956.service - OpenSSH per-connection server daemon (46.19.143.66:55956).
Dec 13 05:20:12.157300 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 05:20:12.157859 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 05:20:12.406163 sshd[1823]: Connection closed by authenticating user root 46.19.143.66 port 55956 [preauth]
Dec 13 05:20:12.409749 systemd[1]: sshd@18-10.244.19.70:22-46.19.143.66:55956.service: Deactivated successfully.
Dec 13 05:20:12.469939 systemd[1]: Started sshd@19-10.244.19.70:22-46.19.143.66:55960.service - OpenSSH per-connection server daemon (46.19.143.66:55960).
Dec 13 05:20:12.685654 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 05:20:12.686899 (dockerd)[1845]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 05:20:12.865136 sshd[1838]: Connection closed by authenticating user root 46.19.143.66 port 55960 [preauth]
Dec 13 05:20:12.865636 systemd[1]: sshd@19-10.244.19.70:22-46.19.143.66:55960.service: Deactivated successfully.
Dec 13 05:20:12.914714 systemd[1]: Started sshd@20-10.244.19.70:22-46.19.143.66:55972.service - OpenSSH per-connection server daemon (46.19.143.66:55972).
Dec 13 05:20:13.160369 dockerd[1845]: time="2024-12-13T05:20:13.160247203Z" level=info msg="Starting up"
Dec 13 05:20:13.280185 sshd[1853]: Connection closed by authenticating user root 46.19.143.66 port 55972 [preauth]
Dec 13 05:20:13.290137 systemd[1]: sshd@20-10.244.19.70:22-46.19.143.66:55972.service: Deactivated successfully.
Dec 13 05:20:13.323137 dockerd[1845]: time="2024-12-13T05:20:13.322406960Z" level=info msg="Loading containers: start."
Dec 13 05:20:13.328797 systemd[1]: Started sshd@21-10.244.19.70:22-46.19.143.66:57734.service - OpenSSH per-connection server daemon (46.19.143.66:57734).
Dec 13 05:20:13.495547 kernel: Initializing XFRM netlink socket
Dec 13 05:20:13.632542 systemd-networkd[1430]: docker0: Link UP
Dec 13 05:20:13.646145 sshd[1873]: Connection closed by authenticating user root 46.19.143.66 port 57734 [preauth]
Dec 13 05:20:13.649317 systemd[1]: sshd@21-10.244.19.70:22-46.19.143.66:57734.service: Deactivated successfully.
Dec 13 05:20:13.653300 dockerd[1845]: time="2024-12-13T05:20:13.651720800Z" level=info msg="Loading containers: done."
Dec 13 05:20:13.678151 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2619671769-merged.mount: Deactivated successfully.
Dec 13 05:20:13.681920 dockerd[1845]: time="2024-12-13T05:20:13.681095410Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 05:20:13.681920 dockerd[1845]: time="2024-12-13T05:20:13.681316970Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 05:20:13.681920 dockerd[1845]: time="2024-12-13T05:20:13.681517024Z" level=info msg="Daemon has completed initialization"
Dec 13 05:20:13.728285 dockerd[1845]: time="2024-12-13T05:20:13.728145858Z" level=info msg="API listen on /run/docker.sock"
Dec 13 05:20:13.729121 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 05:20:14.851702 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 05:20:15.222050 containerd[1505]: time="2024-12-13T05:20:15.221151472Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Dec 13 05:20:16.178498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2179587428.mount: Deactivated successfully.
Dec 13 05:20:17.060582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 05:20:17.069515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 05:20:17.299399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 05:20:17.302569 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 05:20:17.409255 kubelet[2071]: E1213 05:20:17.408835 2071 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 05:20:17.413480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 05:20:17.413742 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 05:20:18.450441 containerd[1505]: time="2024-12-13T05:20:18.449550264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:18.453241 containerd[1505]: time="2024-12-13T05:20:18.453128187Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675650"
Dec 13 05:20:18.453947 containerd[1505]: time="2024-12-13T05:20:18.453322508Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:18.462946 containerd[1505]: time="2024-12-13T05:20:18.462433036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:18.466345 containerd[1505]: time="2024-12-13T05:20:18.465859690Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 3.244578224s"
Dec 13 05:20:18.466345 containerd[1505]: time="2024-12-13T05:20:18.465951427Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\""
Dec 13 05:20:18.503099 containerd[1505]: time="2024-12-13T05:20:18.502729694Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Dec 13 05:20:20.936419 containerd[1505]: time="2024-12-13T05:20:20.936351536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:20.937831 containerd[1505]: time="2024-12-13T05:20:20.937780865Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606417"
Dec 13 05:20:20.938653 containerd[1505]: time="2024-12-13T05:20:20.938587626Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:20.943358 containerd[1505]: time="2024-12-13T05:20:20.943276382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:20.945259 containerd[1505]: time="2024-12-13T05:20:20.945152906Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.442365448s"
Dec 13 05:20:20.945259 containerd[1505]: time="2024-12-13T05:20:20.945217026Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\""
Dec 13 05:20:20.976476 containerd[1505]: time="2024-12-13T05:20:20.976414279Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Dec 13 05:20:23.283232 containerd[1505]: time="2024-12-13T05:20:23.282341288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:23.284458 containerd[1505]: time="2024-12-13T05:20:23.284403511Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783043"
Dec 13 05:20:23.285833 containerd[1505]: time="2024-12-13T05:20:23.285740381Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:23.290157 containerd[1505]: time="2024-12-13T05:20:23.290051595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:23.291996 containerd[1505]: time="2024-12-13T05:20:23.291735023Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 2.315255871s"
Dec 13 05:20:23.291996 containerd[1505]: time="2024-12-13T05:20:23.291787099Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\""
Dec 13 05:20:23.322732 containerd[1505]: time="2024-12-13T05:20:23.322657056Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 05:20:25.067705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount675132616.mount: Deactivated successfully.
Dec 13 05:20:25.751356 containerd[1505]: time="2024-12-13T05:20:25.751192207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:25.752460 containerd[1505]: time="2024-12-13T05:20:25.752279284Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478"
Dec 13 05:20:25.753206 containerd[1505]: time="2024-12-13T05:20:25.753137969Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:25.755992 containerd[1505]: time="2024-12-13T05:20:25.755930070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:25.757341 containerd[1505]: time="2024-12-13T05:20:25.757131345Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.434049636s"
Dec 13 05:20:25.757341 containerd[1505]: time="2024-12-13T05:20:25.757187851Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Dec 13 05:20:25.791146 containerd[1505]: time="2024-12-13T05:20:25.790938645Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 05:20:26.444320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2399072127.mount: Deactivated successfully.
Dec 13 05:20:27.559934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 05:20:27.574267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 05:20:27.738390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 05:20:27.757928 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 05:20:27.947058 kubelet[2165]: E1213 05:20:27.945141 2165 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 05:20:27.947456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 05:20:27.947683 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 05:20:28.189727 containerd[1505]: time="2024-12-13T05:20:28.189665721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:28.191191 containerd[1505]: time="2024-12-13T05:20:28.191147240Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Dec 13 05:20:28.192042 containerd[1505]: time="2024-12-13T05:20:28.191686740Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:28.196703 containerd[1505]: time="2024-12-13T05:20:28.196617937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:28.199154 containerd[1505]: time="2024-12-13T05:20:28.198173351Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.407179211s"
Dec 13 05:20:28.199154 containerd[1505]: time="2024-12-13T05:20:28.198228735Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 05:20:28.227586 containerd[1505]: time="2024-12-13T05:20:28.227516806Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 05:20:28.833600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4070765789.mount: Deactivated successfully.
Dec 13 05:20:28.839277 containerd[1505]: time="2024-12-13T05:20:28.839199404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:28.845965 containerd[1505]: time="2024-12-13T05:20:28.845881299Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Dec 13 05:20:28.846097 containerd[1505]: time="2024-12-13T05:20:28.846027608Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:28.849898 containerd[1505]: time="2024-12-13T05:20:28.849823976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:28.851131 containerd[1505]: time="2024-12-13T05:20:28.851005201Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 623.195745ms"
Dec 13 05:20:28.851131 containerd[1505]: time="2024-12-13T05:20:28.851049885Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 05:20:28.880486 containerd[1505]: time="2024-12-13T05:20:28.880379294Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Dec 13 05:20:29.204313 update_engine[1486]: I20241213 05:20:29.203325 1486 update_attempter.cc:509] Updating boot flags...
Dec 13 05:20:29.255153 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2195)
Dec 13 05:20:29.376140 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2195)
Dec 13 05:20:29.640746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055297999.mount: Deactivated successfully.
Dec 13 05:20:34.252327 containerd[1505]: time="2024-12-13T05:20:34.252217376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:34.253765 containerd[1505]: time="2024-12-13T05:20:34.253718136Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579"
Dec 13 05:20:34.255073 containerd[1505]: time="2024-12-13T05:20:34.254042264Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:34.258238 containerd[1505]: time="2024-12-13T05:20:34.258158857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 05:20:34.260162 containerd[1505]: time="2024-12-13T05:20:34.259938377Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 5.379508571s"
Dec 13 05:20:34.260162 containerd[1505]: time="2024-12-13T05:20:34.259985713Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Dec 13 05:20:38.050036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 13 05:20:38.058636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 05:20:38.073779 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 05:20:38.073895 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 05:20:38.074217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 05:20:38.077523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 05:20:38.110228 systemd[1]: Reloading requested from client PID 2312 ('systemctl') (unit session-11.scope)...
Dec 13 05:20:38.110681 systemd[1]: Reloading...
Dec 13 05:20:38.323198 zram_generator::config[2350]: No configuration found.
Dec 13 05:20:38.464392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 05:20:38.572192 systemd[1]: Reloading finished in 460 ms.
Dec 13 05:20:38.793176 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 05:20:38.793337 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 05:20:38.794993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 05:20:38.803675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 05:20:39.061381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 05:20:39.076234 (kubelet)[2416]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 05:20:39.146176 kubelet[2416]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 05:20:39.146176 kubelet[2416]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 05:20:39.146176 kubelet[2416]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 05:20:39.158329 kubelet[2416]: I1213 05:20:39.157783 2416 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 05:20:39.632157 kubelet[2416]: I1213 05:20:39.631187 2416 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 05:20:39.632157 kubelet[2416]: I1213 05:20:39.631232 2416 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 05:20:39.632157 kubelet[2416]: I1213 05:20:39.632027 2416 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 05:20:39.658587 kubelet[2416]: I1213 05:20:39.658538 2416 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 05:20:39.664312 kubelet[2416]: E1213 05:20:39.664266 2416 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.19.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.19.70:6443: connect: connection refused
Dec 13 05:20:39.679527 kubelet[2416]: I1213 05:20:39.679481 2416 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 05:20:39.682317 kubelet[2416]: I1213 05:20:39.682209 2416 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 05:20:39.683814 kubelet[2416]: I1213 05:20:39.682295 2416 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-ch81y.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 05:20:39.684501 kubelet[2416]: I1213 05:20:39.684455 2416 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 05:20:39.684501 kubelet[2416]: I1213 05:20:39.684491 2416 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 05:20:39.689331 kubelet[2416]: I1213 05:20:39.689268 2416 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:20:39.690409 kubelet[2416]: I1213 05:20:39.690355 2416 kubelet.go:400] "Attempting to sync node with API server" Dec 13 05:20:39.690409 kubelet[2416]: I1213 05:20:39.690393 2416 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 05:20:39.691968 kubelet[2416]: I1213 05:20:39.691574 2416 kubelet.go:312] "Adding apiserver pod source" Dec 13 05:20:39.691968 kubelet[2416]: I1213 05:20:39.691635 2416 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 05:20:39.696883 kubelet[2416]: W1213 05:20:39.696790 2416 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.19.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ch81y.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:39.697056 kubelet[2416]: E1213 05:20:39.697031 2416 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.19.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ch81y.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:39.697300 kubelet[2416]: W1213 05:20:39.697258 2416 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.19.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:39.697434 kubelet[2416]: E1213 05:20:39.697415 2416 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.244.19.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:39.698715 kubelet[2416]: I1213 05:20:39.698447 2416 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 05:20:39.701327 kubelet[2416]: I1213 05:20:39.700533 2416 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 05:20:39.701327 kubelet[2416]: W1213 05:20:39.700663 2416 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 05:20:39.703929 kubelet[2416]: I1213 05:20:39.703702 2416 server.go:1264] "Started kubelet" Dec 13 05:20:39.706436 kubelet[2416]: I1213 05:20:39.705946 2416 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 05:20:39.710794 kubelet[2416]: I1213 05:20:39.710766 2416 server.go:455] "Adding debug handlers to kubelet server" Dec 13 05:20:39.711763 kubelet[2416]: I1213 05:20:39.711414 2416 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 05:20:39.712050 kubelet[2416]: I1213 05:20:39.712020 2416 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 05:20:39.712653 kubelet[2416]: E1213 05:20:39.712275 2416 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.19.70:6443/api/v1/namespaces/default/events\": dial tcp 10.244.19.70:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-ch81y.gb1.brightbox.com.1810a4f78f2dacf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-ch81y.gb1.brightbox.com,UID:srv-ch81y.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:srv-ch81y.gb1.brightbox.com,},FirstTimestamp:2024-12-13 05:20:39.703653622 +0000 UTC m=+0.620946330,LastTimestamp:2024-12-13 05:20:39.703653622 +0000 UTC m=+0.620946330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-ch81y.gb1.brightbox.com,}" Dec 13 05:20:39.716402 kubelet[2416]: I1213 05:20:39.715776 2416 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 05:20:39.723483 kubelet[2416]: E1213 05:20:39.723446 2416 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 05:20:39.723933 kubelet[2416]: E1213 05:20:39.723908 2416 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-ch81y.gb1.brightbox.com\" not found" Dec 13 05:20:39.724083 kubelet[2416]: I1213 05:20:39.724064 2416 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 05:20:39.725452 kubelet[2416]: I1213 05:20:39.725429 2416 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 05:20:39.726327 kubelet[2416]: I1213 05:20:39.725653 2416 reconciler.go:26] "Reconciler: start to sync state" Dec 13 05:20:39.727032 kubelet[2416]: I1213 05:20:39.727004 2416 factory.go:221] Registration of the systemd container factory successfully Dec 13 05:20:39.728721 kubelet[2416]: I1213 05:20:39.728691 2416 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 05:20:39.731142 kubelet[2416]: E1213 05:20:39.727588 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.19.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ch81y.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.19.70:6443: 
connect: connection refused" interval="200ms" Dec 13 05:20:39.732206 kubelet[2416]: W1213 05:20:39.727759 2416 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.19.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:39.732297 kubelet[2416]: E1213 05:20:39.732232 2416 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.19.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:39.734909 kubelet[2416]: I1213 05:20:39.733569 2416 factory.go:221] Registration of the containerd container factory successfully Dec 13 05:20:39.756440 kubelet[2416]: I1213 05:20:39.756367 2416 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 05:20:39.763267 kubelet[2416]: I1213 05:20:39.763236 2416 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 05:20:39.763454 kubelet[2416]: I1213 05:20:39.763434 2416 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 05:20:39.763596 kubelet[2416]: I1213 05:20:39.763576 2416 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 05:20:39.763784 kubelet[2416]: E1213 05:20:39.763747 2416 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 05:20:39.765127 kubelet[2416]: W1213 05:20:39.765053 2416 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.19.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:39.765830 kubelet[2416]: E1213 05:20:39.765807 2416 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.19.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:39.772282 kubelet[2416]: I1213 05:20:39.772253 2416 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 05:20:39.772282 kubelet[2416]: I1213 05:20:39.772279 2416 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 05:20:39.772450 kubelet[2416]: I1213 05:20:39.772325 2416 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:20:39.774194 kubelet[2416]: I1213 05:20:39.774169 2416 policy_none.go:49] "None policy: Start" Dec 13 05:20:39.775183 kubelet[2416]: I1213 05:20:39.775157 2416 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 05:20:39.775436 kubelet[2416]: I1213 05:20:39.775381 2416 state_mem.go:35] "Initializing new in-memory state store" Dec 13 05:20:39.784775 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 05:20:39.804761 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 05:20:39.816554 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 05:20:39.818764 kubelet[2416]: I1213 05:20:39.818468 2416 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 05:20:39.818873 kubelet[2416]: I1213 05:20:39.818786 2416 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 05:20:39.819202 kubelet[2416]: I1213 05:20:39.819015 2416 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 05:20:39.821801 kubelet[2416]: E1213 05:20:39.821775 2416 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-ch81y.gb1.brightbox.com\" not found" Dec 13 05:20:39.828030 kubelet[2416]: I1213 05:20:39.827442 2416 kubelet_node_status.go:73] "Attempting to register node" node="srv-ch81y.gb1.brightbox.com" Dec 13 05:20:39.828030 kubelet[2416]: E1213 05:20:39.827925 2416 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.19.70:6443/api/v1/nodes\": dial tcp 10.244.19.70:6443: connect: connection refused" node="srv-ch81y.gb1.brightbox.com" Dec 13 05:20:39.864397 kubelet[2416]: I1213 05:20:39.864277 2416 topology_manager.go:215] "Topology Admit Handler" podUID="bb72411f83caa2db9a0da4ebb56bae07" podNamespace="kube-system" podName="kube-apiserver-srv-ch81y.gb1.brightbox.com" Dec 13 05:20:39.868842 kubelet[2416]: I1213 05:20:39.868645 2416 topology_manager.go:215] "Topology Admit Handler" podUID="fb084671e490a455b4043376a2aad30c" podNamespace="kube-system" podName="kube-controller-manager-srv-ch81y.gb1.brightbox.com" Dec 13 05:20:39.871882 kubelet[2416]: I1213 05:20:39.871839 2416 topology_manager.go:215] "Topology Admit Handler" 
podUID="a214ab7c918b6b67957afb0ee10ff16b" podNamespace="kube-system" podName="kube-scheduler-srv-ch81y.gb1.brightbox.com" Dec 13 05:20:39.881923 systemd[1]: Created slice kubepods-burstable-podbb72411f83caa2db9a0da4ebb56bae07.slice - libcontainer container kubepods-burstable-podbb72411f83caa2db9a0da4ebb56bae07.slice. Dec 13 05:20:39.902372 systemd[1]: Created slice kubepods-burstable-podfb084671e490a455b4043376a2aad30c.slice - libcontainer container kubepods-burstable-podfb084671e490a455b4043376a2aad30c.slice. Dec 13 05:20:39.911177 systemd[1]: Created slice kubepods-burstable-poda214ab7c918b6b67957afb0ee10ff16b.slice - libcontainer container kubepods-burstable-poda214ab7c918b6b67957afb0ee10ff16b.slice. Dec 13 05:20:39.933698 kubelet[2416]: E1213 05:20:39.933582 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.19.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ch81y.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.19.70:6443: connect: connection refused" interval="400ms" Dec 13 05:20:40.027320 kubelet[2416]: I1213 05:20:40.027076 2416 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fb084671e490a455b4043376a2aad30c-flexvolume-dir\") pod \"kube-controller-manager-srv-ch81y.gb1.brightbox.com\" (UID: \"fb084671e490a455b4043376a2aad30c\") " pod="kube-system/kube-controller-manager-srv-ch81y.gb1.brightbox.com" Dec 13 05:20:40.027320 kubelet[2416]: I1213 05:20:40.027205 2416 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb084671e490a455b4043376a2aad30c-k8s-certs\") pod \"kube-controller-manager-srv-ch81y.gb1.brightbox.com\" (UID: \"fb084671e490a455b4043376a2aad30c\") " pod="kube-system/kube-controller-manager-srv-ch81y.gb1.brightbox.com" Dec 13 05:20:40.027320 kubelet[2416]: I1213 05:20:40.027245 
2416 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb084671e490a455b4043376a2aad30c-kubeconfig\") pod \"kube-controller-manager-srv-ch81y.gb1.brightbox.com\" (UID: \"fb084671e490a455b4043376a2aad30c\") " pod="kube-system/kube-controller-manager-srv-ch81y.gb1.brightbox.com" Dec 13 05:20:40.027320 kubelet[2416]: I1213 05:20:40.027274 2416 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb72411f83caa2db9a0da4ebb56bae07-ca-certs\") pod \"kube-apiserver-srv-ch81y.gb1.brightbox.com\" (UID: \"bb72411f83caa2db9a0da4ebb56bae07\") " pod="kube-system/kube-apiserver-srv-ch81y.gb1.brightbox.com" Dec 13 05:20:40.027320 kubelet[2416]: I1213 05:20:40.027303 2416 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb72411f83caa2db9a0da4ebb56bae07-k8s-certs\") pod \"kube-apiserver-srv-ch81y.gb1.brightbox.com\" (UID: \"bb72411f83caa2db9a0da4ebb56bae07\") " pod="kube-system/kube-apiserver-srv-ch81y.gb1.brightbox.com" Dec 13 05:20:40.027726 kubelet[2416]: I1213 05:20:40.027347 2416 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb72411f83caa2db9a0da4ebb56bae07-usr-share-ca-certificates\") pod \"kube-apiserver-srv-ch81y.gb1.brightbox.com\" (UID: \"bb72411f83caa2db9a0da4ebb56bae07\") " pod="kube-system/kube-apiserver-srv-ch81y.gb1.brightbox.com" Dec 13 05:20:40.027726 kubelet[2416]: I1213 05:20:40.027381 2416 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb084671e490a455b4043376a2aad30c-ca-certs\") pod \"kube-controller-manager-srv-ch81y.gb1.brightbox.com\" (UID: 
\"fb084671e490a455b4043376a2aad30c\") " pod="kube-system/kube-controller-manager-srv-ch81y.gb1.brightbox.com" Dec 13 05:20:40.027726 kubelet[2416]: I1213 05:20:40.027411 2416 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb084671e490a455b4043376a2aad30c-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-ch81y.gb1.brightbox.com\" (UID: \"fb084671e490a455b4043376a2aad30c\") " pod="kube-system/kube-controller-manager-srv-ch81y.gb1.brightbox.com" Dec 13 05:20:40.027726 kubelet[2416]: I1213 05:20:40.027443 2416 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a214ab7c918b6b67957afb0ee10ff16b-kubeconfig\") pod \"kube-scheduler-srv-ch81y.gb1.brightbox.com\" (UID: \"a214ab7c918b6b67957afb0ee10ff16b\") " pod="kube-system/kube-scheduler-srv-ch81y.gb1.brightbox.com" Dec 13 05:20:40.031785 kubelet[2416]: I1213 05:20:40.031746 2416 kubelet_node_status.go:73] "Attempting to register node" node="srv-ch81y.gb1.brightbox.com" Dec 13 05:20:40.032432 kubelet[2416]: E1213 05:20:40.032304 2416 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.19.70:6443/api/v1/nodes\": dial tcp 10.244.19.70:6443: connect: connection refused" node="srv-ch81y.gb1.brightbox.com" Dec 13 05:20:40.201458 containerd[1505]: time="2024-12-13T05:20:40.201196493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-ch81y.gb1.brightbox.com,Uid:bb72411f83caa2db9a0da4ebb56bae07,Namespace:kube-system,Attempt:0,}" Dec 13 05:20:40.215781 containerd[1505]: time="2024-12-13T05:20:40.215346545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-ch81y.gb1.brightbox.com,Uid:fb084671e490a455b4043376a2aad30c,Namespace:kube-system,Attempt:0,}" Dec 13 05:20:40.217855 containerd[1505]: 
time="2024-12-13T05:20:40.217131370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-ch81y.gb1.brightbox.com,Uid:a214ab7c918b6b67957afb0ee10ff16b,Namespace:kube-system,Attempt:0,}" Dec 13 05:20:40.335000 kubelet[2416]: E1213 05:20:40.334857 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.19.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ch81y.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.19.70:6443: connect: connection refused" interval="800ms" Dec 13 05:20:40.437514 kubelet[2416]: I1213 05:20:40.437468 2416 kubelet_node_status.go:73] "Attempting to register node" node="srv-ch81y.gb1.brightbox.com" Dec 13 05:20:40.438018 kubelet[2416]: E1213 05:20:40.437938 2416 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.19.70:6443/api/v1/nodes\": dial tcp 10.244.19.70:6443: connect: connection refused" node="srv-ch81y.gb1.brightbox.com" Dec 13 05:20:40.547942 kubelet[2416]: W1213 05:20:40.547461 2416 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.19.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:40.547942 kubelet[2416]: E1213 05:20:40.547539 2416 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.19.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:40.684520 kubelet[2416]: W1213 05:20:40.684448 2416 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.19.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:40.684520 kubelet[2416]: E1213 
05:20:40.684520 2416 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.19.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:40.877609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1362748722.mount: Deactivated successfully. Dec 13 05:20:40.909578 containerd[1505]: time="2024-12-13T05:20:40.908024618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:20:40.917337 containerd[1505]: time="2024-12-13T05:20:40.917256058Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 05:20:40.917509 kubelet[2416]: W1213 05:20:40.917398 2416 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.19.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ch81y.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:40.917509 kubelet[2416]: E1213 05:20:40.917487 2416 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.19.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ch81y.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:40.919125 containerd[1505]: time="2024-12-13T05:20:40.918761050Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:20:40.920217 containerd[1505]: time="2024-12-13T05:20:40.920173426Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active 
requests=0, bytes read=0" Dec 13 05:20:40.927858 containerd[1505]: time="2024-12-13T05:20:40.927811234Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:20:40.929357 containerd[1505]: time="2024-12-13T05:20:40.929271370Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:20:40.930849 containerd[1505]: time="2024-12-13T05:20:40.930755534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 05:20:40.933358 containerd[1505]: time="2024-12-13T05:20:40.933275089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:20:40.936909 containerd[1505]: time="2024-12-13T05:20:40.936645028Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 721.173117ms" Dec 13 05:20:40.940212 containerd[1505]: time="2024-12-13T05:20:40.940084343Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 738.766686ms" Dec 13 05:20:40.941453 containerd[1505]: 
time="2024-12-13T05:20:40.941382508Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 724.15703ms" Dec 13 05:20:40.988334 kubelet[2416]: W1213 05:20:40.988032 2416 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.19.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:40.988334 kubelet[2416]: E1213 05:20:40.988334 2416 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.19.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.19.70:6443: connect: connection refused Dec 13 05:20:41.136171 kubelet[2416]: E1213 05:20:41.135653 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.19.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ch81y.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.19.70:6443: connect: connection refused" interval="1.6s" Dec 13 05:20:41.163058 containerd[1505]: time="2024-12-13T05:20:41.162854864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:20:41.163058 containerd[1505]: time="2024-12-13T05:20:41.163009030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:20:41.163761 containerd[1505]: time="2024-12-13T05:20:41.163037126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:20:41.163761 containerd[1505]: time="2024-12-13T05:20:41.163297239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:20:41.178817 containerd[1505]: time="2024-12-13T05:20:41.178664043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:20:41.179043 containerd[1505]: time="2024-12-13T05:20:41.178777907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:20:41.179043 containerd[1505]: time="2024-12-13T05:20:41.178805353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:20:41.179043 containerd[1505]: time="2024-12-13T05:20:41.178938980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:20:41.185307 containerd[1505]: time="2024-12-13T05:20:41.184971600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:20:41.185307 containerd[1505]: time="2024-12-13T05:20:41.185058142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:20:41.187870 containerd[1505]: time="2024-12-13T05:20:41.187178090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:20:41.187870 containerd[1505]: time="2024-12-13T05:20:41.187413776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:20:41.226519 systemd[1]: Started cri-containerd-9811d20b1aa9082ca5330102ca439acac008a7507de7de04743fd56f8dca9b8d.scope - libcontainer container 9811d20b1aa9082ca5330102ca439acac008a7507de7de04743fd56f8dca9b8d. Dec 13 05:20:41.242261 systemd[1]: Started cri-containerd-0e162bce91dfe524ed20d6f0302f8d16091ceb0e7e74f6b32f80093be916a11b.scope - libcontainer container 0e162bce91dfe524ed20d6f0302f8d16091ceb0e7e74f6b32f80093be916a11b. Dec 13 05:20:41.279477 kubelet[2416]: I1213 05:20:41.245440 2416 kubelet_node_status.go:73] "Attempting to register node" node="srv-ch81y.gb1.brightbox.com" Dec 13 05:20:41.279477 kubelet[2416]: E1213 05:20:41.245984 2416 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.19.70:6443/api/v1/nodes\": dial tcp 10.244.19.70:6443: connect: connection refused" node="srv-ch81y.gb1.brightbox.com" Dec 13 05:20:41.254935 systemd[1]: Started cri-containerd-95e72cf22e79d9d007736f51d331ad217cde50644690b8e3d794dddc710f4be3.scope - libcontainer container 95e72cf22e79d9d007736f51d331ad217cde50644690b8e3d794dddc710f4be3. 
Dec 13 05:20:41.342140 containerd[1505]: time="2024-12-13T05:20:41.341562825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-ch81y.gb1.brightbox.com,Uid:bb72411f83caa2db9a0da4ebb56bae07,Namespace:kube-system,Attempt:0,} returns sandbox id \"9811d20b1aa9082ca5330102ca439acac008a7507de7de04743fd56f8dca9b8d\""
Dec 13 05:20:41.362074 containerd[1505]: time="2024-12-13T05:20:41.361683930Z" level=info msg="CreateContainer within sandbox \"9811d20b1aa9082ca5330102ca439acac008a7507de7de04743fd56f8dca9b8d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 05:20:41.363184 containerd[1505]: time="2024-12-13T05:20:41.362816595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-ch81y.gb1.brightbox.com,Uid:fb084671e490a455b4043376a2aad30c,Namespace:kube-system,Attempt:0,} returns sandbox id \"95e72cf22e79d9d007736f51d331ad217cde50644690b8e3d794dddc710f4be3\""
Dec 13 05:20:41.370688 containerd[1505]: time="2024-12-13T05:20:41.370355931Z" level=info msg="CreateContainer within sandbox \"95e72cf22e79d9d007736f51d331ad217cde50644690b8e3d794dddc710f4be3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 05:20:41.400883 containerd[1505]: time="2024-12-13T05:20:41.400071888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-ch81y.gb1.brightbox.com,Uid:a214ab7c918b6b67957afb0ee10ff16b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e162bce91dfe524ed20d6f0302f8d16091ceb0e7e74f6b32f80093be916a11b\""
Dec 13 05:20:41.401888 containerd[1505]: time="2024-12-13T05:20:41.401653299Z" level=info msg="CreateContainer within sandbox \"9811d20b1aa9082ca5330102ca439acac008a7507de7de04743fd56f8dca9b8d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa3d48dbd0f1774213250d13623aea57251167c1205c0a4f7bb04736acd19d11\""
Dec 13 05:20:41.403064 containerd[1505]: time="2024-12-13T05:20:41.402910070Z" level=info msg="CreateContainer within sandbox \"95e72cf22e79d9d007736f51d331ad217cde50644690b8e3d794dddc710f4be3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"70461898ede312715c549835ccfca1c42c33e5eecde0c6f1d3d7572b2ff9fdca\""
Dec 13 05:20:41.403311 containerd[1505]: time="2024-12-13T05:20:41.403194852Z" level=info msg="StartContainer for \"aa3d48dbd0f1774213250d13623aea57251167c1205c0a4f7bb04736acd19d11\""
Dec 13 05:20:41.405529 containerd[1505]: time="2024-12-13T05:20:41.404024293Z" level=info msg="StartContainer for \"70461898ede312715c549835ccfca1c42c33e5eecde0c6f1d3d7572b2ff9fdca\""
Dec 13 05:20:41.410818 containerd[1505]: time="2024-12-13T05:20:41.410761379Z" level=info msg="CreateContainer within sandbox \"0e162bce91dfe524ed20d6f0302f8d16091ceb0e7e74f6b32f80093be916a11b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 05:20:41.443593 containerd[1505]: time="2024-12-13T05:20:41.443528672Z" level=info msg="CreateContainer within sandbox \"0e162bce91dfe524ed20d6f0302f8d16091ceb0e7e74f6b32f80093be916a11b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"58beb1d3ac50ddff0d11c5e3bbb881834a8003dd0bff737be12854d62121f0b7\""
Dec 13 05:20:41.445991 containerd[1505]: time="2024-12-13T05:20:41.445953809Z" level=info msg="StartContainer for \"58beb1d3ac50ddff0d11c5e3bbb881834a8003dd0bff737be12854d62121f0b7\""
Dec 13 05:20:41.458621 systemd[1]: Started cri-containerd-70461898ede312715c549835ccfca1c42c33e5eecde0c6f1d3d7572b2ff9fdca.scope - libcontainer container 70461898ede312715c549835ccfca1c42c33e5eecde0c6f1d3d7572b2ff9fdca.
Dec 13 05:20:41.480873 systemd[1]: Started cri-containerd-aa3d48dbd0f1774213250d13623aea57251167c1205c0a4f7bb04736acd19d11.scope - libcontainer container aa3d48dbd0f1774213250d13623aea57251167c1205c0a4f7bb04736acd19d11.
Dec 13 05:20:41.520017 systemd[1]: Started cri-containerd-58beb1d3ac50ddff0d11c5e3bbb881834a8003dd0bff737be12854d62121f0b7.scope - libcontainer container 58beb1d3ac50ddff0d11c5e3bbb881834a8003dd0bff737be12854d62121f0b7.
Dec 13 05:20:41.577825 containerd[1505]: time="2024-12-13T05:20:41.577758879Z" level=info msg="StartContainer for \"70461898ede312715c549835ccfca1c42c33e5eecde0c6f1d3d7572b2ff9fdca\" returns successfully"
Dec 13 05:20:41.622692 containerd[1505]: time="2024-12-13T05:20:41.622618691Z" level=info msg="StartContainer for \"aa3d48dbd0f1774213250d13623aea57251167c1205c0a4f7bb04736acd19d11\" returns successfully"
Dec 13 05:20:41.644385 containerd[1505]: time="2024-12-13T05:20:41.644278531Z" level=info msg="StartContainer for \"58beb1d3ac50ddff0d11c5e3bbb881834a8003dd0bff737be12854d62121f0b7\" returns successfully"
Dec 13 05:20:41.689584 kubelet[2416]: E1213 05:20:41.689403 2416 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.19.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.19.70:6443: connect: connection refused
Dec 13 05:20:42.851870 kubelet[2416]: I1213 05:20:42.851823 2416 kubelet_node_status.go:73] "Attempting to register node" node="srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:44.294352 kubelet[2416]: E1213 05:20:44.294284 2416 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-ch81y.gb1.brightbox.com\" not found" node="srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:44.406320 kubelet[2416]: I1213 05:20:44.405994 2416 kubelet_node_status.go:76] "Successfully registered node" node="srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:44.698060 kubelet[2416]: I1213 05:20:44.697968 2416 apiserver.go:52] "Watching apiserver"
Dec 13 05:20:44.726944 kubelet[2416]: I1213 05:20:44.726838 2416 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 05:20:46.369478 kubelet[2416]: W1213 05:20:46.368623 2416 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 05:20:46.526922 systemd[1]: Reloading requested from client PID 2696 ('systemctl') (unit session-11.scope)...
Dec 13 05:20:46.526956 systemd[1]: Reloading...
Dec 13 05:20:46.642213 zram_generator::config[2732]: No configuration found.
Dec 13 05:20:46.850472 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 05:20:46.978719 systemd[1]: Reloading finished in 450 ms.
Dec 13 05:20:47.040981 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 05:20:47.041675 kubelet[2416]: E1213 05:20:47.040808 2416 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{srv-ch81y.gb1.brightbox.com.1810a4f78f2dacf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-ch81y.gb1.brightbox.com,UID:srv-ch81y.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-ch81y.gb1.brightbox.com,},FirstTimestamp:2024-12-13 05:20:39.703653622 +0000 UTC m=+0.620946330,LastTimestamp:2024-12-13 05:20:39.703653622 +0000 UTC m=+0.620946330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-ch81y.gb1.brightbox.com,}"
Dec 13 05:20:47.041675 kubelet[2416]: I1213 05:20:47.041299 2416 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 05:20:47.055121 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 05:20:47.055791 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 05:20:47.055896 systemd[1]: kubelet.service: Consumed 1.162s CPU time, 114.2M memory peak, 0B memory swap peak.
Dec 13 05:20:47.062658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 05:20:47.264310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 05:20:47.278250 (kubelet)[2799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 05:20:47.363055 kubelet[2799]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 05:20:47.363055 kubelet[2799]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 05:20:47.363055 kubelet[2799]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 05:20:47.363055 kubelet[2799]: I1213 05:20:47.362887 2799 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 05:20:47.377390 kubelet[2799]: I1213 05:20:47.377307 2799 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 05:20:47.377390 kubelet[2799]: I1213 05:20:47.377351 2799 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 05:20:47.377728 kubelet[2799]: I1213 05:20:47.377643 2799 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 05:20:47.381831 kubelet[2799]: I1213 05:20:47.381780 2799 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 05:20:47.385226 kubelet[2799]: I1213 05:20:47.384688 2799 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 05:20:47.398816 kubelet[2799]: I1213 05:20:47.397527 2799 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 05:20:47.398816 kubelet[2799]: I1213 05:20:47.397946 2799 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 05:20:47.398816 kubelet[2799]: I1213 05:20:47.398025 2799 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-ch81y.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 05:20:47.398816 kubelet[2799]: I1213 05:20:47.398316 2799 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 05:20:47.399542 kubelet[2799]: I1213 05:20:47.398333 2799 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 05:20:47.399542 kubelet[2799]: I1213 05:20:47.398392 2799 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 05:20:47.399542 kubelet[2799]: I1213 05:20:47.398555 2799 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 05:20:47.399542 kubelet[2799]: I1213 05:20:47.398588 2799 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 05:20:47.399542 kubelet[2799]: I1213 05:20:47.398627 2799 kubelet.go:312] "Adding apiserver pod source"
Dec 13 05:20:47.399542 kubelet[2799]: I1213 05:20:47.398653 2799 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 05:20:47.404499 kubelet[2799]: I1213 05:20:47.404475 2799 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 05:20:47.404820 kubelet[2799]: I1213 05:20:47.404798 2799 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 05:20:47.405538 kubelet[2799]: I1213 05:20:47.405517 2799 server.go:1264] "Started kubelet"
Dec 13 05:20:47.406373 kubelet[2799]: I1213 05:20:47.406312 2799 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 05:20:47.406688 kubelet[2799]: I1213 05:20:47.406635 2799 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 05:20:47.407215 kubelet[2799]: I1213 05:20:47.407190 2799 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 05:20:47.410615 kubelet[2799]: I1213 05:20:47.410563 2799 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 05:20:47.418522 kubelet[2799]: I1213 05:20:47.418489 2799 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 05:20:47.429124 kubelet[2799]: I1213 05:20:47.429079 2799 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 05:20:47.429728 kubelet[2799]: I1213 05:20:47.429612 2799 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 05:20:47.430779 kubelet[2799]: I1213 05:20:47.430752 2799 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 05:20:47.442666 kubelet[2799]: I1213 05:20:47.442626 2799 factory.go:221] Registration of the containerd container factory successfully
Dec 13 05:20:47.442666 kubelet[2799]: I1213 05:20:47.442658 2799 factory.go:221] Registration of the systemd container factory successfully
Dec 13 05:20:47.442896 kubelet[2799]: I1213 05:20:47.442770 2799 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 05:20:47.443258 kubelet[2799]: I1213 05:20:47.443085 2799 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 05:20:47.445615 kubelet[2799]: I1213 05:20:47.444804 2799 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 05:20:47.445615 kubelet[2799]: I1213 05:20:47.444852 2799 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 05:20:47.445615 kubelet[2799]: I1213 05:20:47.444883 2799 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 05:20:47.445615 kubelet[2799]: E1213 05:20:47.444957 2799 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 05:20:47.522002 kubelet[2799]: I1213 05:20:47.521873 2799 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 05:20:47.522611 kubelet[2799]: I1213 05:20:47.522265 2799 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 05:20:47.522611 kubelet[2799]: I1213 05:20:47.522305 2799 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 05:20:47.524138 kubelet[2799]: I1213 05:20:47.524039 2799 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 05:20:47.524138 kubelet[2799]: I1213 05:20:47.524065 2799 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 05:20:47.524138 kubelet[2799]: I1213 05:20:47.524112 2799 policy_none.go:49] "None policy: Start"
Dec 13 05:20:47.527142 kubelet[2799]: I1213 05:20:47.526263 2799 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 05:20:47.527142 kubelet[2799]: I1213 05:20:47.526299 2799 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 05:20:47.527142 kubelet[2799]: I1213 05:20:47.526527 2799 state_mem.go:75] "Updated machine memory state"
Dec 13 05:20:47.541959 kubelet[2799]: I1213 05:20:47.541823 2799 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 05:20:47.543392 kubelet[2799]: I1213 05:20:47.543337 2799 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 05:20:47.544475 kubelet[2799]: I1213 05:20:47.544454 2799 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 05:20:47.546583 kubelet[2799]: I1213 05:20:47.545227 2799 topology_manager.go:215] "Topology Admit Handler" podUID="bb72411f83caa2db9a0da4ebb56bae07" podNamespace="kube-system" podName="kube-apiserver-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.547483 kubelet[2799]: I1213 05:20:47.547454 2799 kubelet_node_status.go:73] "Attempting to register node" node="srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.550245 sudo[2828]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 05:20:47.550785 sudo[2828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 13 05:20:47.554763 kubelet[2799]: I1213 05:20:47.554608 2799 topology_manager.go:215] "Topology Admit Handler" podUID="fb084671e490a455b4043376a2aad30c" podNamespace="kube-system" podName="kube-controller-manager-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.555030 kubelet[2799]: I1213 05:20:47.554897 2799 topology_manager.go:215] "Topology Admit Handler" podUID="a214ab7c918b6b67957afb0ee10ff16b" podNamespace="kube-system" podName="kube-scheduler-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.578434 kubelet[2799]: W1213 05:20:47.577098 2799 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 05:20:47.578434 kubelet[2799]: W1213 05:20:47.577245 2799 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 05:20:47.582389 kubelet[2799]: W1213 05:20:47.581813 2799 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 05:20:47.582957 kubelet[2799]: E1213 05:20:47.582881 2799 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-ch81y.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.583332 kubelet[2799]: I1213 05:20:47.582685 2799 kubelet_node_status.go:112] "Node was previously registered" node="srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.584356 kubelet[2799]: I1213 05:20:47.584264 2799 kubelet_node_status.go:76] "Successfully registered node" node="srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.633163 kubelet[2799]: I1213 05:20:47.633048 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb084671e490a455b4043376a2aad30c-ca-certs\") pod \"kube-controller-manager-srv-ch81y.gb1.brightbox.com\" (UID: \"fb084671e490a455b4043376a2aad30c\") " pod="kube-system/kube-controller-manager-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.633163 kubelet[2799]: I1213 05:20:47.633130 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fb084671e490a455b4043376a2aad30c-flexvolume-dir\") pod \"kube-controller-manager-srv-ch81y.gb1.brightbox.com\" (UID: \"fb084671e490a455b4043376a2aad30c\") " pod="kube-system/kube-controller-manager-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.633163 kubelet[2799]: I1213 05:20:47.633164 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb084671e490a455b4043376a2aad30c-kubeconfig\") pod \"kube-controller-manager-srv-ch81y.gb1.brightbox.com\" (UID: \"fb084671e490a455b4043376a2aad30c\") " pod="kube-system/kube-controller-manager-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.633742 kubelet[2799]: I1213 05:20:47.633195 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a214ab7c918b6b67957afb0ee10ff16b-kubeconfig\") pod \"kube-scheduler-srv-ch81y.gb1.brightbox.com\" (UID: \"a214ab7c918b6b67957afb0ee10ff16b\") " pod="kube-system/kube-scheduler-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.633742 kubelet[2799]: I1213 05:20:47.633223 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb72411f83caa2db9a0da4ebb56bae07-ca-certs\") pod \"kube-apiserver-srv-ch81y.gb1.brightbox.com\" (UID: \"bb72411f83caa2db9a0da4ebb56bae07\") " pod="kube-system/kube-apiserver-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.633742 kubelet[2799]: I1213 05:20:47.633275 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb72411f83caa2db9a0da4ebb56bae07-k8s-certs\") pod \"kube-apiserver-srv-ch81y.gb1.brightbox.com\" (UID: \"bb72411f83caa2db9a0da4ebb56bae07\") " pod="kube-system/kube-apiserver-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.633742 kubelet[2799]: I1213 05:20:47.633308 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb72411f83caa2db9a0da4ebb56bae07-usr-share-ca-certificates\") pod \"kube-apiserver-srv-ch81y.gb1.brightbox.com\" (UID: \"bb72411f83caa2db9a0da4ebb56bae07\") " pod="kube-system/kube-apiserver-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.633742 kubelet[2799]: I1213 05:20:47.633368 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb084671e490a455b4043376a2aad30c-k8s-certs\") pod \"kube-controller-manager-srv-ch81y.gb1.brightbox.com\" (UID: \"fb084671e490a455b4043376a2aad30c\") " pod="kube-system/kube-controller-manager-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:47.634029 kubelet[2799]: I1213 05:20:47.633431 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb084671e490a455b4043376a2aad30c-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-ch81y.gb1.brightbox.com\" (UID: \"fb084671e490a455b4043376a2aad30c\") " pod="kube-system/kube-controller-manager-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:48.365582 sudo[2828]: pam_unix(sudo:session): session closed for user root
Dec 13 05:20:48.399877 kubelet[2799]: I1213 05:20:48.399801 2799 apiserver.go:52] "Watching apiserver"
Dec 13 05:20:48.430284 kubelet[2799]: I1213 05:20:48.430186 2799 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 05:20:48.499846 kubelet[2799]: W1213 05:20:48.499155 2799 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 05:20:48.499846 kubelet[2799]: E1213 05:20:48.499262 2799 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-ch81y.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:48.507180 kubelet[2799]: W1213 05:20:48.507135 2799 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 05:20:48.509305 kubelet[2799]: E1213 05:20:48.507646 2799 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-ch81y.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-ch81y.gb1.brightbox.com"
Dec 13 05:20:48.519935 kubelet[2799]: I1213 05:20:48.517729 2799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-ch81y.gb1.brightbox.com" podStartSLOduration=1.5176945480000001 podStartE2EDuration="1.517694548s" podCreationTimestamp="2024-12-13 05:20:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:20:48.517335712 +0000 UTC m=+1.232526628" watchObservedRunningTime="2024-12-13 05:20:48.517694548 +0000 UTC m=+1.232885454"
Dec 13 05:20:48.552424 kubelet[2799]: I1213 05:20:48.552082 2799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-ch81y.gb1.brightbox.com" podStartSLOduration=2.552054638 podStartE2EDuration="2.552054638s" podCreationTimestamp="2024-12-13 05:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:20:48.551977846 +0000 UTC m=+1.267168772" watchObservedRunningTime="2024-12-13 05:20:48.552054638 +0000 UTC m=+1.267245533"
Dec 13 05:20:48.552961 kubelet[2799]: I1213 05:20:48.552594 2799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-ch81y.gb1.brightbox.com" podStartSLOduration=1.552385503 podStartE2EDuration="1.552385503s" podCreationTimestamp="2024-12-13 05:20:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:20:48.53383936 +0000 UTC m=+1.249030269" watchObservedRunningTime="2024-12-13 05:20:48.552385503 +0000 UTC m=+1.267576431"
Dec 13 05:20:50.811503 sudo[1825]: pam_unix(sudo:session): session closed for user root
Dec 13 05:20:50.956026 sshd[1812]: pam_unix(sshd:session): session closed for user core
Dec 13 05:20:50.962055 systemd[1]: sshd@16-10.244.19.70:22-147.75.109.163:57304.service: Deactivated successfully.
Dec 13 05:20:50.965479 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 05:20:50.966013 systemd[1]: session-11.scope: Consumed 6.949s CPU time, 187.4M memory peak, 0B memory swap peak.
Dec 13 05:20:50.969848 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit.
Dec 13 05:20:50.973381 systemd-logind[1485]: Removed session 11.
Dec 13 05:21:01.301478 kubelet[2799]: I1213 05:21:01.301191 2799 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 05:21:01.302526 kubelet[2799]: I1213 05:21:01.302390 2799 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 05:21:01.302594 containerd[1505]: time="2024-12-13T05:21:01.301886480Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 05:21:02.170740 kubelet[2799]: I1213 05:21:02.169941 2799 topology_manager.go:215] "Topology Admit Handler" podUID="362fac1a-51f7-4ad7-960d-ffa87c711b9d" podNamespace="kube-system" podName="kube-proxy-b5ldh"
Dec 13 05:21:02.182871 kubelet[2799]: W1213 05:21:02.182813 2799 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-ch81y.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-ch81y.gb1.brightbox.com' and this object
Dec 13 05:21:02.183056 kubelet[2799]: E1213 05:21:02.182890 2799 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-ch81y.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-ch81y.gb1.brightbox.com' and this object
Dec 13 05:21:02.183334 kubelet[2799]: W1213 05:21:02.183307 2799 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:srv-ch81y.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-ch81y.gb1.brightbox.com' and this object
Dec 13 05:21:02.183430 kubelet[2799]: E1213 05:21:02.183341 2799 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:srv-ch81y.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-ch81y.gb1.brightbox.com' and this object
Dec 13 05:21:02.189054 systemd[1]: Created slice kubepods-besteffort-pod362fac1a_51f7_4ad7_960d_ffa87c711b9d.slice - libcontainer container kubepods-besteffort-pod362fac1a_51f7_4ad7_960d_ffa87c711b9d.slice.
Dec 13 05:21:02.216132 kubelet[2799]: I1213 05:21:02.215398 2799 topology_manager.go:215] "Topology Admit Handler" podUID="6276f2e1-98e8-4d0a-912c-d96a7e4a7546" podNamespace="kube-system" podName="cilium-2wcz5"
Dec 13 05:21:02.228587 systemd[1]: Created slice kubepods-burstable-pod6276f2e1_98e8_4d0a_912c_d96a7e4a7546.slice - libcontainer container kubepods-burstable-pod6276f2e1_98e8_4d0a_912c_d96a7e4a7546.slice.
Dec 13 05:21:02.318469 kubelet[2799]: I1213 05:21:02.318240 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cilium-config-path\") pod \"cilium-2wcz5\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " pod="kube-system/cilium-2wcz5"
Dec 13 05:21:02.318469 kubelet[2799]: I1213 05:21:02.318476 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-hostproc\") pod \"cilium-2wcz5\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " pod="kube-system/cilium-2wcz5"
Dec 13 05:21:02.319241 kubelet[2799]: I1213 05:21:02.318511 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-xtables-lock\") pod \"cilium-2wcz5\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " pod="kube-system/cilium-2wcz5"
Dec 13 05:21:02.319241 kubelet[2799]: I1213 05:21:02.318559 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-clustermesh-secrets\") pod \"cilium-2wcz5\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " pod="kube-system/cilium-2wcz5"
Dec 13 05:21:02.319241 kubelet[2799]: I1213 05:21:02.318589 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-bpf-maps\") pod \"cilium-2wcz5\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " pod="kube-system/cilium-2wcz5"
Dec 13 05:21:02.319241 kubelet[2799]: I1213 05:21:02.318635 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/362fac1a-51f7-4ad7-960d-ffa87c711b9d-kube-proxy\") pod \"kube-proxy-b5ldh\" (UID: \"362fac1a-51f7-4ad7-960d-ffa87c711b9d\") " pod="kube-system/kube-proxy-b5ldh"
Dec 13 05:21:02.319241 kubelet[2799]: I1213 05:21:02.318746 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/362fac1a-51f7-4ad7-960d-ffa87c711b9d-lib-modules\") pod \"kube-proxy-b5ldh\" (UID: \"362fac1a-51f7-4ad7-960d-ffa87c711b9d\") " pod="kube-system/kube-proxy-b5ldh"
Dec 13 05:21:02.319241 kubelet[2799]: I1213 05:21:02.318802 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh5tl\" (UniqueName: \"kubernetes.io/projected/362fac1a-51f7-4ad7-960d-ffa87c711b9d-kube-api-access-xh5tl\") pod \"kube-proxy-b5ldh\" (UID: \"362fac1a-51f7-4ad7-960d-ffa87c711b9d\") " pod="kube-system/kube-proxy-b5ldh"
Dec 13 05:21:02.319519 kubelet[2799]: I1213 05:21:02.318838 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cilium-run\") pod \"cilium-2wcz5\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " pod="kube-system/cilium-2wcz5"
Dec 13 05:21:02.319519 kubelet[2799]: I1213 05:21:02.318887 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjbn9\" (UniqueName: \"kubernetes.io/projected/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-kube-api-access-rjbn9\") pod \"cilium-2wcz5\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " pod="kube-system/cilium-2wcz5"
Dec 13 05:21:02.319519 kubelet[2799]: I1213 05:21:02.318919 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-hubble-tls\") pod \"cilium-2wcz5\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " pod="kube-system/cilium-2wcz5"
Dec 13 05:21:02.319519 kubelet[2799]: I1213 05:21:02.318968 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/362fac1a-51f7-4ad7-960d-ffa87c711b9d-xtables-lock\") pod \"kube-proxy-b5ldh\" (UID: \"362fac1a-51f7-4ad7-960d-ffa87c711b9d\") " pod="kube-system/kube-proxy-b5ldh"
Dec 13 05:21:02.319519 kubelet[2799]: I1213 05:21:02.318996 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cni-path\") pod \"cilium-2wcz5\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " pod="kube-system/cilium-2wcz5"
Dec 13 05:21:02.319519 kubelet[2799]: I1213 05:21:02.319071 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cilium-cgroup\") pod \"cilium-2wcz5\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " pod="kube-system/cilium-2wcz5"
Dec 13 05:21:02.319785 kubelet[2799]: I1213 05:21:02.319159 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-etc-cni-netd\") pod \"cilium-2wcz5\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " pod="kube-system/cilium-2wcz5"
Dec 13 05:21:02.319785 kubelet[2799]: I1213 05:21:02.319191 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-lib-modules\") pod \"cilium-2wcz5\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " pod="kube-system/cilium-2wcz5"
Dec 13 05:21:02.319785 kubelet[2799]: I1213 05:21:02.319239 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-host-proc-sys-net\") pod \"cilium-2wcz5\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " pod="kube-system/cilium-2wcz5"
Dec 13 05:21:02.319785 kubelet[2799]: I1213 05:21:02.319280 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-host-proc-sys-kernel\") pod \"cilium-2wcz5\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " pod="kube-system/cilium-2wcz5"
Dec 13 05:21:02.395915 kubelet[2799]: I1213 05:21:02.393603 2799 topology_manager.go:215] "Topology Admit Handler" podUID="bea47212-2600-4db7-953e-9eb3203f49f6" podNamespace="kube-system" podName="cilium-operator-599987898-7rdp5"
Dec 13 05:21:02.408463 systemd[1]: Created slice kubepods-besteffort-podbea47212_2600_4db7_953e_9eb3203f49f6.slice - libcontainer container kubepods-besteffort-podbea47212_2600_4db7_953e_9eb3203f49f6.slice.
Dec 13 05:21:02.521352 kubelet[2799]: I1213 05:21:02.521180 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfndz\" (UniqueName: \"kubernetes.io/projected/bea47212-2600-4db7-953e-9eb3203f49f6-kube-api-access-tfndz\") pod \"cilium-operator-599987898-7rdp5\" (UID: \"bea47212-2600-4db7-953e-9eb3203f49f6\") " pod="kube-system/cilium-operator-599987898-7rdp5" Dec 13 05:21:02.521352 kubelet[2799]: I1213 05:21:02.521267 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bea47212-2600-4db7-953e-9eb3203f49f6-cilium-config-path\") pod \"cilium-operator-599987898-7rdp5\" (UID: \"bea47212-2600-4db7-953e-9eb3203f49f6\") " pod="kube-system/cilium-operator-599987898-7rdp5" Dec 13 05:21:03.314001 containerd[1505]: time="2024-12-13T05:21:03.313922847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7rdp5,Uid:bea47212-2600-4db7-953e-9eb3203f49f6,Namespace:kube-system,Attempt:0,}" Dec 13 05:21:03.351135 containerd[1505]: time="2024-12-13T05:21:03.350653855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:21:03.351135 containerd[1505]: time="2024-12-13T05:21:03.350745823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:21:03.351135 containerd[1505]: time="2024-12-13T05:21:03.350771528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:21:03.351135 containerd[1505]: time="2024-12-13T05:21:03.350903563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:21:03.380359 systemd[1]: Started cri-containerd-5832df70b035b9e3dd9bc6360a568ebb2ec3ca4fa637ee911deb5ade9085e2df.scope - libcontainer container 5832df70b035b9e3dd9bc6360a568ebb2ec3ca4fa637ee911deb5ade9085e2df. Dec 13 05:21:03.428884 kubelet[2799]: E1213 05:21:03.427975 2799 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 13 05:21:03.428884 kubelet[2799]: E1213 05:21:03.428137 2799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/362fac1a-51f7-4ad7-960d-ffa87c711b9d-kube-proxy podName:362fac1a-51f7-4ad7-960d-ffa87c711b9d nodeName:}" failed. No retries permitted until 2024-12-13 05:21:03.928073624 +0000 UTC m=+16.643264525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/362fac1a-51f7-4ad7-960d-ffa87c711b9d-kube-proxy") pod "kube-proxy-b5ldh" (UID: "362fac1a-51f7-4ad7-960d-ffa87c711b9d") : failed to sync configmap cache: timed out waiting for the condition Dec 13 05:21:03.441548 containerd[1505]: time="2024-12-13T05:21:03.440687317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wcz5,Uid:6276f2e1-98e8-4d0a-912c-d96a7e4a7546,Namespace:kube-system,Attempt:0,}" Dec 13 05:21:03.471339 containerd[1505]: time="2024-12-13T05:21:03.469536435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7rdp5,Uid:bea47212-2600-4db7-953e-9eb3203f49f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5832df70b035b9e3dd9bc6360a568ebb2ec3ca4fa637ee911deb5ade9085e2df\"" Dec 13 05:21:03.478565 containerd[1505]: time="2024-12-13T05:21:03.478363486Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 05:21:03.492642 containerd[1505]: time="2024-12-13T05:21:03.492417759Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:21:03.492642 containerd[1505]: time="2024-12-13T05:21:03.492503089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:21:03.492642 containerd[1505]: time="2024-12-13T05:21:03.492541043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:21:03.493004 containerd[1505]: time="2024-12-13T05:21:03.492686776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:21:03.524311 systemd[1]: Started cri-containerd-c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17.scope - libcontainer container c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17. Dec 13 05:21:03.570557 containerd[1505]: time="2024-12-13T05:21:03.569661739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wcz5,Uid:6276f2e1-98e8-4d0a-912c-d96a7e4a7546,Namespace:kube-system,Attempt:0,} returns sandbox id \"c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17\"" Dec 13 05:21:04.002277 containerd[1505]: time="2024-12-13T05:21:04.001963827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b5ldh,Uid:362fac1a-51f7-4ad7-960d-ffa87c711b9d,Namespace:kube-system,Attempt:0,}" Dec 13 05:21:04.053778 containerd[1505]: time="2024-12-13T05:21:04.041083598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:21:04.053778 containerd[1505]: time="2024-12-13T05:21:04.042351231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:21:04.053778 containerd[1505]: time="2024-12-13T05:21:04.042373642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:21:04.053778 containerd[1505]: time="2024-12-13T05:21:04.042517413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:21:04.087489 systemd[1]: Started cri-containerd-5146f1022eb3cfa6b8f6763de7b8c01b86bad54d398a46ed9c3e7e85ffb711bc.scope - libcontainer container 5146f1022eb3cfa6b8f6763de7b8c01b86bad54d398a46ed9c3e7e85ffb711bc. Dec 13 05:21:04.129042 containerd[1505]: time="2024-12-13T05:21:04.128415066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b5ldh,Uid:362fac1a-51f7-4ad7-960d-ffa87c711b9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5146f1022eb3cfa6b8f6763de7b8c01b86bad54d398a46ed9c3e7e85ffb711bc\"" Dec 13 05:21:04.136181 containerd[1505]: time="2024-12-13T05:21:04.136128575Z" level=info msg="CreateContainer within sandbox \"5146f1022eb3cfa6b8f6763de7b8c01b86bad54d398a46ed9c3e7e85ffb711bc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 05:21:04.173450 containerd[1505]: time="2024-12-13T05:21:04.173252638Z" level=info msg="CreateContainer within sandbox \"5146f1022eb3cfa6b8f6763de7b8c01b86bad54d398a46ed9c3e7e85ffb711bc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c71b105fe6a78364fe6ec52c9ba43a0aa1b113d05e5a3314a1d4c5b98eb395be\"" Dec 13 05:21:04.176202 containerd[1505]: time="2024-12-13T05:21:04.174748410Z" level=info msg="StartContainer for \"c71b105fe6a78364fe6ec52c9ba43a0aa1b113d05e5a3314a1d4c5b98eb395be\"" Dec 13 05:21:04.219514 systemd[1]: Started cri-containerd-c71b105fe6a78364fe6ec52c9ba43a0aa1b113d05e5a3314a1d4c5b98eb395be.scope - libcontainer container 
c71b105fe6a78364fe6ec52c9ba43a0aa1b113d05e5a3314a1d4c5b98eb395be. Dec 13 05:21:04.264344 containerd[1505]: time="2024-12-13T05:21:04.264139448Z" level=info msg="StartContainer for \"c71b105fe6a78364fe6ec52c9ba43a0aa1b113d05e5a3314a1d4c5b98eb395be\" returns successfully" Dec 13 05:21:06.561242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2853141121.mount: Deactivated successfully. Dec 13 05:21:07.435044 containerd[1505]: time="2024-12-13T05:21:07.434969918Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:21:07.437345 containerd[1505]: time="2024-12-13T05:21:07.437278427Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907177" Dec 13 05:21:07.438956 containerd[1505]: time="2024-12-13T05:21:07.438898173Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:21:07.442877 containerd[1505]: time="2024-12-13T05:21:07.442744492Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.963564373s" Dec 13 05:21:07.442877 containerd[1505]: time="2024-12-13T05:21:07.442817823Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference 
\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 05:21:07.458810 containerd[1505]: time="2024-12-13T05:21:07.457986808Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 05:21:07.460975 containerd[1505]: time="2024-12-13T05:21:07.460746227Z" level=info msg="CreateContainer within sandbox \"5832df70b035b9e3dd9bc6360a568ebb2ec3ca4fa637ee911deb5ade9085e2df\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 05:21:07.487683 kubelet[2799]: I1213 05:21:07.486129 2799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b5ldh" podStartSLOduration=5.486063972 podStartE2EDuration="5.486063972s" podCreationTimestamp="2024-12-13 05:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:21:04.603155536 +0000 UTC m=+17.318346452" watchObservedRunningTime="2024-12-13 05:21:07.486063972 +0000 UTC m=+20.201254879" Dec 13 05:21:07.499396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1632152189.mount: Deactivated successfully. Dec 13 05:21:07.505807 containerd[1505]: time="2024-12-13T05:21:07.505558181Z" level=info msg="CreateContainer within sandbox \"5832df70b035b9e3dd9bc6360a568ebb2ec3ca4fa637ee911deb5ade9085e2df\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15\"" Dec 13 05:21:07.506743 containerd[1505]: time="2024-12-13T05:21:07.506635718Z" level=info msg="StartContainer for \"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15\"" Dec 13 05:21:07.571179 systemd[1]: run-containerd-runc-k8s.io-208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15-runc.sGdQ9L.mount: Deactivated successfully. 
Dec 13 05:21:07.589400 systemd[1]: Started cri-containerd-208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15.scope - libcontainer container 208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15. Dec 13 05:21:07.642799 containerd[1505]: time="2024-12-13T05:21:07.642476817Z" level=info msg="StartContainer for \"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15\" returns successfully" Dec 13 05:21:15.123714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4113288576.mount: Deactivated successfully. Dec 13 05:21:18.748848 containerd[1505]: time="2024-12-13T05:21:18.748748310Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:21:18.752036 containerd[1505]: time="2024-12-13T05:21:18.751956496Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735283" Dec 13 05:21:18.753352 containerd[1505]: time="2024-12-13T05:21:18.753199082Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:21:18.757474 containerd[1505]: time="2024-12-13T05:21:18.757230758Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.296358283s" Dec 13 05:21:18.757474 containerd[1505]: time="2024-12-13T05:21:18.757281061Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 05:21:18.763387 containerd[1505]: time="2024-12-13T05:21:18.763290498Z" level=info msg="CreateContainer within sandbox \"c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 05:21:18.862408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4182484445.mount: Deactivated successfully. Dec 13 05:21:18.865217 containerd[1505]: time="2024-12-13T05:21:18.865157544Z" level=info msg="CreateContainer within sandbox \"c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547\"" Dec 13 05:21:18.866300 containerd[1505]: time="2024-12-13T05:21:18.866093349Z" level=info msg="StartContainer for \"791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547\"" Dec 13 05:21:18.991338 systemd[1]: Started cri-containerd-791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547.scope - libcontainer container 791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547. Dec 13 05:21:19.042278 containerd[1505]: time="2024-12-13T05:21:19.041557643Z" level=info msg="StartContainer for \"791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547\" returns successfully" Dec 13 05:21:19.059530 systemd[1]: cri-containerd-791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547.scope: Deactivated successfully. 
Dec 13 05:21:19.276077 containerd[1505]: time="2024-12-13T05:21:19.265847398Z" level=info msg="shim disconnected" id=791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547 namespace=k8s.io Dec 13 05:21:19.276077 containerd[1505]: time="2024-12-13T05:21:19.275875956Z" level=warning msg="cleaning up after shim disconnected" id=791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547 namespace=k8s.io Dec 13 05:21:19.276077 containerd[1505]: time="2024-12-13T05:21:19.275917999Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 05:21:19.681767 containerd[1505]: time="2024-12-13T05:21:19.681693071Z" level=info msg="CreateContainer within sandbox \"c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 05:21:19.715997 containerd[1505]: time="2024-12-13T05:21:19.715347803Z" level=info msg="CreateContainer within sandbox \"c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad\"" Dec 13 05:21:19.720416 containerd[1505]: time="2024-12-13T05:21:19.716460965Z" level=info msg="StartContainer for \"71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad\"" Dec 13 05:21:19.721032 kubelet[2799]: I1213 05:21:19.717423 2799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-7rdp5" podStartSLOduration=13.738770725 podStartE2EDuration="17.717385865s" podCreationTimestamp="2024-12-13 05:21:02 +0000 UTC" firstStartedPulling="2024-12-13 05:21:03.474007815 +0000 UTC m=+16.189198712" lastFinishedPulling="2024-12-13 05:21:07.452622944 +0000 UTC m=+20.167813852" observedRunningTime="2024-12-13 05:21:08.671317953 +0000 UTC m=+21.386508861" watchObservedRunningTime="2024-12-13 05:21:19.717385865 +0000 UTC m=+32.432576775" Dec 13 
05:21:19.776432 systemd[1]: Started cri-containerd-71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad.scope - libcontainer container 71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad. Dec 13 05:21:19.825955 containerd[1505]: time="2024-12-13T05:21:19.825866947Z" level=info msg="StartContainer for \"71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad\" returns successfully" Dec 13 05:21:19.856627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547-rootfs.mount: Deactivated successfully. Dec 13 05:21:19.863277 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 05:21:19.863756 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 05:21:19.863958 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 05:21:19.873719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 05:21:19.876400 systemd[1]: cri-containerd-71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad.scope: Deactivated successfully. Dec 13 05:21:19.929071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad-rootfs.mount: Deactivated successfully. Dec 13 05:21:19.934382 containerd[1505]: time="2024-12-13T05:21:19.933298101Z" level=info msg="shim disconnected" id=71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad namespace=k8s.io Dec 13 05:21:19.934382 containerd[1505]: time="2024-12-13T05:21:19.934120415Z" level=warning msg="cleaning up after shim disconnected" id=71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad namespace=k8s.io Dec 13 05:21:19.934382 containerd[1505]: time="2024-12-13T05:21:19.934147436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 05:21:19.960280 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 05:21:20.691806 containerd[1505]: time="2024-12-13T05:21:20.691724138Z" level=info msg="CreateContainer within sandbox \"c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 05:21:20.749231 containerd[1505]: time="2024-12-13T05:21:20.748867017Z" level=info msg="CreateContainer within sandbox \"c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581\"" Dec 13 05:21:20.750622 containerd[1505]: time="2024-12-13T05:21:20.750118976Z" level=info msg="StartContainer for \"3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581\"" Dec 13 05:21:20.805394 systemd[1]: Started cri-containerd-3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581.scope - libcontainer container 3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581. Dec 13 05:21:20.859586 containerd[1505]: time="2024-12-13T05:21:20.859517384Z" level=info msg="StartContainer for \"3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581\" returns successfully" Dec 13 05:21:20.870939 systemd[1]: cri-containerd-3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581.scope: Deactivated successfully. Dec 13 05:21:20.906145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581-rootfs.mount: Deactivated successfully. 
Dec 13 05:21:20.909130 containerd[1505]: time="2024-12-13T05:21:20.909009586Z" level=info msg="shim disconnected" id=3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581 namespace=k8s.io Dec 13 05:21:20.909700 containerd[1505]: time="2024-12-13T05:21:20.909447340Z" level=warning msg="cleaning up after shim disconnected" id=3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581 namespace=k8s.io Dec 13 05:21:20.909700 containerd[1505]: time="2024-12-13T05:21:20.909475556Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 05:21:20.940089 containerd[1505]: time="2024-12-13T05:21:20.939987486Z" level=warning msg="cleanup warnings time=\"2024-12-13T05:21:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 05:21:21.693986 containerd[1505]: time="2024-12-13T05:21:21.693557468Z" level=info msg="CreateContainer within sandbox \"c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 05:21:21.715077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603837481.mount: Deactivated successfully. 
Dec 13 05:21:21.718365 containerd[1505]: time="2024-12-13T05:21:21.718291638Z" level=info msg="CreateContainer within sandbox \"c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931\"" Dec 13 05:21:21.720446 containerd[1505]: time="2024-12-13T05:21:21.720410547Z" level=info msg="StartContainer for \"3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931\"" Dec 13 05:21:21.768466 systemd[1]: Started cri-containerd-3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931.scope - libcontainer container 3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931. Dec 13 05:21:21.818539 systemd[1]: cri-containerd-3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931.scope: Deactivated successfully. Dec 13 05:21:21.822011 containerd[1505]: time="2024-12-13T05:21:21.821957647Z" level=info msg="StartContainer for \"3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931\" returns successfully" Dec 13 05:21:21.864899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931-rootfs.mount: Deactivated successfully. 
Dec 13 05:21:21.869785 containerd[1505]: time="2024-12-13T05:21:21.869649240Z" level=info msg="shim disconnected" id=3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931 namespace=k8s.io Dec 13 05:21:21.870684 containerd[1505]: time="2024-12-13T05:21:21.869898848Z" level=warning msg="cleaning up after shim disconnected" id=3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931 namespace=k8s.io Dec 13 05:21:21.870684 containerd[1505]: time="2024-12-13T05:21:21.869920968Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 05:21:22.697379 containerd[1505]: time="2024-12-13T05:21:22.697071384Z" level=info msg="CreateContainer within sandbox \"c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 05:21:22.745069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457689902.mount: Deactivated successfully. Dec 13 05:21:22.753192 containerd[1505]: time="2024-12-13T05:21:22.752923863Z" level=info msg="CreateContainer within sandbox \"c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523\"" Dec 13 05:21:22.754760 containerd[1505]: time="2024-12-13T05:21:22.754589280Z" level=info msg="StartContainer for \"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523\"" Dec 13 05:21:22.797402 systemd[1]: Started cri-containerd-313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523.scope - libcontainer container 313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523. 
Dec 13 05:21:22.843052 containerd[1505]: time="2024-12-13T05:21:22.842970466Z" level=info msg="StartContainer for \"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523\" returns successfully"
Dec 13 05:21:23.217563 kubelet[2799]: I1213 05:21:23.217513 2799 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 05:21:23.301385 kubelet[2799]: I1213 05:21:23.301020 2799 topology_manager.go:215] "Topology Admit Handler" podUID="645b7802-c50e-460b-95d6-2b47bfa279b1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-72xt4"
Dec 13 05:21:23.317866 kubelet[2799]: I1213 05:21:23.315942 2799 topology_manager.go:215] "Topology Admit Handler" podUID="32ec62b7-f85f-41ff-af00-719b8411eadf" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7kslh"
Dec 13 05:21:23.329728 systemd[1]: Created slice kubepods-burstable-pod645b7802_c50e_460b_95d6_2b47bfa279b1.slice - libcontainer container kubepods-burstable-pod645b7802_c50e_460b_95d6_2b47bfa279b1.slice.
Dec 13 05:21:23.336783 kubelet[2799]: W1213 05:21:23.336262 2799 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-ch81y.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-ch81y.gb1.brightbox.com' and this object
Dec 13 05:21:23.337968 kubelet[2799]: E1213 05:21:23.337012 2799 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-ch81y.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-ch81y.gb1.brightbox.com' and this object
Dec 13 05:21:23.342403 systemd[1]: Created slice kubepods-burstable-pod32ec62b7_f85f_41ff_af00_719b8411eadf.slice - libcontainer container kubepods-burstable-pod32ec62b7_f85f_41ff_af00_719b8411eadf.slice.
Dec 13 05:21:23.396396 kubelet[2799]: I1213 05:21:23.396292 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32ec62b7-f85f-41ff-af00-719b8411eadf-config-volume\") pod \"coredns-7db6d8ff4d-7kslh\" (UID: \"32ec62b7-f85f-41ff-af00-719b8411eadf\") " pod="kube-system/coredns-7db6d8ff4d-7kslh"
Dec 13 05:21:23.397290 kubelet[2799]: I1213 05:21:23.396747 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f79q\" (UniqueName: \"kubernetes.io/projected/32ec62b7-f85f-41ff-af00-719b8411eadf-kube-api-access-6f79q\") pod \"coredns-7db6d8ff4d-7kslh\" (UID: \"32ec62b7-f85f-41ff-af00-719b8411eadf\") " pod="kube-system/coredns-7db6d8ff4d-7kslh"
Dec 13 05:21:23.397290 kubelet[2799]: I1213 05:21:23.396921 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cchst\" (UniqueName: \"kubernetes.io/projected/645b7802-c50e-460b-95d6-2b47bfa279b1-kube-api-access-cchst\") pod \"coredns-7db6d8ff4d-72xt4\" (UID: \"645b7802-c50e-460b-95d6-2b47bfa279b1\") " pod="kube-system/coredns-7db6d8ff4d-72xt4"
Dec 13 05:21:23.397610 kubelet[2799]: I1213 05:21:23.397188 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/645b7802-c50e-460b-95d6-2b47bfa279b1-config-volume\") pod \"coredns-7db6d8ff4d-72xt4\" (UID: \"645b7802-c50e-460b-95d6-2b47bfa279b1\") " pod="kube-system/coredns-7db6d8ff4d-72xt4"
Dec 13 05:21:23.728798 kubelet[2799]: I1213 05:21:23.728703 2799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2wcz5" podStartSLOduration=6.542796984 podStartE2EDuration="21.728678323s" podCreationTimestamp="2024-12-13 05:21:02 +0000 UTC" firstStartedPulling="2024-12-13 05:21:03.572712052 +0000 UTC m=+16.287902953" lastFinishedPulling="2024-12-13 05:21:18.758593391 +0000 UTC m=+31.473784292" observedRunningTime="2024-12-13 05:21:23.724375717 +0000 UTC m=+36.439566643" watchObservedRunningTime="2024-12-13 05:21:23.728678323 +0000 UTC m=+36.443869231"
Dec 13 05:21:24.540710 containerd[1505]: time="2024-12-13T05:21:24.540624474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-72xt4,Uid:645b7802-c50e-460b-95d6-2b47bfa279b1,Namespace:kube-system,Attempt:0,}"
Dec 13 05:21:24.552123 containerd[1505]: time="2024-12-13T05:21:24.552026185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7kslh,Uid:32ec62b7-f85f-41ff-af00-719b8411eadf,Namespace:kube-system,Attempt:0,}"
Dec 13 05:21:25.504631 systemd-networkd[1430]: cilium_host: Link UP
Dec 13 05:21:25.507414 systemd-networkd[1430]: cilium_net: Link UP
Dec 13 05:21:25.511957 systemd-networkd[1430]: cilium_net: Gained carrier
Dec 13 05:21:25.512343 systemd-networkd[1430]: cilium_host: Gained carrier
Dec 13 05:21:25.620350 systemd-networkd[1430]: cilium_host: Gained IPv6LL
Dec 13 05:21:25.701911 systemd-networkd[1430]: cilium_vxlan: Link UP
Dec 13 05:21:25.702181 systemd-networkd[1430]: cilium_vxlan: Gained carrier
Dec 13 05:21:25.859428 systemd-networkd[1430]: cilium_net: Gained IPv6LL
Dec 13 05:21:26.273149 kernel: NET: Registered PF_ALG protocol family
Dec 13 05:21:26.883441 systemd-networkd[1430]: cilium_vxlan: Gained IPv6LL
Dec 13 05:21:27.450884 systemd-networkd[1430]: lxc_health: Link UP
Dec 13 05:21:27.470813 systemd-networkd[1430]: lxc_health: Gained carrier
Dec 13 05:21:28.146447 systemd-networkd[1430]: lxc27c7089956ab: Link UP
Dec 13 05:21:28.163086 kernel: eth0: renamed from tmp310cf
Dec 13 05:21:28.176334 systemd-networkd[1430]: lxc27c7089956ab: Gained carrier
Dec 13 05:21:28.196849 systemd-networkd[1430]: lxcbc440b9bc488: Link UP
Dec 13 05:21:28.202024 kernel: eth0: renamed from tmp0ed51
Dec 13 05:21:28.213428 systemd-networkd[1430]: lxcbc440b9bc488: Gained carrier
Dec 13 05:21:28.931449 systemd-networkd[1430]: lxc_health: Gained IPv6LL
Dec 13 05:21:29.891590 systemd-networkd[1430]: lxcbc440b9bc488: Gained IPv6LL
Dec 13 05:21:30.021744 systemd-networkd[1430]: lxc27c7089956ab: Gained IPv6LL
Dec 13 05:21:32.084170 kubelet[2799]: I1213 05:21:32.083262 2799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 05:21:34.242138 containerd[1505]: time="2024-12-13T05:21:34.240557229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 05:21:34.242138 containerd[1505]: time="2024-12-13T05:21:34.241290924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 05:21:34.242138 containerd[1505]: time="2024-12-13T05:21:34.241381342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:21:34.242138 containerd[1505]: time="2024-12-13T05:21:34.241558180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:21:34.322053 systemd[1]: Started cri-containerd-310cfdf9850688f30accb1f78c4764738780611e894c737a0c51ac3cf4f9546a.scope - libcontainer container 310cfdf9850688f30accb1f78c4764738780611e894c737a0c51ac3cf4f9546a.
Dec 13 05:21:34.335932 containerd[1505]: time="2024-12-13T05:21:34.335377626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 05:21:34.335932 containerd[1505]: time="2024-12-13T05:21:34.335483015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 05:21:34.335932 containerd[1505]: time="2024-12-13T05:21:34.335547006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:21:34.335932 containerd[1505]: time="2024-12-13T05:21:34.335760662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:21:34.394356 systemd[1]: Started cri-containerd-0ed515cf8740417d3405e7a0db50c305c631f3d09d24e4376ce08e82c19f2939.scope - libcontainer container 0ed515cf8740417d3405e7a0db50c305c631f3d09d24e4376ce08e82c19f2939.
Dec 13 05:21:34.461472 containerd[1505]: time="2024-12-13T05:21:34.461347772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-72xt4,Uid:645b7802-c50e-460b-95d6-2b47bfa279b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"310cfdf9850688f30accb1f78c4764738780611e894c737a0c51ac3cf4f9546a\""
Dec 13 05:21:34.475398 containerd[1505]: time="2024-12-13T05:21:34.475164437Z" level=info msg="CreateContainer within sandbox \"310cfdf9850688f30accb1f78c4764738780611e894c737a0c51ac3cf4f9546a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 05:21:34.519552 containerd[1505]: time="2024-12-13T05:21:34.518780117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7kslh,Uid:32ec62b7-f85f-41ff-af00-719b8411eadf,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ed515cf8740417d3405e7a0db50c305c631f3d09d24e4376ce08e82c19f2939\""
Dec 13 05:21:34.521939 containerd[1505]: time="2024-12-13T05:21:34.521055236Z" level=info msg="CreateContainer within sandbox \"310cfdf9850688f30accb1f78c4764738780611e894c737a0c51ac3cf4f9546a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"451818d276f0bc5a6a7b3acfdf783142fa1b13a0d23dbf712a3877798d0c9541\""
Dec 13 05:21:34.523995 containerd[1505]: time="2024-12-13T05:21:34.523844995Z" level=info msg="StartContainer for \"451818d276f0bc5a6a7b3acfdf783142fa1b13a0d23dbf712a3877798d0c9541\""
Dec 13 05:21:34.530171 containerd[1505]: time="2024-12-13T05:21:34.529082133Z" level=info msg="CreateContainer within sandbox \"0ed515cf8740417d3405e7a0db50c305c631f3d09d24e4376ce08e82c19f2939\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 05:21:34.578147 containerd[1505]: time="2024-12-13T05:21:34.577410448Z" level=info msg="CreateContainer within sandbox \"0ed515cf8740417d3405e7a0db50c305c631f3d09d24e4376ce08e82c19f2939\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59c4f322614d427fc0b23bfdf740e03568fd20ecad5167676e206e764882d20f\""
Dec 13 05:21:34.582134 containerd[1505]: time="2024-12-13T05:21:34.581601946Z" level=info msg="StartContainer for \"59c4f322614d427fc0b23bfdf740e03568fd20ecad5167676e206e764882d20f\""
Dec 13 05:21:34.597599 systemd[1]: Started cri-containerd-451818d276f0bc5a6a7b3acfdf783142fa1b13a0d23dbf712a3877798d0c9541.scope - libcontainer container 451818d276f0bc5a6a7b3acfdf783142fa1b13a0d23dbf712a3877798d0c9541.
Dec 13 05:21:34.637369 systemd[1]: Started cri-containerd-59c4f322614d427fc0b23bfdf740e03568fd20ecad5167676e206e764882d20f.scope - libcontainer container 59c4f322614d427fc0b23bfdf740e03568fd20ecad5167676e206e764882d20f.
Dec 13 05:21:34.669878 containerd[1505]: time="2024-12-13T05:21:34.669471391Z" level=info msg="StartContainer for \"451818d276f0bc5a6a7b3acfdf783142fa1b13a0d23dbf712a3877798d0c9541\" returns successfully"
Dec 13 05:21:34.710837 containerd[1505]: time="2024-12-13T05:21:34.710750762Z" level=info msg="StartContainer for \"59c4f322614d427fc0b23bfdf740e03568fd20ecad5167676e206e764882d20f\" returns successfully"
Dec 13 05:21:34.772019 kubelet[2799]: I1213 05:21:34.770141 2799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7kslh" podStartSLOduration=32.770068012 podStartE2EDuration="32.770068012s" podCreationTimestamp="2024-12-13 05:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:21:34.764441468 +0000 UTC m=+47.479632382" watchObservedRunningTime="2024-12-13 05:21:34.770068012 +0000 UTC m=+47.485258920"
Dec 13 05:21:35.254416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3396293388.mount: Deactivated successfully.
Dec 13 05:21:35.771308 kubelet[2799]: I1213 05:21:35.770404 2799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-72xt4" podStartSLOduration=33.770377401 podStartE2EDuration="33.770377401s" podCreationTimestamp="2024-12-13 05:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:21:34.795753248 +0000 UTC m=+47.510944160" watchObservedRunningTime="2024-12-13 05:21:35.770377401 +0000 UTC m=+48.485568312"
Dec 13 05:21:56.912540 systemd[1]: Started sshd@22-10.244.19.70:22-147.75.109.163:56614.service - OpenSSH per-connection server daemon (147.75.109.163:56614).
Dec 13 05:21:57.856276 sshd[4178]: Accepted publickey for core from 147.75.109.163 port 56614 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:21:57.861080 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:21:57.871211 systemd-logind[1485]: New session 12 of user core.
Dec 13 05:21:57.879375 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 05:21:59.027804 sshd[4178]: pam_unix(sshd:session): session closed for user core
Dec 13 05:21:59.034025 systemd[1]: sshd@22-10.244.19.70:22-147.75.109.163:56614.service: Deactivated successfully.
Dec 13 05:21:59.037447 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 05:21:59.038964 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit.
Dec 13 05:21:59.040766 systemd-logind[1485]: Removed session 12.
Dec 13 05:22:04.216753 systemd[1]: Started sshd@23-10.244.19.70:22-147.75.109.163:56630.service - OpenSSH per-connection server daemon (147.75.109.163:56630).
Dec 13 05:22:05.163397 sshd[4194]: Accepted publickey for core from 147.75.109.163 port 56630 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:22:05.167021 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:22:05.177585 systemd-logind[1485]: New session 13 of user core.
Dec 13 05:22:05.183524 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 05:22:05.938737 sshd[4194]: pam_unix(sshd:session): session closed for user core
Dec 13 05:22:05.943394 systemd[1]: sshd@23-10.244.19.70:22-147.75.109.163:56630.service: Deactivated successfully.
Dec 13 05:22:05.944071 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit.
Dec 13 05:22:05.946805 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 05:22:05.949643 systemd-logind[1485]: Removed session 13.
Dec 13 05:22:11.099615 systemd[1]: Started sshd@24-10.244.19.70:22-147.75.109.163:45958.service - OpenSSH per-connection server daemon (147.75.109.163:45958).
Dec 13 05:22:11.993766 sshd[4210]: Accepted publickey for core from 147.75.109.163 port 45958 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:22:11.996305 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:22:12.003444 systemd-logind[1485]: New session 14 of user core.
Dec 13 05:22:12.009367 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 05:22:12.706639 sshd[4210]: pam_unix(sshd:session): session closed for user core
Dec 13 05:22:12.713310 systemd[1]: sshd@24-10.244.19.70:22-147.75.109.163:45958.service: Deactivated successfully.
Dec 13 05:22:12.716710 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 05:22:12.718869 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit.
Dec 13 05:22:12.721197 systemd-logind[1485]: Removed session 14.
Dec 13 05:22:17.866536 systemd[1]: Started sshd@25-10.244.19.70:22-147.75.109.163:60992.service - OpenSSH per-connection server daemon (147.75.109.163:60992).
Dec 13 05:22:18.782308 sshd[4224]: Accepted publickey for core from 147.75.109.163 port 60992 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:22:18.784648 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:22:18.792054 systemd-logind[1485]: New session 15 of user core.
Dec 13 05:22:18.797390 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 05:22:19.518760 sshd[4224]: pam_unix(sshd:session): session closed for user core
Dec 13 05:22:19.525420 systemd[1]: sshd@25-10.244.19.70:22-147.75.109.163:60992.service: Deactivated successfully.
Dec 13 05:22:19.528354 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 05:22:19.529973 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit.
Dec 13 05:22:19.532036 systemd-logind[1485]: Removed session 15.
Dec 13 05:22:19.676804 systemd[1]: Started sshd@26-10.244.19.70:22-147.75.109.163:32768.service - OpenSSH per-connection server daemon (147.75.109.163:32768).
Dec 13 05:22:20.574566 sshd[4237]: Accepted publickey for core from 147.75.109.163 port 32768 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:22:20.577115 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:22:20.584992 systemd-logind[1485]: New session 16 of user core.
Dec 13 05:22:20.591408 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 05:22:21.388428 sshd[4237]: pam_unix(sshd:session): session closed for user core
Dec 13 05:22:21.393808 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit.
Dec 13 05:22:21.395171 systemd[1]: sshd@26-10.244.19.70:22-147.75.109.163:32768.service: Deactivated successfully.
Dec 13 05:22:21.398037 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 05:22:21.400569 systemd-logind[1485]: Removed session 16.
Dec 13 05:22:21.547656 systemd[1]: Started sshd@27-10.244.19.70:22-147.75.109.163:32784.service - OpenSSH per-connection server daemon (147.75.109.163:32784).
Dec 13 05:22:22.475431 sshd[4248]: Accepted publickey for core from 147.75.109.163 port 32784 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:22:22.478018 sshd[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:22:22.502822 systemd-logind[1485]: New session 17 of user core.
Dec 13 05:22:22.512578 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 05:22:23.226659 sshd[4248]: pam_unix(sshd:session): session closed for user core
Dec 13 05:22:23.233503 systemd[1]: sshd@27-10.244.19.70:22-147.75.109.163:32784.service: Deactivated successfully.
Dec 13 05:22:23.236631 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 05:22:23.237770 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit.
Dec 13 05:22:23.239972 systemd-logind[1485]: Removed session 17.
Dec 13 05:22:28.384560 systemd[1]: Started sshd@28-10.244.19.70:22-147.75.109.163:38490.service - OpenSSH per-connection server daemon (147.75.109.163:38490).
Dec 13 05:22:29.278569 sshd[4261]: Accepted publickey for core from 147.75.109.163 port 38490 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:22:29.280925 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:22:29.288702 systemd-logind[1485]: New session 18 of user core.
Dec 13 05:22:29.299378 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 05:22:29.986247 sshd[4261]: pam_unix(sshd:session): session closed for user core
Dec 13 05:22:29.992016 systemd[1]: sshd@28-10.244.19.70:22-147.75.109.163:38490.service: Deactivated successfully.
Dec 13 05:22:29.994597 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 05:22:29.995634 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit.
Dec 13 05:22:29.998098 systemd-logind[1485]: Removed session 18.
Dec 13 05:22:35.141679 systemd[1]: Started sshd@29-10.244.19.70:22-147.75.109.163:38506.service - OpenSSH per-connection server daemon (147.75.109.163:38506).
Dec 13 05:22:36.051151 sshd[4275]: Accepted publickey for core from 147.75.109.163 port 38506 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:22:36.053511 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:22:36.068303 systemd-logind[1485]: New session 19 of user core.
Dec 13 05:22:36.079599 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 05:22:36.762026 sshd[4275]: pam_unix(sshd:session): session closed for user core
Dec 13 05:22:36.767840 systemd[1]: sshd@29-10.244.19.70:22-147.75.109.163:38506.service: Deactivated successfully.
Dec 13 05:22:36.770517 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 05:22:36.771955 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit.
Dec 13 05:22:36.774070 systemd-logind[1485]: Removed session 19.
Dec 13 05:22:36.927620 systemd[1]: Started sshd@30-10.244.19.70:22-147.75.109.163:36732.service - OpenSSH per-connection server daemon (147.75.109.163:36732).
Dec 13 05:22:37.856091 sshd[4288]: Accepted publickey for core from 147.75.109.163 port 36732 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:22:37.858533 sshd[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:22:37.867942 systemd-logind[1485]: New session 20 of user core.
Dec 13 05:22:37.869415 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 05:22:38.867891 sshd[4288]: pam_unix(sshd:session): session closed for user core
Dec 13 05:22:38.873856 systemd[1]: sshd@30-10.244.19.70:22-147.75.109.163:36732.service: Deactivated successfully.
Dec 13 05:22:38.876859 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 05:22:38.880040 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit.
Dec 13 05:22:38.882871 systemd-logind[1485]: Removed session 20.
Dec 13 05:22:39.029647 systemd[1]: Started sshd@31-10.244.19.70:22-147.75.109.163:36744.service - OpenSSH per-connection server daemon (147.75.109.163:36744).
Dec 13 05:22:39.929263 sshd[4299]: Accepted publickey for core from 147.75.109.163 port 36744 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:22:39.932255 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:22:39.940182 systemd-logind[1485]: New session 21 of user core.
Dec 13 05:22:39.946326 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 05:22:42.902446 sshd[4299]: pam_unix(sshd:session): session closed for user core
Dec 13 05:22:42.914722 systemd[1]: sshd@31-10.244.19.70:22-147.75.109.163:36744.service: Deactivated successfully.
Dec 13 05:22:42.918598 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 05:22:42.921056 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit.
Dec 13 05:22:42.923407 systemd-logind[1485]: Removed session 21.
Dec 13 05:22:43.062621 systemd[1]: Started sshd@32-10.244.19.70:22-147.75.109.163:36754.service - OpenSSH per-connection server daemon (147.75.109.163:36754).
Dec 13 05:22:43.991782 sshd[4317]: Accepted publickey for core from 147.75.109.163 port 36754 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:22:43.994021 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:22:44.000766 systemd-logind[1485]: New session 22 of user core.
Dec 13 05:22:44.005344 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 05:22:45.032481 sshd[4317]: pam_unix(sshd:session): session closed for user core
Dec 13 05:22:45.039898 systemd[1]: sshd@32-10.244.19.70:22-147.75.109.163:36754.service: Deactivated successfully.
Dec 13 05:22:45.043760 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 05:22:45.045833 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit.
Dec 13 05:22:45.048701 systemd-logind[1485]: Removed session 22.
Dec 13 05:22:45.217483 systemd[1]: Started sshd@33-10.244.19.70:22-147.75.109.163:36762.service - OpenSSH per-connection server daemon (147.75.109.163:36762).
Dec 13 05:22:46.112842 sshd[4332]: Accepted publickey for core from 147.75.109.163 port 36762 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:22:46.115208 sshd[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:22:46.122525 systemd-logind[1485]: New session 23 of user core.
Dec 13 05:22:46.134429 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 05:22:46.830600 sshd[4332]: pam_unix(sshd:session): session closed for user core
Dec 13 05:22:46.835000 systemd[1]: sshd@33-10.244.19.70:22-147.75.109.163:36762.service: Deactivated successfully.
Dec 13 05:22:46.838777 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 05:22:46.840874 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit.
Dec 13 05:22:46.842583 systemd-logind[1485]: Removed session 23.
Dec 13 05:22:51.989501 systemd[1]: Started sshd@34-10.244.19.70:22-147.75.109.163:55944.service - OpenSSH per-connection server daemon (147.75.109.163:55944).
Dec 13 05:22:52.886277 sshd[4350]: Accepted publickey for core from 147.75.109.163 port 55944 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:22:52.888449 sshd[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:22:52.897233 systemd-logind[1485]: New session 24 of user core.
Dec 13 05:22:52.902406 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 05:22:53.583809 sshd[4350]: pam_unix(sshd:session): session closed for user core
Dec 13 05:22:53.589197 systemd[1]: sshd@34-10.244.19.70:22-147.75.109.163:55944.service: Deactivated successfully.
Dec 13 05:22:53.589294 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit.
Dec 13 05:22:53.592431 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 05:22:53.594187 systemd-logind[1485]: Removed session 24.
Dec 13 05:22:58.741443 systemd[1]: Started sshd@35-10.244.19.70:22-147.75.109.163:60922.service - OpenSSH per-connection server daemon (147.75.109.163:60922).
Dec 13 05:22:59.633232 sshd[4363]: Accepted publickey for core from 147.75.109.163 port 60922 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:22:59.632923 sshd[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:22:59.643382 systemd-logind[1485]: New session 25 of user core.
Dec 13 05:22:59.650419 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 05:23:00.331916 sshd[4363]: pam_unix(sshd:session): session closed for user core
Dec 13 05:23:00.336857 systemd[1]: sshd@35-10.244.19.70:22-147.75.109.163:60922.service: Deactivated successfully.
Dec 13 05:23:00.340094 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 05:23:00.341388 systemd-logind[1485]: Session 25 logged out. Waiting for processes to exit.
Dec 13 05:23:00.343249 systemd-logind[1485]: Removed session 25.
Dec 13 05:23:05.493613 systemd[1]: Started sshd@36-10.244.19.70:22-147.75.109.163:60928.service - OpenSSH per-connection server daemon (147.75.109.163:60928).
Dec 13 05:23:06.379607 sshd[4377]: Accepted publickey for core from 147.75.109.163 port 60928 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:23:06.382179 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:23:06.390063 systemd-logind[1485]: New session 26 of user core.
Dec 13 05:23:06.398459 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 05:23:07.089829 sshd[4377]: pam_unix(sshd:session): session closed for user core
Dec 13 05:23:07.095644 systemd-logind[1485]: Session 26 logged out. Waiting for processes to exit.
Dec 13 05:23:07.097217 systemd[1]: sshd@36-10.244.19.70:22-147.75.109.163:60928.service: Deactivated successfully.
Dec 13 05:23:07.100121 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 05:23:07.101755 systemd-logind[1485]: Removed session 26.
Dec 13 05:23:07.249900 systemd[1]: Started sshd@37-10.244.19.70:22-147.75.109.163:46758.service - OpenSSH per-connection server daemon (147.75.109.163:46758).
Dec 13 05:23:08.147238 sshd[4389]: Accepted publickey for core from 147.75.109.163 port 46758 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:23:08.150717 sshd[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:23:08.159329 systemd-logind[1485]: New session 27 of user core.
Dec 13 05:23:08.168005 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 05:23:10.259134 containerd[1505]: time="2024-12-13T05:23:10.258991408Z" level=info msg="StopContainer for \"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15\" with timeout 30 (s)"
Dec 13 05:23:10.268425 containerd[1505]: time="2024-12-13T05:23:10.267499020Z" level=info msg="Stop container \"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15\" with signal terminated"
Dec 13 05:23:10.365598 systemd[1]: cri-containerd-208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15.scope: Deactivated successfully.
Dec 13 05:23:10.410329 containerd[1505]: time="2024-12-13T05:23:10.409816393Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 05:23:10.417847 containerd[1505]: time="2024-12-13T05:23:10.417676064Z" level=info msg="StopContainer for \"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523\" with timeout 2 (s)"
Dec 13 05:23:10.418407 containerd[1505]: time="2024-12-13T05:23:10.418360424Z" level=info msg="Stop container \"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523\" with signal terminated"
Dec 13 05:23:10.432849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15-rootfs.mount: Deactivated successfully.
Dec 13 05:23:10.442336 systemd-networkd[1430]: lxc_health: Link DOWN
Dec 13 05:23:10.442414 systemd-networkd[1430]: lxc_health: Lost carrier
Dec 13 05:23:10.447734 containerd[1505]: time="2024-12-13T05:23:10.447400352Z" level=info msg="shim disconnected" id=208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15 namespace=k8s.io
Dec 13 05:23:10.447734 containerd[1505]: time="2024-12-13T05:23:10.447517104Z" level=warning msg="cleaning up after shim disconnected" id=208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15 namespace=k8s.io
Dec 13 05:23:10.447734 containerd[1505]: time="2024-12-13T05:23:10.447546388Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 05:23:10.466868 systemd[1]: cri-containerd-313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523.scope: Deactivated successfully.
Dec 13 05:23:10.467279 systemd[1]: cri-containerd-313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523.scope: Consumed 10.694s CPU time.
Dec 13 05:23:10.501199 containerd[1505]: time="2024-12-13T05:23:10.501129330Z" level=info msg="StopContainer for \"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15\" returns successfully"
Dec 13 05:23:10.503540 containerd[1505]: time="2024-12-13T05:23:10.503503165Z" level=info msg="StopPodSandbox for \"5832df70b035b9e3dd9bc6360a568ebb2ec3ca4fa637ee911deb5ade9085e2df\""
Dec 13 05:23:10.503629 containerd[1505]: time="2024-12-13T05:23:10.503598371Z" level=info msg="Container to stop \"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 05:23:10.506916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5832df70b035b9e3dd9bc6360a568ebb2ec3ca4fa637ee911deb5ade9085e2df-shm.mount: Deactivated successfully.
Dec 13 05:23:10.524518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523-rootfs.mount: Deactivated successfully.
Dec 13 05:23:10.525659 systemd[1]: cri-containerd-5832df70b035b9e3dd9bc6360a568ebb2ec3ca4fa637ee911deb5ade9085e2df.scope: Deactivated successfully.
Dec 13 05:23:10.540406 containerd[1505]: time="2024-12-13T05:23:10.540306571Z" level=info msg="shim disconnected" id=313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523 namespace=k8s.io
Dec 13 05:23:10.540825 containerd[1505]: time="2024-12-13T05:23:10.540794942Z" level=warning msg="cleaning up after shim disconnected" id=313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523 namespace=k8s.io
Dec 13 05:23:10.540956 containerd[1505]: time="2024-12-13T05:23:10.540930248Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 05:23:10.576413 containerd[1505]: time="2024-12-13T05:23:10.576334030Z" level=info msg="StopContainer for \"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523\" returns successfully"
Dec 13 05:23:10.579158 containerd[1505]: time="2024-12-13T05:23:10.579024857Z" level=info msg="StopPodSandbox for \"c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17\""
Dec 13 05:23:10.579158 containerd[1505]: time="2024-12-13T05:23:10.579079502Z" level=info msg="Container to stop \"3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 05:23:10.579158 containerd[1505]: time="2024-12-13T05:23:10.579137929Z" level=info msg="Container to stop \"71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 05:23:10.579158 containerd[1505]: time="2024-12-13T05:23:10.579160676Z" level=info msg="Container to stop \"3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 05:23:10.579524 containerd[1505]: time="2024-12-13T05:23:10.579176717Z" level=info msg="Container to stop \"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 05:23:10.579524 containerd[1505]: time="2024-12-13T05:23:10.579193971Z" level=info msg="Container to stop \"791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 05:23:10.586461 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17-shm.mount: Deactivated successfully.
Dec 13 05:23:10.600311 systemd[1]: cri-containerd-c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17.scope: Deactivated successfully.
Dec 13 05:23:10.615816 containerd[1505]: time="2024-12-13T05:23:10.615446161Z" level=info msg="shim disconnected" id=5832df70b035b9e3dd9bc6360a568ebb2ec3ca4fa637ee911deb5ade9085e2df namespace=k8s.io
Dec 13 05:23:10.615816 containerd[1505]: time="2024-12-13T05:23:10.615817572Z" level=warning msg="cleaning up after shim disconnected" id=5832df70b035b9e3dd9bc6360a568ebb2ec3ca4fa637ee911deb5ade9085e2df namespace=k8s.io
Dec 13 05:23:10.616346 containerd[1505]: time="2024-12-13T05:23:10.615845490Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 05:23:10.649321 containerd[1505]: time="2024-12-13T05:23:10.649205921Z" level=info msg="shim disconnected" id=c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17 namespace=k8s.io
Dec 13 05:23:10.650073 containerd[1505]: time="2024-12-13T05:23:10.649830569Z" level=warning msg="cleaning up after shim disconnected" id=c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17 namespace=k8s.io
Dec 13 05:23:10.650073 containerd[1505]: time="2024-12-13T05:23:10.649859106Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 05:23:10.668399 containerd[1505]: time="2024-12-13T05:23:10.666759201Z" level=info msg="TearDown network for sandbox \"5832df70b035b9e3dd9bc6360a568ebb2ec3ca4fa637ee911deb5ade9085e2df\" successfully"
Dec 13 05:23:10.668399 containerd[1505]: time="2024-12-13T05:23:10.668215586Z" level=info msg="StopPodSandbox for \"5832df70b035b9e3dd9bc6360a568ebb2ec3ca4fa637ee911deb5ade9085e2df\" returns successfully"
Dec 13 05:23:10.680678 containerd[1505]: time="2024-12-13T05:23:10.680627295Z" level=info msg="TearDown network for sandbox \"c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17\" successfully"
Dec 13 05:23:10.680678 containerd[1505]: time="2024-12-13T05:23:10.680673159Z" level=info msg="StopPodSandbox for \"c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17\" returns successfully"
Dec 13 05:23:10.824164 kubelet[2799]: I1213 05:23:10.823858 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bea47212-2600-4db7-953e-9eb3203f49f6-cilium-config-path\") pod \"bea47212-2600-4db7-953e-9eb3203f49f6\" (UID: \"bea47212-2600-4db7-953e-9eb3203f49f6\") "
Dec 13 05:23:10.824164 kubelet[2799]: I1213 05:23:10.823938 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cilium-cgroup\") pod \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") "
Dec 13 05:23:10.824164 kubelet[2799]: I1213 05:23:10.823968 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-etc-cni-netd\") pod \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") "
Dec 13 05:23:10.824164 kubelet[2799]: I1213 05:23:10.823998 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfndz\" (UniqueName: \"kubernetes.io/projected/bea47212-2600-4db7-953e-9eb3203f49f6-kube-api-access-tfndz\") pod \"bea47212-2600-4db7-953e-9eb3203f49f6\" (UID: \"bea47212-2600-4db7-953e-9eb3203f49f6\") "
Dec 13 05:23:10.824164 kubelet[2799]: I1213 05:23:10.824044 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cilium-config-path\") pod \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") "
Dec 13 05:23:10.824164 kubelet[2799]: I1213 05:23:10.824080 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-bpf-maps\") pod \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") "
Dec 13 05:23:10.825917 kubelet[2799]: I1213 05:23:10.824678 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-host-proc-sys-net\") pod \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") "
Dec 13 05:23:10.825917 kubelet[2799]: I1213 05:23:10.824716 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-xtables-lock\") pod \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") "
Dec 13 05:23:10.825917 kubelet[2799]: I1213 05:23:10.824745 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-hubble-tls\") pod \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") "
Dec 13 05:23:10.825917 kubelet[2799]: I1213 05:23:10.824826 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cni-path\") pod \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\" (UID:
\"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " Dec 13 05:23:10.825917 kubelet[2799]: I1213 05:23:10.824903 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-host-proc-sys-kernel\") pod \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " Dec 13 05:23:10.825917 kubelet[2799]: I1213 05:23:10.824933 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-lib-modules\") pod \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " Dec 13 05:23:10.826235 kubelet[2799]: I1213 05:23:10.824993 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjbn9\" (UniqueName: \"kubernetes.io/projected/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-kube-api-access-rjbn9\") pod \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " Dec 13 05:23:10.826235 kubelet[2799]: I1213 05:23:10.825023 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-hostproc\") pod \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " Dec 13 05:23:10.826235 kubelet[2799]: I1213 05:23:10.825326 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-clustermesh-secrets\") pod \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " Dec 13 05:23:10.826235 kubelet[2799]: I1213 05:23:10.825359 2799 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cilium-run\") pod \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\" (UID: \"6276f2e1-98e8-4d0a-912c-d96a7e4a7546\") " Dec 13 05:23:10.827998 kubelet[2799]: I1213 05:23:10.825483 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6276f2e1-98e8-4d0a-912c-d96a7e4a7546" (UID: "6276f2e1-98e8-4d0a-912c-d96a7e4a7546"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 05:23:10.827998 kubelet[2799]: I1213 05:23:10.827527 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6276f2e1-98e8-4d0a-912c-d96a7e4a7546" (UID: "6276f2e1-98e8-4d0a-912c-d96a7e4a7546"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 05:23:10.827998 kubelet[2799]: I1213 05:23:10.827561 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6276f2e1-98e8-4d0a-912c-d96a7e4a7546" (UID: "6276f2e1-98e8-4d0a-912c-d96a7e4a7546"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 05:23:10.832604 kubelet[2799]: I1213 05:23:10.832548 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea47212-2600-4db7-953e-9eb3203f49f6-kube-api-access-tfndz" (OuterVolumeSpecName: "kube-api-access-tfndz") pod "bea47212-2600-4db7-953e-9eb3203f49f6" (UID: "bea47212-2600-4db7-953e-9eb3203f49f6"). InnerVolumeSpecName "kube-api-access-tfndz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 05:23:10.836770 kubelet[2799]: I1213 05:23:10.836739 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6276f2e1-98e8-4d0a-912c-d96a7e4a7546" (UID: "6276f2e1-98e8-4d0a-912c-d96a7e4a7546"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 05:23:10.836995 kubelet[2799]: I1213 05:23:10.836968 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6276f2e1-98e8-4d0a-912c-d96a7e4a7546" (UID: "6276f2e1-98e8-4d0a-912c-d96a7e4a7546"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 05:23:10.837144 kubelet[2799]: I1213 05:23:10.837115 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6276f2e1-98e8-4d0a-912c-d96a7e4a7546" (UID: "6276f2e1-98e8-4d0a-912c-d96a7e4a7546"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 05:23:10.837619 kubelet[2799]: I1213 05:23:10.837252 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6276f2e1-98e8-4d0a-912c-d96a7e4a7546" (UID: "6276f2e1-98e8-4d0a-912c-d96a7e4a7546"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 05:23:10.838636 kubelet[2799]: I1213 05:23:10.838595 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea47212-2600-4db7-953e-9eb3203f49f6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bea47212-2600-4db7-953e-9eb3203f49f6" (UID: "bea47212-2600-4db7-953e-9eb3203f49f6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 05:23:10.839845 kubelet[2799]: I1213 05:23:10.839811 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6276f2e1-98e8-4d0a-912c-d96a7e4a7546" (UID: "6276f2e1-98e8-4d0a-912c-d96a7e4a7546"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 05:23:10.839928 kubelet[2799]: I1213 05:23:10.839863 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cni-path" (OuterVolumeSpecName: "cni-path") pod "6276f2e1-98e8-4d0a-912c-d96a7e4a7546" (UID: "6276f2e1-98e8-4d0a-912c-d96a7e4a7546"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 05:23:10.839928 kubelet[2799]: I1213 05:23:10.839897 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6276f2e1-98e8-4d0a-912c-d96a7e4a7546" (UID: "6276f2e1-98e8-4d0a-912c-d96a7e4a7546"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 05:23:10.840025 kubelet[2799]: I1213 05:23:10.839935 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-hostproc" (OuterVolumeSpecName: "hostproc") pod "6276f2e1-98e8-4d0a-912c-d96a7e4a7546" (UID: "6276f2e1-98e8-4d0a-912c-d96a7e4a7546"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 05:23:10.841609 kubelet[2799]: I1213 05:23:10.841466 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6276f2e1-98e8-4d0a-912c-d96a7e4a7546" (UID: "6276f2e1-98e8-4d0a-912c-d96a7e4a7546"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 05:23:10.843763 kubelet[2799]: I1213 05:23:10.843509 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-kube-api-access-rjbn9" (OuterVolumeSpecName: "kube-api-access-rjbn9") pod "6276f2e1-98e8-4d0a-912c-d96a7e4a7546" (UID: "6276f2e1-98e8-4d0a-912c-d96a7e4a7546"). InnerVolumeSpecName "kube-api-access-rjbn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 05:23:10.844898 kubelet[2799]: I1213 05:23:10.844841 2799 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6276f2e1-98e8-4d0a-912c-d96a7e4a7546" (UID: "6276f2e1-98e8-4d0a-912c-d96a7e4a7546"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 05:23:10.927513 kubelet[2799]: I1213 05:23:10.927443 2799 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-xtables-lock\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928099 kubelet[2799]: I1213 05:23:10.927822 2799 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-hubble-tls\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928099 kubelet[2799]: I1213 05:23:10.927850 2799 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cni-path\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928099 kubelet[2799]: I1213 05:23:10.927873 2799 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-host-proc-sys-kernel\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928099 kubelet[2799]: I1213 05:23:10.927892 2799 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-lib-modules\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928099 kubelet[2799]: I1213 05:23:10.927909 2799 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-hostproc\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928099 kubelet[2799]: I1213 05:23:10.927927 2799 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-clustermesh-secrets\") on 
node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928099 kubelet[2799]: I1213 05:23:10.927942 2799 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cilium-run\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928099 kubelet[2799]: I1213 05:23:10.927957 2799 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rjbn9\" (UniqueName: \"kubernetes.io/projected/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-kube-api-access-rjbn9\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928709 kubelet[2799]: I1213 05:23:10.927972 2799 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cilium-cgroup\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928709 kubelet[2799]: I1213 05:23:10.927990 2799 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-etc-cni-netd\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928709 kubelet[2799]: I1213 05:23:10.928006 2799 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bea47212-2600-4db7-953e-9eb3203f49f6-cilium-config-path\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928709 kubelet[2799]: I1213 05:23:10.928021 2799 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-cilium-config-path\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928709 kubelet[2799]: I1213 05:23:10.928039 2799 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-bpf-maps\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928709 kubelet[2799]: I1213 05:23:10.928056 2799 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6276f2e1-98e8-4d0a-912c-d96a7e4a7546-host-proc-sys-net\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:10.928709 kubelet[2799]: I1213 05:23:10.928070 2799 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tfndz\" (UniqueName: \"kubernetes.io/projected/bea47212-2600-4db7-953e-9eb3203f49f6-kube-api-access-tfndz\") on node \"srv-ch81y.gb1.brightbox.com\" DevicePath \"\"" Dec 13 05:23:11.045590 kubelet[2799]: I1213 05:23:11.045175 2799 scope.go:117] "RemoveContainer" containerID="208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15" Dec 13 05:23:11.046068 systemd[1]: Removed slice kubepods-besteffort-podbea47212_2600_4db7_953e_9eb3203f49f6.slice - libcontainer container kubepods-besteffort-podbea47212_2600_4db7_953e_9eb3203f49f6.slice. Dec 13 05:23:11.058567 containerd[1505]: time="2024-12-13T05:23:11.058331666Z" level=info msg="RemoveContainer for \"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15\"" Dec 13 05:23:11.071810 containerd[1505]: time="2024-12-13T05:23:11.071263266Z" level=info msg="RemoveContainer for \"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15\" returns successfully" Dec 13 05:23:11.072081 systemd[1]: Removed slice kubepods-burstable-pod6276f2e1_98e8_4d0a_912c_d96a7e4a7546.slice - libcontainer container kubepods-burstable-pod6276f2e1_98e8_4d0a_912c_d96a7e4a7546.slice. Dec 13 05:23:11.072247 systemd[1]: kubepods-burstable-pod6276f2e1_98e8_4d0a_912c_d96a7e4a7546.slice: Consumed 10.830s CPU time. 
Dec 13 05:23:11.075356 kubelet[2799]: I1213 05:23:11.072631 2799 scope.go:117] "RemoveContainer" containerID="208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15" Dec 13 05:23:11.095137 containerd[1505]: time="2024-12-13T05:23:11.075261850Z" level=error msg="ContainerStatus for \"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15\": not found" Dec 13 05:23:11.103207 kubelet[2799]: E1213 05:23:11.102921 2799 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15\": not found" containerID="208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15" Dec 13 05:23:11.129563 kubelet[2799]: I1213 05:23:11.103413 2799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15"} err="failed to get container status \"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15\": rpc error: code = NotFound desc = an error occurred when try to find container \"208892098bee166a35438a6b07ff1a0b28fc76a0390e51d0c5fbcdfdfbf45f15\": not found" Dec 13 05:23:11.129563 kubelet[2799]: I1213 05:23:11.129351 2799 scope.go:117] "RemoveContainer" containerID="313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523" Dec 13 05:23:11.131769 containerd[1505]: time="2024-12-13T05:23:11.131720128Z" level=info msg="RemoveContainer for \"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523\"" Dec 13 05:23:11.137707 containerd[1505]: time="2024-12-13T05:23:11.135866459Z" level=info msg="RemoveContainer for \"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523\" returns successfully" Dec 13 
05:23:11.137921 kubelet[2799]: I1213 05:23:11.136243 2799 scope.go:117] "RemoveContainer" containerID="3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931" Dec 13 05:23:11.140304 containerd[1505]: time="2024-12-13T05:23:11.139839013Z" level=info msg="RemoveContainer for \"3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931\"" Dec 13 05:23:11.152055 containerd[1505]: time="2024-12-13T05:23:11.151883571Z" level=info msg="RemoveContainer for \"3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931\" returns successfully" Dec 13 05:23:11.152469 kubelet[2799]: I1213 05:23:11.152306 2799 scope.go:117] "RemoveContainer" containerID="3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581" Dec 13 05:23:11.154134 containerd[1505]: time="2024-12-13T05:23:11.154069783Z" level=info msg="RemoveContainer for \"3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581\"" Dec 13 05:23:11.157344 containerd[1505]: time="2024-12-13T05:23:11.157299254Z" level=info msg="RemoveContainer for \"3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581\" returns successfully" Dec 13 05:23:11.158245 kubelet[2799]: I1213 05:23:11.157744 2799 scope.go:117] "RemoveContainer" containerID="71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad" Dec 13 05:23:11.159460 containerd[1505]: time="2024-12-13T05:23:11.159378154Z" level=info msg="RemoveContainer for \"71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad\"" Dec 13 05:23:11.163097 containerd[1505]: time="2024-12-13T05:23:11.163021039Z" level=info msg="RemoveContainer for \"71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad\" returns successfully" Dec 13 05:23:11.164594 kubelet[2799]: I1213 05:23:11.164396 2799 scope.go:117] "RemoveContainer" containerID="791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547" Dec 13 05:23:11.168641 containerd[1505]: time="2024-12-13T05:23:11.168588585Z" level=info msg="RemoveContainer for 
\"791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547\"" Dec 13 05:23:11.173560 containerd[1505]: time="2024-12-13T05:23:11.172266476Z" level=info msg="RemoveContainer for \"791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547\" returns successfully" Dec 13 05:23:11.173758 kubelet[2799]: I1213 05:23:11.173350 2799 scope.go:117] "RemoveContainer" containerID="313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523" Dec 13 05:23:11.174970 containerd[1505]: time="2024-12-13T05:23:11.174835058Z" level=error msg="ContainerStatus for \"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523\": not found" Dec 13 05:23:11.175568 kubelet[2799]: E1213 05:23:11.175387 2799 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523\": not found" containerID="313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523" Dec 13 05:23:11.175568 kubelet[2799]: I1213 05:23:11.175440 2799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523"} err="failed to get container status \"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523\": rpc error: code = NotFound desc = an error occurred when try to find container \"313d5a42a4f39ca65d10c277550439a2cdd8c98200a61ea77ce52af2d9efe523\": not found" Dec 13 05:23:11.175568 kubelet[2799]: I1213 05:23:11.175474 2799 scope.go:117] "RemoveContainer" containerID="3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931" Dec 13 05:23:11.175933 containerd[1505]: time="2024-12-13T05:23:11.175836817Z" level=error msg="ContainerStatus for 
\"3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931\": not found" Dec 13 05:23:11.176183 kubelet[2799]: E1213 05:23:11.176126 2799 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931\": not found" containerID="3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931" Dec 13 05:23:11.176646 kubelet[2799]: I1213 05:23:11.176169 2799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931"} err="failed to get container status \"3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931\": rpc error: code = NotFound desc = an error occurred when try to find container \"3811c083efd5028e2b52745662461609e50dd1d8b5dd3be643c145e66cfc1931\": not found" Dec 13 05:23:11.176646 kubelet[2799]: I1213 05:23:11.176204 2799 scope.go:117] "RemoveContainer" containerID="3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581" Dec 13 05:23:11.176985 containerd[1505]: time="2024-12-13T05:23:11.176501162Z" level=error msg="ContainerStatus for \"3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581\": not found" Dec 13 05:23:11.177098 kubelet[2799]: E1213 05:23:11.176645 2799 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581\": not found" 
containerID="3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581" Dec 13 05:23:11.177098 kubelet[2799]: I1213 05:23:11.176673 2799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581"} err="failed to get container status \"3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581\": rpc error: code = NotFound desc = an error occurred when try to find container \"3f47d4f7240de44f6eeab8bab72a255b6d8c43540593be4a3fcd57e414fee581\": not found" Dec 13 05:23:11.177098 kubelet[2799]: I1213 05:23:11.176695 2799 scope.go:117] "RemoveContainer" containerID="71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad" Dec 13 05:23:11.178900 containerd[1505]: time="2024-12-13T05:23:11.177363923Z" level=error msg="ContainerStatus for \"71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad\": not found" Dec 13 05:23:11.179224 kubelet[2799]: E1213 05:23:11.178960 2799 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad\": not found" containerID="71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad" Dec 13 05:23:11.179458 kubelet[2799]: I1213 05:23:11.179051 2799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad"} err="failed to get container status \"71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad\": rpc error: code = NotFound desc = an error occurred when try to find container \"71e48334f108338e613d6464ca5f280a137b9d2e79b53d1fa485b5a30b5adfad\": not found" Dec 13 
05:23:11.179458 kubelet[2799]: I1213 05:23:11.179286 2799 scope.go:117] "RemoveContainer" containerID="791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547" Dec 13 05:23:11.179946 kubelet[2799]: E1213 05:23:11.179907 2799 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547\": not found" containerID="791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547" Dec 13 05:23:11.180005 containerd[1505]: time="2024-12-13T05:23:11.179729853Z" level=error msg="ContainerStatus for \"791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547\": not found" Dec 13 05:23:11.180057 kubelet[2799]: I1213 05:23:11.179950 2799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547"} err="failed to get container status \"791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547\": rpc error: code = NotFound desc = an error occurred when try to find container \"791484fb33ed25b6f9cec649055613e5fe32a8d3b5b8ba6c1c131be919187547\": not found" Dec 13 05:23:11.359907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c161b974f9dadfd81bf9d8b2149787ffcf80ee761b33629a95ab439fb8b82d17-rootfs.mount: Deactivated successfully. Dec 13 05:23:11.360408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5832df70b035b9e3dd9bc6360a568ebb2ec3ca4fa637ee911deb5ade9085e2df-rootfs.mount: Deactivated successfully. Dec 13 05:23:11.360949 systemd[1]: var-lib-kubelet-pods-bea47212\x2d2600\x2d4db7\x2d953e\x2d9eb3203f49f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtfndz.mount: Deactivated successfully. 
Dec 13 05:23:11.361199 systemd[1]: var-lib-kubelet-pods-6276f2e1\x2d98e8\x2d4d0a\x2d912c\x2dd96a7e4a7546-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drjbn9.mount: Deactivated successfully. Dec 13 05:23:11.361311 systemd[1]: var-lib-kubelet-pods-6276f2e1\x2d98e8\x2d4d0a\x2d912c\x2dd96a7e4a7546-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 05:23:11.361427 systemd[1]: var-lib-kubelet-pods-6276f2e1\x2d98e8\x2d4d0a\x2d912c\x2dd96a7e4a7546-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 05:23:11.450159 kubelet[2799]: I1213 05:23:11.449516 2799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6276f2e1-98e8-4d0a-912c-d96a7e4a7546" path="/var/lib/kubelet/pods/6276f2e1-98e8-4d0a-912c-d96a7e4a7546/volumes" Dec 13 05:23:11.451128 kubelet[2799]: I1213 05:23:11.451085 2799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea47212-2600-4db7-953e-9eb3203f49f6" path="/var/lib/kubelet/pods/bea47212-2600-4db7-953e-9eb3203f49f6/volumes" Dec 13 05:23:12.324590 sshd[4389]: pam_unix(sshd:session): session closed for user core Dec 13 05:23:12.329733 systemd[1]: sshd@37-10.244.19.70:22-147.75.109.163:46758.service: Deactivated successfully. Dec 13 05:23:12.332992 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 05:23:12.334925 systemd-logind[1485]: Session 27 logged out. Waiting for processes to exit. Dec 13 05:23:12.336851 systemd-logind[1485]: Removed session 27. Dec 13 05:23:12.485501 systemd[1]: Started sshd@38-10.244.19.70:22-147.75.109.163:46774.service - OpenSSH per-connection server daemon (147.75.109.163:46774). 
Dec 13 05:23:12.626394 kubelet[2799]: E1213 05:23:12.626171 2799 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 05:23:13.381161 sshd[4547]: Accepted publickey for core from 147.75.109.163 port 46774 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:23:13.382689 sshd[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:23:13.389271 systemd-logind[1485]: New session 28 of user core.
Dec 13 05:23:13.396321 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 05:23:14.619372 kubelet[2799]: I1213 05:23:14.619299 2799 topology_manager.go:215] "Topology Admit Handler" podUID="676d1ede-c82f-47c0-8aa1-50bd8cbab06e" podNamespace="kube-system" podName="cilium-ds29r"
Dec 13 05:23:14.620059 kubelet[2799]: E1213 05:23:14.619443 2799 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bea47212-2600-4db7-953e-9eb3203f49f6" containerName="cilium-operator"
Dec 13 05:23:14.620059 kubelet[2799]: E1213 05:23:14.619465 2799 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6276f2e1-98e8-4d0a-912c-d96a7e4a7546" containerName="clean-cilium-state"
Dec 13 05:23:14.620059 kubelet[2799]: E1213 05:23:14.619476 2799 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6276f2e1-98e8-4d0a-912c-d96a7e4a7546" containerName="cilium-agent"
Dec 13 05:23:14.620059 kubelet[2799]: E1213 05:23:14.619487 2799 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6276f2e1-98e8-4d0a-912c-d96a7e4a7546" containerName="mount-cgroup"
Dec 13 05:23:14.620059 kubelet[2799]: E1213 05:23:14.619497 2799 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6276f2e1-98e8-4d0a-912c-d96a7e4a7546" containerName="apply-sysctl-overwrites"
Dec 13 05:23:14.620059 kubelet[2799]: E1213 05:23:14.619508 2799 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6276f2e1-98e8-4d0a-912c-d96a7e4a7546" containerName="mount-bpf-fs"
Dec 13 05:23:14.627790 kubelet[2799]: I1213 05:23:14.619580 2799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6276f2e1-98e8-4d0a-912c-d96a7e4a7546" containerName="cilium-agent"
Dec 13 05:23:14.628024 kubelet[2799]: I1213 05:23:14.627816 2799 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea47212-2600-4db7-953e-9eb3203f49f6" containerName="cilium-operator"
Dec 13 05:23:14.646771 systemd[1]: Created slice kubepods-burstable-pod676d1ede_c82f_47c0_8aa1_50bd8cbab06e.slice - libcontainer container kubepods-burstable-pod676d1ede_c82f_47c0_8aa1_50bd8cbab06e.slice.
Dec 13 05:23:14.740579 sshd[4547]: pam_unix(sshd:session): session closed for user core
Dec 13 05:23:14.745020 systemd-logind[1485]: Session 28 logged out. Waiting for processes to exit.
Dec 13 05:23:14.745620 systemd[1]: sshd@38-10.244.19.70:22-147.75.109.163:46774.service: Deactivated successfully.
Dec 13 05:23:14.748273 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 05:23:14.751322 systemd-logind[1485]: Removed session 28.
Dec 13 05:23:14.759058 kubelet[2799]: I1213 05:23:14.758372 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-etc-cni-netd\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.759058 kubelet[2799]: I1213 05:23:14.758437 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-xtables-lock\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.759058 kubelet[2799]: I1213 05:23:14.758470 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-cilium-config-path\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.759058 kubelet[2799]: I1213 05:23:14.758499 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-bpf-maps\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.759058 kubelet[2799]: I1213 05:23:14.758539 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-clustermesh-secrets\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.759058 kubelet[2799]: I1213 05:23:14.758569 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-host-proc-sys-net\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.759461 kubelet[2799]: I1213 05:23:14.758596 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-cilium-run\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.759461 kubelet[2799]: I1213 05:23:14.758632 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-hostproc\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.759461 kubelet[2799]: I1213 05:23:14.758666 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-lib-modules\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.759461 kubelet[2799]: I1213 05:23:14.758695 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-cilium-ipsec-secrets\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.759461 kubelet[2799]: I1213 05:23:14.758723 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-host-proc-sys-kernel\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.759461 kubelet[2799]: I1213 05:23:14.758777 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-hubble-tls\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.759725 kubelet[2799]: I1213 05:23:14.758809 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-cilium-cgroup\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.759725 kubelet[2799]: I1213 05:23:14.758835 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-cni-path\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.759725 kubelet[2799]: I1213 05:23:14.758863 2799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfc74\" (UniqueName: \"kubernetes.io/projected/676d1ede-c82f-47c0-8aa1-50bd8cbab06e-kube-api-access-dfc74\") pod \"cilium-ds29r\" (UID: \"676d1ede-c82f-47c0-8aa1-50bd8cbab06e\") " pod="kube-system/cilium-ds29r"
Dec 13 05:23:14.914098 systemd[1]: Started sshd@39-10.244.19.70:22-147.75.109.163:46778.service - OpenSSH per-connection server daemon (147.75.109.163:46778).
Dec 13 05:23:14.964880 containerd[1505]: time="2024-12-13T05:23:14.964750662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ds29r,Uid:676d1ede-c82f-47c0-8aa1-50bd8cbab06e,Namespace:kube-system,Attempt:0,}"
Dec 13 05:23:14.996895 containerd[1505]: time="2024-12-13T05:23:14.996564143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 05:23:14.996895 containerd[1505]: time="2024-12-13T05:23:14.996664260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 05:23:14.996895 containerd[1505]: time="2024-12-13T05:23:14.996704970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:23:14.996895 containerd[1505]: time="2024-12-13T05:23:14.996822903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 05:23:15.026393 systemd[1]: Started cri-containerd-1f0b9f6493a8b91b63c9637d85a37e3aa317a776f5e3938b11c856bb7c042596.scope - libcontainer container 1f0b9f6493a8b91b63c9637d85a37e3aa317a776f5e3938b11c856bb7c042596.
Dec 13 05:23:15.065336 containerd[1505]: time="2024-12-13T05:23:15.065261722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ds29r,Uid:676d1ede-c82f-47c0-8aa1-50bd8cbab06e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f0b9f6493a8b91b63c9637d85a37e3aa317a776f5e3938b11c856bb7c042596\""
Dec 13 05:23:15.073911 containerd[1505]: time="2024-12-13T05:23:15.073593816Z" level=info msg="CreateContainer within sandbox \"1f0b9f6493a8b91b63c9637d85a37e3aa317a776f5e3938b11c856bb7c042596\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 05:23:15.087563 containerd[1505]: time="2024-12-13T05:23:15.087487619Z" level=info msg="CreateContainer within sandbox \"1f0b9f6493a8b91b63c9637d85a37e3aa317a776f5e3938b11c856bb7c042596\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c6f7983062df74c78ed53237e62770ce175b53bd5b2036791d7480b36a1c34b6\""
Dec 13 05:23:15.089240 containerd[1505]: time="2024-12-13T05:23:15.088501977Z" level=info msg="StartContainer for \"c6f7983062df74c78ed53237e62770ce175b53bd5b2036791d7480b36a1c34b6\""
Dec 13 05:23:15.128319 systemd[1]: Started cri-containerd-c6f7983062df74c78ed53237e62770ce175b53bd5b2036791d7480b36a1c34b6.scope - libcontainer container c6f7983062df74c78ed53237e62770ce175b53bd5b2036791d7480b36a1c34b6.
Dec 13 05:23:15.172676 containerd[1505]: time="2024-12-13T05:23:15.172505776Z" level=info msg="StartContainer for \"c6f7983062df74c78ed53237e62770ce175b53bd5b2036791d7480b36a1c34b6\" returns successfully"
Dec 13 05:23:15.188083 systemd[1]: cri-containerd-c6f7983062df74c78ed53237e62770ce175b53bd5b2036791d7480b36a1c34b6.scope: Deactivated successfully.
Dec 13 05:23:15.234827 containerd[1505]: time="2024-12-13T05:23:15.234248955Z" level=info msg="shim disconnected" id=c6f7983062df74c78ed53237e62770ce175b53bd5b2036791d7480b36a1c34b6 namespace=k8s.io
Dec 13 05:23:15.234827 containerd[1505]: time="2024-12-13T05:23:15.234349640Z" level=warning msg="cleaning up after shim disconnected" id=c6f7983062df74c78ed53237e62770ce175b53bd5b2036791d7480b36a1c34b6 namespace=k8s.io
Dec 13 05:23:15.234827 containerd[1505]: time="2024-12-13T05:23:15.234366468Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 05:23:15.806401 sshd[4562]: Accepted publickey for core from 147.75.109.163 port 46778 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:23:15.809192 sshd[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:23:15.815399 systemd-logind[1485]: New session 29 of user core.
Dec 13 05:23:15.831402 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 05:23:16.088094 containerd[1505]: time="2024-12-13T05:23:16.087742028Z" level=info msg="CreateContainer within sandbox \"1f0b9f6493a8b91b63c9637d85a37e3aa317a776f5e3938b11c856bb7c042596\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 05:23:16.103513 containerd[1505]: time="2024-12-13T05:23:16.103458925Z" level=info msg="CreateContainer within sandbox \"1f0b9f6493a8b91b63c9637d85a37e3aa317a776f5e3938b11c856bb7c042596\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4e86aa1d8477ea833a37bcb2274c26c984bf6bf7778f3b3763add44ee346ed06\""
Dec 13 05:23:16.107075 containerd[1505]: time="2024-12-13T05:23:16.105834306Z" level=info msg="StartContainer for \"4e86aa1d8477ea833a37bcb2274c26c984bf6bf7778f3b3763add44ee346ed06\""
Dec 13 05:23:16.163869 systemd[1]: Started cri-containerd-4e86aa1d8477ea833a37bcb2274c26c984bf6bf7778f3b3763add44ee346ed06.scope - libcontainer container 4e86aa1d8477ea833a37bcb2274c26c984bf6bf7778f3b3763add44ee346ed06.
Dec 13 05:23:16.205657 containerd[1505]: time="2024-12-13T05:23:16.205543798Z" level=info msg="StartContainer for \"4e86aa1d8477ea833a37bcb2274c26c984bf6bf7778f3b3763add44ee346ed06\" returns successfully"
Dec 13 05:23:16.226783 systemd[1]: cri-containerd-4e86aa1d8477ea833a37bcb2274c26c984bf6bf7778f3b3763add44ee346ed06.scope: Deactivated successfully.
Dec 13 05:23:16.267713 containerd[1505]: time="2024-12-13T05:23:16.267485357Z" level=info msg="shim disconnected" id=4e86aa1d8477ea833a37bcb2274c26c984bf6bf7778f3b3763add44ee346ed06 namespace=k8s.io
Dec 13 05:23:16.267713 containerd[1505]: time="2024-12-13T05:23:16.267648600Z" level=warning msg="cleaning up after shim disconnected" id=4e86aa1d8477ea833a37bcb2274c26c984bf6bf7778f3b3763add44ee346ed06 namespace=k8s.io
Dec 13 05:23:16.267713 containerd[1505]: time="2024-12-13T05:23:16.267673264Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 05:23:16.427015 sshd[4562]: pam_unix(sshd:session): session closed for user core
Dec 13 05:23:16.433380 systemd[1]: sshd@39-10.244.19.70:22-147.75.109.163:46778.service: Deactivated successfully.
Dec 13 05:23:16.435831 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 05:23:16.436900 systemd-logind[1485]: Session 29 logged out. Waiting for processes to exit.
Dec 13 05:23:16.438569 systemd-logind[1485]: Removed session 29.
Dec 13 05:23:16.588617 systemd[1]: Started sshd@40-10.244.19.70:22-147.75.109.163:57898.service - OpenSSH per-connection server daemon (147.75.109.163:57898).
Dec 13 05:23:16.871322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e86aa1d8477ea833a37bcb2274c26c984bf6bf7778f3b3763add44ee346ed06-rootfs.mount: Deactivated successfully.
Dec 13 05:23:17.090647 containerd[1505]: time="2024-12-13T05:23:17.090382123Z" level=info msg="CreateContainer within sandbox \"1f0b9f6493a8b91b63c9637d85a37e3aa317a776f5e3938b11c856bb7c042596\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 05:23:17.114485 containerd[1505]: time="2024-12-13T05:23:17.112231386Z" level=info msg="CreateContainer within sandbox \"1f0b9f6493a8b91b63c9637d85a37e3aa317a776f5e3938b11c856bb7c042596\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"de784b1e726afd866cbb4535d342d7b18400ceb2d6e41ceefd2a469e2ab74a47\""
Dec 13 05:23:17.116528 containerd[1505]: time="2024-12-13T05:23:17.116479820Z" level=info msg="StartContainer for \"de784b1e726afd866cbb4535d342d7b18400ceb2d6e41ceefd2a469e2ab74a47\""
Dec 13 05:23:17.175905 systemd[1]: run-containerd-runc-k8s.io-de784b1e726afd866cbb4535d342d7b18400ceb2d6e41ceefd2a469e2ab74a47-runc.TVhmhA.mount: Deactivated successfully.
Dec 13 05:23:17.182990 systemd[1]: Started cri-containerd-de784b1e726afd866cbb4535d342d7b18400ceb2d6e41ceefd2a469e2ab74a47.scope - libcontainer container de784b1e726afd866cbb4535d342d7b18400ceb2d6e41ceefd2a469e2ab74a47.
Dec 13 05:23:17.232785 containerd[1505]: time="2024-12-13T05:23:17.232071495Z" level=info msg="StartContainer for \"de784b1e726afd866cbb4535d342d7b18400ceb2d6e41ceefd2a469e2ab74a47\" returns successfully"
Dec 13 05:23:17.241262 systemd[1]: cri-containerd-de784b1e726afd866cbb4535d342d7b18400ceb2d6e41ceefd2a469e2ab74a47.scope: Deactivated successfully.
Dec 13 05:23:17.279048 containerd[1505]: time="2024-12-13T05:23:17.278928011Z" level=info msg="shim disconnected" id=de784b1e726afd866cbb4535d342d7b18400ceb2d6e41ceefd2a469e2ab74a47 namespace=k8s.io
Dec 13 05:23:17.279048 containerd[1505]: time="2024-12-13T05:23:17.279035504Z" level=warning msg="cleaning up after shim disconnected" id=de784b1e726afd866cbb4535d342d7b18400ceb2d6e41ceefd2a469e2ab74a47 namespace=k8s.io
Dec 13 05:23:17.279395 containerd[1505]: time="2024-12-13T05:23:17.279053985Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 05:23:17.493445 sshd[4729]: Accepted publickey for core from 147.75.109.163 port 57898 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:23:17.495817 sshd[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:23:17.503288 systemd-logind[1485]: New session 30 of user core.
Dec 13 05:23:17.509294 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 13 05:23:17.627414 kubelet[2799]: E1213 05:23:17.627227 2799 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 05:23:17.871305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de784b1e726afd866cbb4535d342d7b18400ceb2d6e41ceefd2a469e2ab74a47-rootfs.mount: Deactivated successfully.
Dec 13 05:23:18.103518 containerd[1505]: time="2024-12-13T05:23:18.101740168Z" level=info msg="CreateContainer within sandbox \"1f0b9f6493a8b91b63c9637d85a37e3aa317a776f5e3938b11c856bb7c042596\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 05:23:18.143629 containerd[1505]: time="2024-12-13T05:23:18.142880713Z" level=info msg="CreateContainer within sandbox \"1f0b9f6493a8b91b63c9637d85a37e3aa317a776f5e3938b11c856bb7c042596\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"10d46697085350b824648ed42a41b7454ab075b0f7abba7734719928b52053b2\""
Dec 13 05:23:18.144577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3404806706.mount: Deactivated successfully.
Dec 13 05:23:18.146429 containerd[1505]: time="2024-12-13T05:23:18.146338335Z" level=info msg="StartContainer for \"10d46697085350b824648ed42a41b7454ab075b0f7abba7734719928b52053b2\""
Dec 13 05:23:18.196349 systemd[1]: Started cri-containerd-10d46697085350b824648ed42a41b7454ab075b0f7abba7734719928b52053b2.scope - libcontainer container 10d46697085350b824648ed42a41b7454ab075b0f7abba7734719928b52053b2.
Dec 13 05:23:18.232580 systemd[1]: cri-containerd-10d46697085350b824648ed42a41b7454ab075b0f7abba7734719928b52053b2.scope: Deactivated successfully.
Dec 13 05:23:18.235059 containerd[1505]: time="2024-12-13T05:23:18.234574889Z" level=info msg="StartContainer for \"10d46697085350b824648ed42a41b7454ab075b0f7abba7734719928b52053b2\" returns successfully"
Dec 13 05:23:18.268812 containerd[1505]: time="2024-12-13T05:23:18.268454501Z" level=info msg="shim disconnected" id=10d46697085350b824648ed42a41b7454ab075b0f7abba7734719928b52053b2 namespace=k8s.io
Dec 13 05:23:18.268812 containerd[1505]: time="2024-12-13T05:23:18.268570899Z" level=warning msg="cleaning up after shim disconnected" id=10d46697085350b824648ed42a41b7454ab075b0f7abba7734719928b52053b2 namespace=k8s.io
Dec 13 05:23:18.268812 containerd[1505]: time="2024-12-13T05:23:18.268588731Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 05:23:18.871598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10d46697085350b824648ed42a41b7454ab075b0f7abba7734719928b52053b2-rootfs.mount: Deactivated successfully.
Dec 13 05:23:19.114325 containerd[1505]: time="2024-12-13T05:23:19.114264288Z" level=info msg="CreateContainer within sandbox \"1f0b9f6493a8b91b63c9637d85a37e3aa317a776f5e3938b11c856bb7c042596\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 05:23:19.140186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3425941273.mount: Deactivated successfully.
Dec 13 05:23:19.148439 containerd[1505]: time="2024-12-13T05:23:19.148349628Z" level=info msg="CreateContainer within sandbox \"1f0b9f6493a8b91b63c9637d85a37e3aa317a776f5e3938b11c856bb7c042596\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9655ba5b7fc7e66f3e2429663edb1372a7d7b413d92cfaaf8a76f92ee78fe182\""
Dec 13 05:23:19.150587 containerd[1505]: time="2024-12-13T05:23:19.149581801Z" level=info msg="StartContainer for \"9655ba5b7fc7e66f3e2429663edb1372a7d7b413d92cfaaf8a76f92ee78fe182\""
Dec 13 05:23:19.201878 systemd[1]: Started cri-containerd-9655ba5b7fc7e66f3e2429663edb1372a7d7b413d92cfaaf8a76f92ee78fe182.scope - libcontainer container 9655ba5b7fc7e66f3e2429663edb1372a7d7b413d92cfaaf8a76f92ee78fe182.
Dec 13 05:23:19.261140 containerd[1505]: time="2024-12-13T05:23:19.260483177Z" level=info msg="StartContainer for \"9655ba5b7fc7e66f3e2429663edb1372a7d7b413d92cfaaf8a76f92ee78fe182\" returns successfully"
Dec 13 05:23:19.875333 systemd[1]: run-containerd-runc-k8s.io-9655ba5b7fc7e66f3e2429663edb1372a7d7b413d92cfaaf8a76f92ee78fe182-runc.04EEIv.mount: Deactivated successfully.
Dec 13 05:23:19.932014 kubelet[2799]: I1213 05:23:19.931174 2799 setters.go:580] "Node became not ready" node="srv-ch81y.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T05:23:19Z","lastTransitionTime":"2024-12-13T05:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 05:23:19.945222 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 05:23:22.817070 kubelet[2799]: E1213 05:23:22.816752 2799 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:53748->127.0.0.1:46115: write tcp 10.244.19.70:10250->10.244.19.70:55580: write: connection reset by peer
Dec 13 05:23:22.819180 kubelet[2799]: E1213 05:23:22.816753 2799 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53748->127.0.0.1:46115: write tcp 127.0.0.1:53748->127.0.0.1:46115: write: broken pipe
Dec 13 05:23:23.851060 systemd-networkd[1430]: lxc_health: Link UP
Dec 13 05:23:23.867438 systemd-networkd[1430]: lxc_health: Gained carrier
Dec 13 05:23:25.006141 kubelet[2799]: I1213 05:23:25.004975 2799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ds29r" podStartSLOduration=11.004949483 podStartE2EDuration="11.004949483s" podCreationTimestamp="2024-12-13 05:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:23:20.172314094 +0000 UTC m=+152.887505024" watchObservedRunningTime="2024-12-13 05:23:25.004949483 +0000 UTC m=+157.720140392"
Dec 13 05:23:25.092363 systemd-networkd[1430]: lxc_health: Gained IPv6LL
Dec 13 05:23:29.467691 systemd[1]: run-containerd-runc-k8s.io-9655ba5b7fc7e66f3e2429663edb1372a7d7b413d92cfaaf8a76f92ee78fe182-runc.pS7KGy.mount: Deactivated successfully.
Dec 13 05:23:29.698287 sshd[4729]: pam_unix(sshd:session): session closed for user core
Dec 13 05:23:29.703694 systemd[1]: sshd@40-10.244.19.70:22-147.75.109.163:57898.service: Deactivated successfully.
Dec 13 05:23:29.707841 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 05:23:29.710990 systemd-logind[1485]: Session 30 logged out. Waiting for processes to exit.
Dec 13 05:23:29.713360 systemd-logind[1485]: Removed session 30.