Jan 13 20:40:55.921578 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:40:55.921610 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:40:55.921621 kernel: BIOS-provided physical RAM map:
Jan 13 20:40:55.921631 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:40:55.921638 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:40:55.921645 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:40:55.921654 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 13 20:40:55.921662 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 13 20:40:55.921669 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 20:40:55.921677 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 20:40:55.921685 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:40:55.921692 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:40:55.921702 kernel: NX (Execute Disable) protection: active
Jan 13 20:40:55.921709 kernel: APIC: Static calls initialized
Jan 13 20:40:55.921719 kernel: SMBIOS 2.8 present.
Jan 13 20:40:55.921728 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 13 20:40:55.921736 kernel: Hypervisor detected: KVM
Jan 13 20:40:55.921840 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:40:55.921849 kernel: kvm-clock: using sched offset of 3927368884 cycles
Jan 13 20:40:55.921859 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:40:55.921867 kernel: tsc: Detected 2294.576 MHz processor
Jan 13 20:40:55.921877 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:40:55.921886 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:40:55.921895 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 13 20:40:55.921904 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:40:55.921913 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:40:55.921924 kernel: Using GB pages for direct mapping
Jan 13 20:40:55.921933 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:40:55.921942 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 13 20:40:55.921951 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:40:55.921959 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:40:55.921968 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:40:55.921977 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 13 20:40:55.921985 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:40:55.921994 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:40:55.922005 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:40:55.922014 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:40:55.922028 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 13 20:40:55.922037 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 13 20:40:55.922046 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 13 20:40:55.922058 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 13 20:40:55.922067 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 13 20:40:55.922079 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 13 20:40:55.922088 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 13 20:40:55.922097 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 20:40:55.922112 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 20:40:55.922121 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 13 20:40:55.922130 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 13 20:40:55.922139 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 13 20:40:55.922148 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 13 20:40:55.922159 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 13 20:40:55.922170 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 13 20:40:55.922179 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 13 20:40:55.922188 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 13 20:40:55.922196 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 13 20:40:55.922206 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 13 20:40:55.922214 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 13 20:40:55.922223 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 13 20:40:55.922232 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 13 20:40:55.922243 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 13 20:40:55.922252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 13 20:40:55.922261 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 13 20:40:55.922270 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 13 20:40:55.922280 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 13 20:40:55.922289 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 13 20:40:55.922298 kernel: Zone ranges:
Jan 13 20:40:55.922307 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:40:55.922316 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 13 20:40:55.922331 kernel: Normal empty
Jan 13 20:40:55.922343 kernel: Movable zone start for each node
Jan 13 20:40:55.922355 kernel: Early memory node ranges
Jan 13 20:40:55.922364 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:40:55.922374 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 13 20:40:55.922383 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 13 20:40:55.922392 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:40:55.922401 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:40:55.922410 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 13 20:40:55.922419 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 20:40:55.922430 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:40:55.922439 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:40:55.922448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:40:55.922467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:40:55.922476 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:40:55.922485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:40:55.922494 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:40:55.922503 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:40:55.922512 kernel: TSC deadline timer available
Jan 13 20:40:55.922524 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 13 20:40:55.922533 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:40:55.922542 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 20:40:55.922551 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:40:55.922561 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:40:55.922570 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 13 20:40:55.922579 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 13 20:40:55.922588 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 13 20:40:55.922597 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 13 20:40:55.922608 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:40:55.922617 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:40:55.922628 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:40:55.922637 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:40:55.922646 kernel: random: crng init done
Jan 13 20:40:55.922655 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:40:55.922664 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 20:40:55.922673 kernel: Fallback order for Node 0: 0
Jan 13 20:40:55.922685 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 13 20:40:55.922694 kernel: Policy zone: DMA32
Jan 13 20:40:55.922703 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:40:55.922712 kernel: software IO TLB: area num 16.
Jan 13 20:40:55.922721 kernel: Memory: 1899476K/2096616K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 196880K reserved, 0K cma-reserved)
Jan 13 20:40:55.922730 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 13 20:40:55.922739 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:40:55.922748 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:40:55.922770 kernel: Dynamic Preempt: voluntary
Jan 13 20:40:55.922781 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:40:55.922791 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:40:55.922800 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 13 20:40:55.922810 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:40:55.922819 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:40:55.922837 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:40:55.922848 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:40:55.922858 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 13 20:40:55.922867 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 13 20:40:55.922877 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:40:55.922887 kernel: Console: colour VGA+ 80x25
Jan 13 20:40:55.922896 kernel: printk: console [tty0] enabled
Jan 13 20:40:55.922908 kernel: printk: console [ttyS0] enabled
Jan 13 20:40:55.922918 kernel: ACPI: Core revision 20230628
Jan 13 20:40:55.922927 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:40:55.922937 kernel: x2apic enabled
Jan 13 20:40:55.922947 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:40:55.922958 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns
Jan 13 20:40:55.922968 kernel: Calibrating delay loop (skipped) preset value.. 4589.15 BogoMIPS (lpj=2294576)
Jan 13 20:40:55.922978 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 20:40:55.922988 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 13 20:40:55.922997 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 13 20:40:55.923007 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:40:55.923016 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 13 20:40:55.923025 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 13 20:40:55.923034 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 13 20:40:55.923044 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:40:55.923056 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 13 20:40:55.923065 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 13 20:40:55.923074 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 20:40:55.923084 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 20:40:55.923093 kernel: TAA: Mitigation: Clear CPU buffers
Jan 13 20:40:55.923102 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:40:55.923112 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 13 20:40:55.923121 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:40:55.923131 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:40:55.923140 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:40:55.923150 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 13 20:40:55.923161 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 13 20:40:55.923171 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 13 20:40:55.923180 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 13 20:40:55.923190 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:40:55.923199 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 13 20:40:55.923209 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 13 20:40:55.923218 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 13 20:40:55.923228 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 13 20:40:55.923237 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 13 20:40:55.923247 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:40:55.923256 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:40:55.923267 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:40:55.923277 kernel: landlock: Up and running.
Jan 13 20:40:55.923286 kernel: SELinux: Initializing.
Jan 13 20:40:55.923296 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:40:55.923305 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:40:55.923315 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6)
Jan 13 20:40:55.923324 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 20:40:55.923334 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 20:40:55.923344 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 20:40:55.923354 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 13 20:40:55.923366 kernel: signal: max sigframe size: 3632
Jan 13 20:40:55.923375 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:40:55.923385 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:40:55.923395 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 20:40:55.923404 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:40:55.923414 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:40:55.923423 kernel: .... node #0, CPUs: #1
Jan 13 20:40:55.923433 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 13 20:40:55.923443 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:40:55.923461 kernel: smpboot: Max logical packages: 16
Jan 13 20:40:55.923471 kernel: smpboot: Total of 2 processors activated (9178.30 BogoMIPS)
Jan 13 20:40:55.923481 kernel: devtmpfs: initialized
Jan 13 20:40:55.923491 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:40:55.923501 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:40:55.923510 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 13 20:40:55.923520 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:40:55.923530 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:40:55.923539 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:40:55.923551 kernel: audit: type=2000 audit(1736800854.532:1): state=initialized audit_enabled=0 res=1
Jan 13 20:40:55.923561 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:40:55.923571 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:40:55.923580 kernel: cpuidle: using governor menu
Jan 13 20:40:55.923590 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:40:55.923599 kernel: dca service started, version 1.12.1
Jan 13 20:40:55.923609 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 20:40:55.923619 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 20:40:55.923628 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:40:55.923640 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:40:55.923650 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:40:55.923660 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:40:55.923669 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:40:55.923679 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:40:55.923688 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:40:55.923698 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:40:55.923707 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:40:55.923717 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:40:55.923729 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:40:55.923739 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 20:40:55.923748 kernel: ACPI: Interpreter enabled Jan 13 20:40:55.925788 kernel: ACPI: PM: (supports S0 S5) Jan 13 20:40:55.925806 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 20:40:55.925816 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 20:40:55.925826 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 20:40:55.925836 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 13 20:40:55.925846 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 20:40:55.926017 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:40:55.926118 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 20:40:55.926211 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 20:40:55.926224 kernel: PCI host bridge to bus 0000:00 Jan 13 20:40:55.926321 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 20:40:55.926406 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 20:40:55.926504 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 20:40:55.926585 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 13 20:40:55.926666 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 20:40:55.926748 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 13 20:40:55.926841 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 20:40:55.926950 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 13 20:40:55.927050 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Jan 13 20:40:55.927147 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Jan 13 20:40:55.927238 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Jan 13 20:40:55.927329 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Jan 13 20:40:55.927420 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 20:40:55.927535 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 13 20:40:55.927654 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Jan 13 20:40:55.929938 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 13 20:40:55.930071 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Jan 13 20:40:55.930176 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 13 20:40:55.930290 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Jan 13 20:40:55.930409 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 13 
20:40:55.930513 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Jan 13 20:40:55.930611 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 13 20:40:55.930708 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Jan 13 20:40:55.930826 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 13 20:40:55.930921 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Jan 13 20:40:55.931016 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 13 20:40:55.931108 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Jan 13 20:40:55.931203 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 13 20:40:55.931298 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Jan 13 20:40:55.931400 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 13 20:40:55.931502 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 13 20:40:55.931593 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Jan 13 20:40:55.931706 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jan 13 20:40:55.931815 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Jan 13 20:40:55.931912 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 13 20:40:55.932009 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 13 20:40:55.932101 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Jan 13 20:40:55.932191 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Jan 13 20:40:55.932287 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 13 20:40:55.932377 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 13 20:40:55.932482 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 13 20:40:55.932573 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Jan 13 20:40:55.932667 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Jan 13 20:40:55.932785 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 13 20:40:55.932880 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 13 20:40:55.932986 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Jan 13 20:40:55.933081 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Jan 13 20:40:55.933180 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 13 20:40:55.933272 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 13 20:40:55.933364 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 13 20:40:55.933472 kernel: pci_bus 0000:02: extended config space not accessible Jan 13 20:40:55.933580 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Jan 13 20:40:55.933681 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Jan 13 20:40:55.933876 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 13 20:40:55.933991 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 13 20:40:55.934098 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 13 20:40:55.934193 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Jan 13 20:40:55.934553 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 13 20:40:55.934656 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 13 20:40:55.936793 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 13 20:40:55.936930 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 13 
20:40:55.937039 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jan 13 20:40:55.937135 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 13 20:40:55.937229 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 13 20:40:55.937320 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 13 20:40:55.937412 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 13 20:40:55.937510 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 13 20:40:55.937601 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 13 20:40:55.937692 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 13 20:40:55.937808 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 13 20:40:55.937898 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 13 20:40:55.937991 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 13 20:40:55.938081 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 13 20:40:55.938171 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 13 20:40:55.938263 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 13 20:40:55.938352 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 13 20:40:55.938442 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 13 20:40:55.938547 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 13 20:40:55.938637 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 13 20:40:55.938727 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 13 20:40:55.938740 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 20:40:55.938751 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 20:40:55.941837 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 20:40:55.941849 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 20:40:55.941859 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 13 20:40:55.941869 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 13 20:40:55.941885 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 13 20:40:55.941895 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 13 20:40:55.941904 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 13 20:40:55.941914 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 13 20:40:55.941924 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 13 20:40:55.941934 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 13 20:40:55.941944 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 13 20:40:55.941954 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 13 20:40:55.941964 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 13 20:40:55.941977 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 13 20:40:55.941987 kernel: iommu: Default domain type: Translated Jan 13 20:40:55.941997 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 20:40:55.942007 kernel: PCI: Using ACPI for IRQ routing Jan 13 20:40:55.942017 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 20:40:55.942026 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 13 20:40:55.942036 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 13 20:40:55.942169 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Jan 13 20:40:55.942271 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 13 20:40:55.942364 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 20:40:55.942378 kernel: vgaarb: loaded Jan 13 20:40:55.942388 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 20:40:55.942398 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:40:55.942408 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:40:55.942418 kernel: pnp: PnP ACPI init Jan 13 20:40:55.942525 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 13 20:40:55.942544 kernel: pnp: PnP ACPI: found 5 devices Jan 13 20:40:55.942554 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 20:40:55.942564 kernel: NET: Registered PF_INET protocol family Jan 13 20:40:55.942574 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:40:55.942584 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 13 20:40:55.942594 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:40:55.942603 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 13 20:40:55.942613 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 20:40:55.942623 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 13 20:40:55.942636 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 13 20:40:55.942646 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 13 20:40:55.942656 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:40:55.942666 kernel: NET: Registered PF_XDP protocol family Jan 13 20:40:55.942769 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 13 20:40:55.942864 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 13 20:40:55.942961 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 13 20:40:55.943058 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 13 20:40:55.943151 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 13 20:40:55.943244 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 13 20:40:55.943342 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 13 20:40:55.943462 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 13 20:40:55.943556 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 13 20:40:55.943652 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 13 20:40:55.943743 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 13 20:40:55.944859 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 13 20:40:55.944951 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 13 20:40:55.945042 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 13 20:40:55.945133 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 13 20:40:55.945224 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 13 20:40:55.945323 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 13 20:40:55.945417 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 13 
20:40:55.945524 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 13 20:40:55.945616 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 13 20:40:55.945708 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 13 20:40:55.945814 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 13 20:40:55.945906 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 13 20:40:55.946000 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 13 20:40:55.946090 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 13 20:40:55.946181 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 13 20:40:55.946271 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 13 20:40:55.946362 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 13 20:40:55.946460 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 13 20:40:55.946552 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 13 20:40:55.946642 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 13 20:40:55.946731 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 13 20:40:55.948847 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 13 20:40:55.948942 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 13 20:40:55.949050 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 13 20:40:55.949142 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 13 20:40:55.949233 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 13 20:40:55.949324 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 13 20:40:55.949416 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 13 20:40:55.949516 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 13 20:40:55.949608 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 13 20:40:55.949708 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 13 20:40:55.949809 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 13 20:40:55.949901 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 13 20:40:55.949992 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 13 20:40:55.950084 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 13 20:40:55.950181 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 13 20:40:55.950274 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 13 20:40:55.950365 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 13 20:40:55.950465 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 13 20:40:55.950554 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 20:40:55.950638 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 20:40:55.950720 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 20:40:55.952826 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 13 20:40:55.952912 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 13 20:40:55.953000 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 13 20:40:55.953095 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 13 20:40:55.953181 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 13 20:40:55.953267 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Jan 13 20:40:55.953361 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 13 20:40:55.953462 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 13 20:40:55.953554 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 13 20:40:55.953639 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 13 20:40:55.953733 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 13 20:40:55.953829 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 13 20:40:55.953914 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 13 20:40:55.954005 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 13 20:40:55.954090 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 13 20:40:55.954184 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 13 20:40:55.954285 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 13 20:40:55.954372 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 13 20:40:55.954466 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 13 20:40:55.956897 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 13 20:40:55.957022 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 13 20:40:55.957116 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 13 20:40:55.957215 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 13 20:40:55.957299 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 13 20:40:55.957383 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 13 20:40:55.957482 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 13 20:40:55.957568 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 13 20:40:55.957654 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 13 20:40:55.957669 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 13 20:40:55.957684 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:40:55.957694 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 20:40:55.957705 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 13 20:40:55.957716 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 13 20:40:55.957727 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns Jan 13 20:40:55.957737 kernel: Initialise system trusted keyrings Jan 13 20:40:55.957747 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 13 20:40:55.957772 kernel: Key type asymmetric registered Jan 13 20:40:55.957786 kernel: Asymmetric key parser 'x509' registered Jan 13 20:40:55.957796 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 20:40:55.957806 kernel: io scheduler mq-deadline registered Jan 13 20:40:55.957817 kernel: io scheduler kyber registered Jan 13 20:40:55.957827 kernel: io scheduler bfq registered Jan 13 20:40:55.957926 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 13 20:40:55.958021 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 13 20:40:55.958113 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:40:55.958207 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 13 20:40:55.958304 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Jan 13 20:40:55.958396 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:40:55.958498 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 13 20:40:55.958590 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 13 20:40:55.958683 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:40:55.960799 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 13 20:40:55.960905 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 13 20:40:55.960998 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:40:55.961092 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 13 20:40:55.961189 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 13 20:40:55.961282 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:40:55.961378 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 13 20:40:55.961479 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 13 20:40:55.961571 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:40:55.961665 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 13 20:40:55.961766 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 13 20:40:55.961859 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:40:55.961955 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 13 20:40:55.962055 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 13 20:40:55.962257 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:40:55.962291 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 20:40:55.962319 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 13 20:40:55.962346 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 13 20:40:55.962370 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:40:55.962394 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 20:40:55.962408 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 20:40:55.962418 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 20:40:55.962429 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 20:40:55.962440 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 20:40:55.962545 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 13 20:40:55.964900 kernel: rtc_cmos 00:03: registered as rtc0 Jan 13 20:40:55.965005 kernel: rtc_cmos 00:03: setting system clock to 2025-01-13T20:40:55 UTC (1736800855) Jan 13 20:40:55.965099 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 13 20:40:55.965113 kernel: intel_pstate: CPU model not supported Jan 13 20:40:55.965124 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:40:55.965135 kernel: Segment Routing with IPv6 Jan 13 20:40:55.965145 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 
20:40:55.965156 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:40:55.965166 kernel: Key type dns_resolver registered Jan 13 20:40:55.965177 kernel: IPI shorthand broadcast: enabled Jan 13 20:40:55.965188 kernel: sched_clock: Marking stable (962002338, 122951885)->(1180940173, -95985950) Jan 13 20:40:55.965202 kernel: registered taskstats version 1 Jan 13 20:40:55.965212 kernel: Loading compiled-in X.509 certificates Jan 13 20:40:55.965222 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e' Jan 13 20:40:55.965232 kernel: Key type .fscrypt registered Jan 13 20:40:55.965243 kernel: Key type fscrypt-provisioning registered Jan 13 20:40:55.965253 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 20:40:55.965263 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:40:55.965274 kernel: ima: No architecture policies found Jan 13 20:40:55.965284 kernel: clk: Disabling unused clocks Jan 13 20:40:55.965297 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 13 20:40:55.965308 kernel: Write protecting the kernel read-only data: 38912k Jan 13 20:40:55.965318 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 13 20:40:55.965329 kernel: Run /init as init process Jan 13 20:40:55.965339 kernel: with arguments: Jan 13 20:40:55.965349 kernel: /init Jan 13 20:40:55.965359 kernel: with environment: Jan 13 20:40:55.965369 kernel: HOME=/ Jan 13 20:40:55.965379 kernel: TERM=linux Jan 13 20:40:55.965389 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:40:55.965405 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:40:55.965419 systemd[1]: Detected virtualization kvm. Jan 13 20:40:55.965431 systemd[1]: Detected architecture x86-64. Jan 13 20:40:55.965441 systemd[1]: Running in initrd. Jan 13 20:40:55.965462 systemd[1]: No hostname configured, using default hostname. Jan 13 20:40:55.965472 systemd[1]: Hostname set to . Jan 13 20:40:55.965486 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:40:55.965497 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:40:55.965508 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:40:55.965519 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:40:55.965530 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:40:55.965542 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:40:55.965553 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:40:55.965564 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:40:55.965579 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:40:55.965590 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:40:55.965601 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 13 20:40:55.965612 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:40:55.965623 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:40:55.965634 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:40:55.965645 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:40:55.965656 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:40:55.965669 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:40:55.965680 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:40:55.965691 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:40:55.965702 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:40:55.965714 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:40:55.965724 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:40:55.965735 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:40:55.965746 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:40:55.965771 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:40:55.965782 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:40:55.965793 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:40:55.965804 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:40:55.965815 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:40:55.965826 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:40:55.965836 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:40:55.965847 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:40:55.965858 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:40:55.965901 systemd-journald[201]: Collecting audit messages is disabled. Jan 13 20:40:55.965928 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:40:55.965943 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:40:55.965955 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:40:55.965966 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:40:55.965978 systemd-journald[201]: Journal started Jan 13 20:40:55.966005 systemd-journald[201]: Runtime Journal (/run/log/journal/27758480d4f746dd8b5c24b6aa485e9f) is 4.7M, max 37.9M, 33.2M free. Jan 13 20:40:55.928010 systemd-modules-load[202]: Inserted module 'overlay' Jan 13 20:40:55.996390 kernel: Bridge firewalling registered Jan 13 20:40:55.996417 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:40:55.967097 systemd-modules-load[202]: Inserted module 'br_netfilter' Jan 13 20:40:55.999804 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:40:56.000878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:40:56.007903 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:40:56.010927 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 13 20:40:56.012892 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:40:56.016893 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:40:56.027582 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:40:56.033417 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:40:56.039908 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:40:56.041826 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:40:56.042996 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:40:56.046464 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:40:56.052549 dracut-cmdline[232]: dracut-dracut-053 Jan 13 20:40:56.057796 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 13 20:40:56.081822 systemd-resolved[238]: Positive Trust Anchors: Jan 13 20:40:56.081840 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:40:56.081879 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:40:56.085428 systemd-resolved[238]: Defaulting to hostname 'linux'. Jan 13 20:40:56.086602 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:40:56.088227 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:40:56.159818 kernel: SCSI subsystem initialized Jan 13 20:40:56.169801 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:40:56.180810 kernel: iscsi: registered transport (tcp) Jan 13 20:40:56.203810 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:40:56.203949 kernel: QLogic iSCSI HBA Driver Jan 13 20:40:56.255155 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:40:56.264262 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:40:56.295300 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 20:40:56.295373 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:40:56.296669 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:40:56.362878 kernel: raid6: avx512x4 gen() 17657 MB/s Jan 13 20:40:56.368805 kernel: raid6: avx512x2 gen() 17623 MB/s Jan 13 20:40:56.385820 kernel: raid6: avx512x1 gen() 17666 MB/s Jan 13 20:40:56.402817 kernel: raid6: avx2x4 gen() 17551 MB/s Jan 13 20:40:56.419830 kernel: raid6: avx2x2 gen() 17541 MB/s Jan 13 20:40:56.436842 kernel: raid6: avx2x1 gen() 13577 MB/s Jan 13 20:40:56.436959 kernel: raid6: using algorithm avx512x1 gen() 17666 MB/s Jan 13 20:40:56.454916 kernel: raid6: .... xor() 19281 MB/s, rmw enabled Jan 13 20:40:56.454999 kernel: raid6: using avx512x2 recovery algorithm Jan 13 20:40:56.476821 kernel: xor: automatically using best checksumming function avx Jan 13 20:40:56.638020 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:40:56.650567 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:40:56.655901 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:40:56.688906 systemd-udevd[419]: Using default interface naming scheme 'v255'. Jan 13 20:40:56.694146 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:40:56.704969 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:40:56.724354 dracut-pre-trigger[431]: rd.md=0: removing MD RAID activation Jan 13 20:40:56.765821 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:40:56.777161 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:40:56.841256 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:40:56.848362 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:40:56.868875 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:40:56.870042 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:40:56.871332 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:40:56.872914 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:40:56.880932 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:40:56.901937 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:40:56.940782 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 13 20:40:56.977869 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 13 20:40:56.977994 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 20:40:56.978010 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:40:56.978024 kernel: GPT:17805311 != 125829119 Jan 13 20:40:56.978036 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:40:56.978049 kernel: GPT:17805311 != 125829119 Jan 13 20:40:56.978061 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:40:56.978082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:40:56.960188 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:40:56.960308 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:40:56.960881 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 13 20:40:56.961256 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:40:56.984370 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 20:40:56.984400 kernel: AES CTR mode by8 optimization enabled Jan 13 20:40:56.961370 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:40:56.961782 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:40:56.969004 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:40:57.080100 kernel: ACPI: bus type USB registered Jan 13 20:40:57.080144 kernel: usbcore: registered new interface driver usbfs Jan 13 20:40:57.080159 kernel: usbcore: registered new interface driver hub Jan 13 20:40:57.080172 kernel: usbcore: registered new device driver usb Jan 13 20:40:57.080184 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 13 20:40:57.080391 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 13 20:40:57.080514 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (477) Jan 13 20:40:57.080528 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 13 20:40:57.080640 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 13 20:40:57.080774 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 13 20:40:57.080906 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 13 20:40:57.081019 kernel: hub 1-0:1.0: USB hub found Jan 13 20:40:57.081164 kernel: hub 1-0:1.0: 4 ports detected Jan 13 20:40:57.081288 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 13 20:40:57.081498 kernel: hub 2-0:1.0: USB hub found Jan 13 20:40:57.081640 kernel: hub 2-0:1.0: 4 ports detected Jan 13 20:40:57.082468 kernel: libata version 3.00 loaded. Jan 13 20:40:57.082490 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (474) Jan 13 20:40:57.082503 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 20:40:57.103457 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 20:40:57.103479 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 20:40:57.103614 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 20:40:57.103729 kernel: scsi host0: ahci Jan 13 20:40:57.103878 kernel: scsi host1: ahci Jan 13 20:40:57.103985 kernel: scsi host2: ahci Jan 13 20:40:57.104094 kernel: scsi host3: ahci Jan 13 20:40:57.104197 kernel: scsi host4: ahci Jan 13 20:40:57.104299 kernel: scsi host5: ahci Jan 13 20:40:57.104409 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jan 13 20:40:57.104428 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jan 13 20:40:57.104442 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jan 13 20:40:57.104454 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jan 13 20:40:57.104467 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jan 13 20:40:57.104480 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jan 13 20:40:57.090344 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:40:57.102078 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 13 20:40:57.113505 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 20:40:57.118699 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:40:57.123017 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 20:40:57.123514 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 20:40:57.132074 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:40:57.134917 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:40:57.141903 disk-uuid[570]: Primary Header is updated. Jan 13 20:40:57.141903 disk-uuid[570]: Secondary Entries is updated. Jan 13 20:40:57.141903 disk-uuid[570]: Secondary Header is updated. Jan 13 20:40:57.149813 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:40:57.153956 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:40:57.179565 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:40:57.279821 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 13 20:40:57.420235 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 20:40:57.420302 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 13 20:40:57.422390 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 20:40:57.423770 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 20:40:57.423802 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 20:40:57.428358 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 20:40:57.429373 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 20:40:57.437775 kernel: usbcore: registered new interface driver usbhid Jan 13 20:40:57.437829 kernel: usbhid: USB HID core driver Jan 13 20:40:57.442982 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 13 20:40:57.443041 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 13 20:40:58.163849 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:40:58.167548 disk-uuid[571]: The operation has completed successfully. Jan 13 20:40:58.200390 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:40:58.200509 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:40:58.223895 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:40:58.242898 sh[590]: Success Jan 13 20:40:58.262827 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 13 20:40:58.324917 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:40:58.336749 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:40:58.337540 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 20:40:58.364083 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a Jan 13 20:40:58.364187 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:40:58.366165 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:40:58.366233 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:40:58.367117 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:40:58.375578 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:40:58.377733 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:40:58.390053 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:40:58.393989 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:40:58.400883 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:40:58.400931 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:40:58.400945 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:40:58.408783 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:40:58.421184 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:40:58.422784 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:40:58.429016 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:40:58.433938 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:40:58.532844 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:40:58.540992 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:40:58.566028 ignition[670]: Ignition 2.20.0 Jan 13 20:40:58.566045 ignition[670]: Stage: fetch-offline Jan 13 20:40:58.566094 ignition[670]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:58.566103 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 20:40:58.566207 ignition[670]: parsed url from cmdline: "" Jan 13 20:40:58.566211 ignition[670]: no config URL provided Jan 13 20:40:58.566216 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:40:58.571029 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:40:58.566223 ignition[670]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:40:58.566228 ignition[670]: failed to fetch config: resource requires networking Jan 13 20:40:58.567639 ignition[670]: Ignition finished successfully Jan 13 20:40:58.576349 systemd-networkd[780]: lo: Link UP Jan 13 20:40:58.576363 systemd-networkd[780]: lo: Gained carrier Jan 13 20:40:58.577820 systemd-networkd[780]: Enumeration completed Jan 13 20:40:58.577922 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:40:58.578563 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:40:58.578567 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 13 20:40:58.579484 systemd-networkd[780]: eth0: Link UP Jan 13 20:40:58.579488 systemd-networkd[780]: eth0: Gained carrier Jan 13 20:40:58.579496 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:40:58.580058 systemd[1]: Reached target network.target - Network. Jan 13 20:40:58.587998 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 20:40:58.592862 systemd-networkd[780]: eth0: DHCPv4 address 10.244.100.150/30, gateway 10.244.100.149 acquired from 10.244.100.149 Jan 13 20:40:58.611295 ignition[783]: Ignition 2.20.0 Jan 13 20:40:58.611309 ignition[783]: Stage: fetch Jan 13 20:40:58.611569 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:58.611589 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 20:40:58.611696 ignition[783]: parsed url from cmdline: "" Jan 13 20:40:58.611701 ignition[783]: no config URL provided Jan 13 20:40:58.611706 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:40:58.611714 ignition[783]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:40:58.611862 ignition[783]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 13 20:40:58.611887 ignition[783]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 13 20:40:58.611931 ignition[783]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 13 20:40:58.630179 ignition[783]: GET result: OK Jan 13 20:40:58.630533 ignition[783]: parsing config with SHA512: 9d312f408d9bb8b190caad516aab1db77d9d8c5fe79a6cc8f0c2085589f3afc1b0f21498ec3a80470f300ed050a06e045bfc3c8a65162dc90ed679b10426272d Jan 13 20:40:58.644088 unknown[783]: fetched base config from "system" Jan 13 20:40:58.645470 ignition[783]: fetch: fetch complete Jan 13 20:40:58.644114 unknown[783]: fetched base config from "system" Jan 13 20:40:58.645509 ignition[783]: fetch: fetch passed Jan 13 20:40:58.644129 unknown[783]: fetched user config from "openstack" Jan 13 20:40:58.645620 ignition[783]: Ignition finished successfully Jan 13 20:40:58.648784 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 20:40:58.660097 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:40:58.684211 ignition[790]: Ignition 2.20.0 Jan 13 20:40:58.684240 ignition[790]: Stage: kargs Jan 13 20:40:58.684652 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:58.684675 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 20:40:58.687471 ignition[790]: kargs: kargs passed Jan 13 20:40:58.687583 ignition[790]: Ignition finished successfully Jan 13 20:40:58.689879 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:40:58.699947 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:40:58.715957 ignition[796]: Ignition 2.20.0 Jan 13 20:40:58.715972 ignition[796]: Stage: disks Jan 13 20:40:58.716152 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:58.716165 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 20:40:58.718057 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:40:58.717096 ignition[796]: disks: disks passed Jan 13 20:40:58.717148 ignition[796]: Ignition finished successfully Jan 13 20:40:58.719478 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 13 20:40:58.720284 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:40:58.721031 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:40:58.721838 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:40:58.722654 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:40:58.727927 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:40:58.743443 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 20:40:58.746626 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:40:58.751883 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:40:58.850010 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none. Jan 13 20:40:58.850686 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:40:58.851795 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:40:58.862017 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:40:58.866020 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:40:58.866805 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:40:58.869968 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 13 20:40:58.870576 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:40:58.870615 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:40:58.879798 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812) Jan 13 20:40:58.887954 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:40:58.888036 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:40:58.888051 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:40:58.893830 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:40:58.895100 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:40:58.897075 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:40:58.907152 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:40:58.960345 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:40:58.970981 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:40:58.980784 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:40:58.992342 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:40:59.122553 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:40:59.132994 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:40:59.139880 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:40:59.153840 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:40:59.180355 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 13 20:40:59.182199 ignition[928]: INFO : Ignition 2.20.0 Jan 13 20:40:59.182199 ignition[928]: INFO : Stage: mount Jan 13 20:40:59.183187 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:59.183187 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 20:40:59.184254 ignition[928]: INFO : mount: mount passed Jan 13 20:40:59.184254 ignition[928]: INFO : Ignition finished successfully Jan 13 20:40:59.184469 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:40:59.363667 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:41:00.338180 systemd-networkd[780]: eth0: Gained IPv6LL Jan 13 20:41:01.849553 systemd-networkd[780]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:1925:24:19ff:fef4:6496/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:1925:24:19ff:fef4:6496/64 assigned by NDisc. Jan 13 20:41:01.849567 systemd-networkd[780]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 13 20:41:06.034360 coreos-metadata[814]: Jan 13 20:41:06.034 WARN failed to locate config-drive, using the metadata service API instead Jan 13 20:41:06.058505 coreos-metadata[814]: Jan 13 20:41:06.058 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 20:41:06.072402 coreos-metadata[814]: Jan 13 20:41:06.072 INFO Fetch successful Jan 13 20:41:06.073441 coreos-metadata[814]: Jan 13 20:41:06.072 INFO wrote hostname srv-rxqun.gb1.brightbox.com to /sysroot/etc/hostname Jan 13 20:41:06.075615 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 13 20:41:06.075844 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 13 20:41:06.087912 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:41:06.098873 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:41:06.112812 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (946) Jan 13 20:41:06.114835 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:41:06.114855 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:41:06.115912 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:41:06.120069 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:41:06.122133 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:41:06.147175 ignition[963]: INFO : Ignition 2.20.0 Jan 13 20:41:06.147175 ignition[963]: INFO : Stage: files Jan 13 20:41:06.148208 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:41:06.148208 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 20:41:06.149166 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:41:06.149654 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:41:06.149654 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:41:06.152492 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:41:06.153202 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:41:06.154348 unknown[963]: wrote ssh authorized keys file for user: core Jan 13 20:41:06.156015 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:41:06.156964 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:41:06.161048 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 20:41:06.405792 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 20:41:06.851388 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:41:06.851388 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:41:06.854796 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 20:41:07.503431 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 20:41:08.043910 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:41:08.044845 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:41:08.044845 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:41:08.044845 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:41:08.044845 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:41:08.044845 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:41:08.044845 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:41:08.044845 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:41:08.044845 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:41:08.050191 
ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:41:08.050191 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:41:08.050191 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:41:08.050191 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:41:08.050191 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:41:08.050191 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 20:41:08.659707 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 20:41:11.153722 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:41:11.153722 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 20:41:11.161053 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:41:11.161053 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:41:11.161053 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 20:41:11.161053 ignition[963]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:41:11.161053 ignition[963]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:41:11.161053 ignition[963]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:41:11.161053 ignition[963]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:41:11.161053 ignition[963]: INFO : files: files passed Jan 13 20:41:11.161053 ignition[963]: INFO : Ignition finished successfully Jan 13 20:41:11.161406 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:41:11.172032 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:41:11.175922 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:41:11.179056 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:41:11.179176 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 13 20:41:11.193115 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:41:11.193115 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:41:11.196869 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:41:11.198429 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:41:11.199510 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:41:11.203926 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:41:11.242248 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:41:11.242382 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:41:11.243484 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:41:11.244138 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:41:11.245223 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:41:11.246445 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:41:11.285347 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:41:11.292008 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:41:11.312048 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:41:11.313412 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:41:11.313969 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:41:11.314961 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:41:11.315103 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:41:11.316295 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:41:11.316823 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:41:11.317826 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:41:11.318831 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:41:11.319684 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:41:11.320772 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:41:11.322485 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:41:11.323934 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:41:11.325144 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:41:11.326309 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:41:11.327456 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:41:11.327837 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:41:11.329507 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:41:11.330865 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:41:11.331605 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:41:11.332523 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 13 20:41:11.333744 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:41:11.333941 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:41:11.335172 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:41:11.335331 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:41:11.339731 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:41:11.339888 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:41:11.346009 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:41:11.346582 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:41:11.346732 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:41:11.350992 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:41:11.351422 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:41:11.351542 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:41:11.352301 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:41:11.352411 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:41:11.363946 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:41:11.364058 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:41:11.375007 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:41:11.377750 ignition[1017]: INFO : Ignition 2.20.0 Jan 13 20:41:11.378384 ignition[1017]: INFO : Stage: umount Jan 13 20:41:11.378384 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:41:11.378384 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 20:41:11.380465 ignition[1017]: INFO : umount: umount passed Jan 13 20:41:11.380465 ignition[1017]: INFO : Ignition finished successfully Jan 13 20:41:11.379629 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:41:11.379751 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:41:11.381286 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:41:11.381396 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:41:11.383271 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:41:11.383368 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:41:11.384226 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:41:11.384277 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:41:11.384929 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:41:11.384967 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:41:11.385635 systemd[1]: Stopped target network.target - Network. Jan 13 20:41:11.386383 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:41:11.386430 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:41:11.387149 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:41:11.387833 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:41:11.389822 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 13 20:41:11.390378 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:41:11.391173 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:41:11.392075 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:41:11.392117 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:41:11.392738 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:41:11.392829 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:41:11.393435 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:41:11.393478 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:41:11.394125 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:41:11.394163 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:41:11.394829 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:41:11.394870 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:41:11.395689 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:41:11.397200 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:41:11.399858 systemd-networkd[780]: eth0: DHCPv6 lease lost Jan 13 20:41:11.401087 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:41:11.402326 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:41:11.403204 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:41:11.403316 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:41:11.406255 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:41:11.406781 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:41:11.411883 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:41:11.412261 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:41:11.412310 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:41:11.412743 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:41:11.412795 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:41:11.415059 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:41:11.415101 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:41:11.415669 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:41:11.415704 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:41:11.416197 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:41:11.431261 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:41:11.431546 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:41:11.435219 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:41:11.435452 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:41:11.436981 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:41:11.437091 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:41:11.437557 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 13 20:41:11.437592 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:41:11.438560 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:41:11.438606 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:41:11.440116 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:41:11.440160 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:41:11.441268 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:41:11.441308 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:41:11.446981 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:41:11.447418 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:41:11.447471 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:41:11.448325 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:41:11.448385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:41:11.456854 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:41:11.456960 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:41:11.458982 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:41:11.463975 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:41:11.474595 systemd[1]: Switching root. Jan 13 20:41:11.504136 systemd-journald[201]: Journal stopped Jan 13 20:41:12.485846 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Jan 13 20:41:12.485948 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:41:12.485968 kernel: SELinux: policy capability open_perms=1 Jan 13 20:41:12.485991 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:41:12.486005 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:41:12.486018 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:41:12.486032 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:41:12.486045 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:41:12.486063 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:41:12.486077 kernel: audit: type=1403 audit(1736800871.614:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:41:12.486096 systemd[1]: Successfully loaded SELinux policy in 38.956ms. Jan 13 20:41:12.486150 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.873ms. Jan 13 20:41:12.486167 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:41:12.486186 systemd[1]: Detected virtualization kvm. Jan 13 20:41:12.486201 systemd[1]: Detected architecture x86-64. Jan 13 20:41:12.486215 systemd[1]: Detected first boot. Jan 13 20:41:12.486229 systemd[1]: Hostname set to . Jan 13 20:41:12.486244 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:41:12.486258 zram_generator::config[1060]: No configuration found. Jan 13 20:41:12.486290 systemd[1]: Populated /etc with preset unit settings. 
Jan 13 20:41:12.486305 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:41:12.486319 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:41:12.486336 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:41:12.486351 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:41:12.486366 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:41:12.486384 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:41:12.486398 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:41:12.486413 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:41:12.486431 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:41:12.486446 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:41:12.486461 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:41:12.486475 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:41:12.486489 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:41:12.486504 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:41:12.486518 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:41:12.486533 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:41:12.486551 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:41:12.486582 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:41:12.486596 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:41:12.486612 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:41:12.486626 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:41:12.486647 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:41:12.486663 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:41:12.486679 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:41:12.486697 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:41:12.486712 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:41:12.486726 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:41:12.486741 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:41:12.486773 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:41:12.486793 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:41:12.486810 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:41:12.486825 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:41:12.486840 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:41:12.486859 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 13 20:41:12.486874 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:41:12.486888 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:41:12.486903 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:41:12.486917 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:41:12.486935 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:41:12.486949 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:41:12.486965 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:41:12.486979 systemd[1]: Reached target machines.target - Containers. Jan 13 20:41:12.486993 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:41:12.487008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:41:12.487023 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:41:12.487038 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:41:12.487056 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:41:12.487077 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:41:12.487092 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:41:12.487106 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:41:12.487120 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:41:12.487135 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:41:12.487154 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:41:12.487168 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:41:12.487189 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:41:12.487204 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:41:12.487219 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:41:12.487233 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:41:12.487247 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:41:12.487261 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:41:12.487275 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:41:12.487296 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:41:12.487311 systemd[1]: Stopped verity-setup.service. Jan 13 20:41:12.487329 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:41:12.487343 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:41:12.487358 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:41:12.487373 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 13 20:41:12.487388 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:41:12.487410 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:41:12.487425 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:41:12.487441 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:41:12.487462 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:41:12.487476 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:41:12.487491 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:41:12.487505 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:41:12.487520 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:41:12.487541 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:41:12.487562 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:41:12.487577 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:41:12.487622 systemd-journald[1149]: Collecting audit messages is disabled. Jan 13 20:41:12.487660 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:41:12.487675 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:41:12.487690 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:41:12.487704 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:41:12.487719 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:41:12.487733 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:41:12.487749 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:41:12.490850 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:41:12.490883 systemd-journald[1149]: Journal started Jan 13 20:41:12.490921 systemd-journald[1149]: Runtime Journal (/run/log/journal/27758480d4f746dd8b5c24b6aa485e9f) is 4.7M, max 37.9M, 33.2M free. Jan 13 20:41:12.190511 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:41:12.205612 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:41:12.206084 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:41:12.496771 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:41:12.503512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:41:12.512424 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:41:12.512510 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:41:12.521780 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:41:12.521869 kernel: loop: module loaded Jan 13 20:41:12.528303 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:41:12.535773 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 13 20:41:12.542232 kernel: fuse: init (API version 7.39) Jan 13 20:41:12.551050 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:41:12.552713 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:41:12.553549 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:41:12.553837 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:41:12.555416 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:41:12.558967 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:41:12.559534 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:41:12.587847 kernel: loop0: detected capacity change from 0 to 8 Jan 13 20:41:12.594801 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:41:12.601968 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:41:12.609122 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:41:12.627386 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:41:12.629139 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:41:12.629633 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:41:12.633210 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:41:12.636023 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:41:12.649885 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:41:12.678700 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:41:12.681786 kernel: loop1: detected capacity change from 0 to 138184 Jan 13 20:41:12.687782 kernel: ACPI: bus type drm_connector registered Jan 13 20:41:12.686952 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:41:12.688872 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:41:12.692335 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:41:12.692742 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:41:12.711079 systemd-journald[1149]: Time spent on flushing to /var/log/journal/27758480d4f746dd8b5c24b6aa485e9f is 63.546ms for 1163 entries. Jan 13 20:41:12.711079 systemd-journald[1149]: System Journal (/var/log/journal/27758480d4f746dd8b5c24b6aa485e9f) is 8.0M, max 584.8M, 576.8M free. Jan 13 20:41:12.805451 systemd-journald[1149]: Received client request to flush runtime journal. Jan 13 20:41:12.805597 kernel: loop2: detected capacity change from 0 to 211296 Jan 13 20:41:12.805669 kernel: loop3: detected capacity change from 0 to 141000 Jan 13 20:41:12.779204 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:41:12.790094 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:41:12.821447 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 13 20:41:12.840785 kernel: loop4: detected capacity change from 0 to 8 Jan 13 20:41:12.857780 kernel: loop5: detected capacity change from 0 to 138184 Jan 13 20:41:12.887791 kernel: loop6: detected capacity change from 0 to 211296 Jan 13 20:41:12.888698 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:41:12.896574 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:41:12.904775 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Jan 13 20:41:12.905793 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Jan 13 20:41:12.908850 udevadm[1217]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:41:12.926828 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:41:12.940225 kernel: loop7: detected capacity change from 0 to 141000 Jan 13 20:41:12.952254 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 13 20:41:12.952795 (sd-merge)[1215]: Merged extensions into '/usr'. Jan 13 20:41:12.957932 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:41:12.957950 systemd[1]: Reloading... Jan 13 20:41:13.081778 zram_generator::config[1244]: No configuration found. Jan 13 20:41:13.134860 ldconfig[1167]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:41:13.276916 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:41:13.331789 systemd[1]: Reloading finished in 373 ms. Jan 13 20:41:13.361276 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:41:13.362387 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:41:13.376194 systemd[1]: Starting ensure-sysext.service... Jan 13 20:41:13.379571 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:41:13.400832 systemd[1]: Reloading requested from client PID 1300 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:41:13.400866 systemd[1]: Reloading... Jan 13 20:41:13.452695 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:41:13.457128 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:41:13.461954 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:41:13.462289 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Jan 13 20:41:13.462355 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Jan 13 20:41:13.468778 zram_generator::config[1324]: No configuration found. Jan 13 20:41:13.481255 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:41:13.481269 systemd-tmpfiles[1301]: Skipping /boot Jan 13 20:41:13.517645 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 13 20:41:13.517660 systemd-tmpfiles[1301]: Skipping /boot Jan 13 20:41:13.656694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:41:13.709062 systemd[1]: Reloading finished in 307 ms. Jan 13 20:41:13.723623 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:41:13.730088 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:41:13.746945 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:41:13.749944 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:41:13.762049 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:41:13.766336 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:41:13.774970 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:41:13.777983 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:41:13.784775 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:41:13.784988 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:41:13.792314 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:41:13.798117 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:41:13.801061 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:41:13.801599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:41:13.801728 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:41:13.806084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:41:13.806260 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:41:13.809747 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:41:13.810089 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:41:13.810263 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:41:13.818098 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:41:13.818806 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:41:13.826482 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:41:13.826812 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:41:13.835099 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:41:13.846471 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 13 20:41:13.847045 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:41:13.847234 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:41:13.848501 systemd[1]: Finished ensure-sysext.service. Jan 13 20:41:13.850132 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:41:13.851278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:41:13.852079 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:41:13.853430 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:41:13.853899 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:41:13.857199 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:41:13.868490 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:41:13.868658 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:41:13.877898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:41:13.878475 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:41:13.882686 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:41:13.882752 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:41:13.894977 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:41:13.899393 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:41:13.902200 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:41:13.903079 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:41:13.915668 systemd-udevd[1395]: Using default interface naming scheme 'v255'. Jan 13 20:41:13.921233 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:41:13.943635 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:41:13.946017 augenrules[1432]: No rules Jan 13 20:41:13.944475 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:41:13.944900 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:41:13.949207 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:41:13.960043 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:41:14.068629 systemd-networkd[1443]: lo: Link UP Jan 13 20:41:14.068641 systemd-networkd[1443]: lo: Gained carrier Jan 13 20:41:14.069321 systemd-networkd[1443]: Enumeration completed Jan 13 20:41:14.069422 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:41:14.084056 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:41:14.104343 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:41:14.105495 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 13 20:41:14.111470 systemd-resolved[1394]: Positive Trust Anchors: Jan 13 20:41:14.111511 systemd-resolved[1394]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:41:14.111556 systemd-resolved[1394]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:41:14.117022 systemd-resolved[1394]: Using system hostname 'srv-rxqun.gb1.brightbox.com'. Jan 13 20:41:14.119473 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:41:14.120070 systemd[1]: Reached target network.target - Network. Jan 13 20:41:14.120623 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:41:14.139686 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:41:14.173813 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1454) Jan 13 20:41:14.225078 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:41:14.229856 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 20:41:14.233379 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:41:14.233387 systemd-networkd[1443]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:41:14.234222 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:41:14.238854 systemd-networkd[1443]: eth0: Link UP Jan 13 20:41:14.238989 systemd-networkd[1443]: eth0: Gained carrier Jan 13 20:41:14.239329 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:41:14.241777 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:41:14.241969 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:41:14.258937 systemd-networkd[1443]: eth0: DHCPv4 address 10.244.100.150/30, gateway 10.244.100.149 acquired from 10.244.100.149 Jan 13 20:41:14.262559 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Jan 13 20:41:14.274542 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:41:14.289787 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 20:41:14.294742 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 20:41:14.295996 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 20:41:14.321798 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 13 20:41:14.352087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:41:14.492289 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jan 13 20:41:14.515571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:41:14.521993 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:41:14.547555 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:41:14.569256 systemd-timesyncd[1421]: Contacted time server 217.144.90.26:123 (0.flatcar.pool.ntp.org). Jan 13 20:41:14.569340 systemd-timesyncd[1421]: Initial clock synchronization to Mon 2025-01-13 20:41:14.692704 UTC. Jan 13 20:41:14.575172 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:41:14.576467 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:41:14.576948 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:41:14.577472 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:41:14.577950 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:41:14.578589 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:41:14.579180 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:41:14.579618 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:41:14.580215 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:41:14.580254 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:41:14.580605 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:41:14.582100 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:41:14.584041 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:41:14.590188 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:41:14.594577 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:41:14.597165 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:41:14.597842 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:41:14.598355 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:41:14.598923 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:41:14.598962 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:41:14.600932 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:41:14.603961 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:41:14.611773 lvm[1483]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:41:14.611906 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:41:14.613884 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:41:14.620966 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:41:14.621429 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:41:14.627953 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 13 20:41:14.638907 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:41:14.642931 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:41:14.649983 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:41:14.660942 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:41:14.661926 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:41:14.677337 dbus-daemon[1486]: [system] SELinux support is enabled Jan 13 20:41:14.678197 extend-filesystems[1488]: Found loop4 Jan 13 20:41:14.678197 extend-filesystems[1488]: Found loop5 Jan 13 20:41:14.678197 extend-filesystems[1488]: Found loop6 Jan 13 20:41:14.678197 extend-filesystems[1488]: Found loop7 Jan 13 20:41:14.678197 extend-filesystems[1488]: Found vda Jan 13 20:41:14.678197 extend-filesystems[1488]: Found vda1 Jan 13 20:41:14.678197 extend-filesystems[1488]: Found vda2 Jan 13 20:41:14.678197 extend-filesystems[1488]: Found vda3 Jan 13 20:41:14.678197 extend-filesystems[1488]: Found usr Jan 13 20:41:14.678197 extend-filesystems[1488]: Found vda4 Jan 13 20:41:14.678197 extend-filesystems[1488]: Found vda6 Jan 13 20:41:14.678197 extend-filesystems[1488]: Found vda7 Jan 13 20:41:14.678197 extend-filesystems[1488]: Found vda9 Jan 13 20:41:14.678197 extend-filesystems[1488]: Checking size of /dev/vda9 Jan 13 20:41:14.664068 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:41:14.713818 jq[1487]: false Jan 13 20:41:14.689876 dbus-daemon[1486]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1443 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 20:41:14.666916 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:41:14.703995 dbus-daemon[1486]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 20:41:14.674903 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:41:14.677829 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:41:14.684838 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:41:14.691114 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:41:14.691327 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:41:14.708067 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:41:14.708097 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:41:14.721308 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:41:14.730783 jq[1498]: true Jan 13 20:41:14.736906 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 13 20:41:14.737389 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:41:14.737417 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:41:14.738433 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:41:14.738629 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:41:14.748153 extend-filesystems[1488]: Resized partition /dev/vda9 Jan 13 20:41:14.747307 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:41:14.747814 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:41:14.759783 extend-filesystems[1522]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:41:14.770157 tar[1504]: linux-amd64/helm Jan 13 20:41:14.791831 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 13 20:41:14.791919 update_engine[1497]: I20250113 20:41:14.776597 1497 main.cc:92] Flatcar Update Engine starting Jan 13 20:41:14.798783 update_engine[1497]: I20250113 20:41:14.796422 1497 update_check_scheduler.cc:74] Next update check in 9m14s Jan 13 20:41:14.797501 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:41:14.807338 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:41:14.818779 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1453) Jan 13 20:41:14.819152 jq[1521]: true Jan 13 20:41:14.973035 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 13 20:41:14.980781 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 20:41:14.980781 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 13 20:41:14.980781 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 13 20:41:14.988859 bash[1548]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:41:14.982445 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:41:14.989043 extend-filesystems[1488]: Resized filesystem in /dev/vda9 Jan 13 20:41:14.983824 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:41:14.986370 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:41:14.998986 systemd[1]: Starting sshkeys.service... Jan 13 20:41:15.020935 systemd-logind[1496]: Watching system buttons on /dev/input/event2 (Power Button) Jan 13 20:41:15.023835 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:41:15.026992 systemd-logind[1496]: New seat seat0. Jan 13 20:41:15.028557 dbus-daemon[1486]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:41:15.028718 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 20:41:15.030066 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:41:15.035691 dbus-daemon[1486]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1520 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:41:15.046221 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 13 20:41:15.051693 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:41:15.063594 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:41:15.102561 polkitd[1553]: Started polkitd version 121 Jan 13 20:41:15.130392 polkitd[1553]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:41:15.130471 polkitd[1553]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:41:15.135441 polkitd[1553]: Finished loading, compiling and executing 2 rules Jan 13 20:41:15.137530 dbus-daemon[1486]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:41:15.137784 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 20:41:15.139413 polkitd[1553]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:41:15.165733 locksmithd[1527]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:41:15.173527 systemd-hostnamed[1520]: Hostname set to (static) Jan 13 20:41:15.207438 containerd[1511]: time="2025-01-13T20:41:15.207335842Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:41:15.260789 containerd[1511]: time="2025-01-13T20:41:15.260466890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:41:15.265803 containerd[1511]: time="2025-01-13T20:41:15.264031348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:41:15.265803 containerd[1511]: time="2025-01-13T20:41:15.264072021Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:41:15.265803 containerd[1511]: time="2025-01-13T20:41:15.264106253Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:41:15.265803 containerd[1511]: time="2025-01-13T20:41:15.264281724Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:41:15.265803 containerd[1511]: time="2025-01-13T20:41:15.264297484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:41:15.265803 containerd[1511]: time="2025-01-13T20:41:15.264355459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:41:15.265803 containerd[1511]: time="2025-01-13T20:41:15.264367908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:41:15.265803 containerd[1511]: time="2025-01-13T20:41:15.264541352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:41:15.265803 containerd[1511]: time="2025-01-13T20:41:15.264555389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:41:15.265803 containerd[1511]: time="2025-01-13T20:41:15.264569000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:41:15.265803 containerd[1511]: time="2025-01-13T20:41:15.264578382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:41:15.266178 containerd[1511]: time="2025-01-13T20:41:15.264646686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:41:15.266178 containerd[1511]: time="2025-01-13T20:41:15.264880143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:41:15.266178 containerd[1511]: time="2025-01-13T20:41:15.264995912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:41:15.266178 containerd[1511]: time="2025-01-13T20:41:15.265009456Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:41:15.266178 containerd[1511]: time="2025-01-13T20:41:15.265084994Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:41:15.266178 containerd[1511]: time="2025-01-13T20:41:15.265139341Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:41:15.271230 containerd[1511]: time="2025-01-13T20:41:15.271202723Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:41:15.271289 containerd[1511]: time="2025-01-13T20:41:15.271253173Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:41:15.271333 containerd[1511]: time="2025-01-13T20:41:15.271290190Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:41:15.271333 containerd[1511]: time="2025-01-13T20:41:15.271317410Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:41:15.271383 containerd[1511]: time="2025-01-13T20:41:15.271355846Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:41:15.271526 containerd[1511]: time="2025-01-13T20:41:15.271510510Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:41:15.275033 containerd[1511]: time="2025-01-13T20:41:15.274990646Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:41:15.275281 containerd[1511]: time="2025-01-13T20:41:15.275251804Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:41:15.275315 containerd[1511]: time="2025-01-13T20:41:15.275286301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:41:15.275315 containerd[1511]: time="2025-01-13T20:41:15.275302743Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 13 20:41:15.275383 containerd[1511]: time="2025-01-13T20:41:15.275318557Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:41:15.275383 containerd[1511]: time="2025-01-13T20:41:15.275332045Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:41:15.275383 containerd[1511]: time="2025-01-13T20:41:15.275344955Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:41:15.275383 containerd[1511]: time="2025-01-13T20:41:15.275359461Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:41:15.275383 containerd[1511]: time="2025-01-13T20:41:15.275374853Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:41:15.275501 containerd[1511]: time="2025-01-13T20:41:15.275392533Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:41:15.275501 containerd[1511]: time="2025-01-13T20:41:15.275405730Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:41:15.275501 containerd[1511]: time="2025-01-13T20:41:15.275417569Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:41:15.275501 containerd[1511]: time="2025-01-13T20:41:15.275446248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.275501 containerd[1511]: time="2025-01-13T20:41:15.275461011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.275501 containerd[1511]: time="2025-01-13T20:41:15.275472977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.275501 containerd[1511]: time="2025-01-13T20:41:15.275485374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.275501 containerd[1511]: time="2025-01-13T20:41:15.275498422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.275692 containerd[1511]: time="2025-01-13T20:41:15.275523056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.275692 containerd[1511]: time="2025-01-13T20:41:15.275538679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.275692 containerd[1511]: time="2025-01-13T20:41:15.275552116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.275692 containerd[1511]: time="2025-01-13T20:41:15.275564990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.275692 containerd[1511]: time="2025-01-13T20:41:15.275578872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.275692 containerd[1511]: time="2025-01-13T20:41:15.275590353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 13 20:41:15.275692 containerd[1511]: time="2025-01-13T20:41:15.275602768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.275692 containerd[1511]: time="2025-01-13T20:41:15.275614719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.275692 containerd[1511]: time="2025-01-13T20:41:15.275628777Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:41:15.275692 containerd[1511]: time="2025-01-13T20:41:15.275649276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.275692 containerd[1511]: time="2025-01-13T20:41:15.275664265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.275692 containerd[1511]: time="2025-01-13T20:41:15.275676098Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:41:15.275985 containerd[1511]: time="2025-01-13T20:41:15.275718571Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:41:15.275985 containerd[1511]: time="2025-01-13T20:41:15.275737072Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:41:15.275985 containerd[1511]: time="2025-01-13T20:41:15.275747500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:41:15.275985 containerd[1511]: time="2025-01-13T20:41:15.275759609Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:41:15.278778 containerd[1511]: time="2025-01-13T20:41:15.277204677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:41:15.278778 containerd[1511]: time="2025-01-13T20:41:15.277231373Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:41:15.278778 containerd[1511]: time="2025-01-13T20:41:15.277250096Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:41:15.278778 containerd[1511]: time="2025-01-13T20:41:15.277261035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:41:15.278887 containerd[1511]: time="2025-01-13T20:41:15.277582714Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:41:15.278887 containerd[1511]: time="2025-01-13T20:41:15.277632233Z" level=info msg="Connect containerd service" Jan 13 20:41:15.278887 containerd[1511]: time="2025-01-13T20:41:15.277673285Z" level=info msg="using legacy CRI server" Jan 13 20:41:15.278887 containerd[1511]: time="2025-01-13T20:41:15.277685670Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:41:15.279265 containerd[1511]: time="2025-01-13T20:41:15.279244805Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:41:15.280958 containerd[1511]: time="2025-01-13T20:41:15.280933201Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:41:15.282788 
containerd[1511]: time="2025-01-13T20:41:15.281102746Z" level=info msg="Start subscribing containerd event" Jan 13 20:41:15.282788 containerd[1511]: time="2025-01-13T20:41:15.281161268Z" level=info msg="Start recovering state" Jan 13 20:41:15.282788 containerd[1511]: time="2025-01-13T20:41:15.281244499Z" level=info msg="Start event monitor" Jan 13 20:41:15.282788 containerd[1511]: time="2025-01-13T20:41:15.281259115Z" level=info msg="Start snapshots syncer" Jan 13 20:41:15.282788 containerd[1511]: time="2025-01-13T20:41:15.281268473Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:41:15.282788 containerd[1511]: time="2025-01-13T20:41:15.281276988Z" level=info msg="Start streaming server" Jan 13 20:41:15.282788 containerd[1511]: time="2025-01-13T20:41:15.281374131Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:41:15.282788 containerd[1511]: time="2025-01-13T20:41:15.281419428Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:41:15.281569 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:41:15.284809 containerd[1511]: time="2025-01-13T20:41:15.284085893Z" level=info msg="containerd successfully booted in 0.077892s" Jan 13 20:41:15.377912 systemd-networkd[1443]: eth0: Gained IPv6LL Jan 13 20:41:15.382985 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:41:15.387498 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:41:15.398054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:41:15.401344 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:41:15.457916 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:41:15.528833 sshd_keygen[1516]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:41:15.583191 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:41:15.594997 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:41:15.606093 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:41:15.607273 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:41:15.618920 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:41:15.639117 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:41:15.649413 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:41:15.657668 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:41:15.658536 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:41:15.726998 tar[1504]: linux-amd64/LICENSE Jan 13 20:41:15.726998 tar[1504]: linux-amd64/README.md Jan 13 20:41:15.738238 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:41:16.057870 systemd-networkd[1443]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:1925:24:19ff:fef4:6496/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:1925:24:19ff:fef4:6496/64 assigned by NDisc. Jan 13 20:41:16.057880 systemd-networkd[1443]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 13 20:41:16.233115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:41:16.253453 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:41:16.912587 kubelet[1611]: E0113 20:41:16.912460 1611 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:41:16.915313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:41:16.915523 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:41:16.916039 systemd[1]: kubelet.service: Consumed 1.121s CPU time. Jan 13 20:41:20.715417 agetty[1600]: failed to open credentials directory Jan 13 20:41:20.715530 agetty[1598]: failed to open credentials directory Jan 13 20:41:20.740684 login[1598]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 20:41:20.746909 login[1600]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 20:41:20.757718 systemd-logind[1496]: New session 1 of user core. Jan 13 20:41:20.759928 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:41:20.768664 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:41:20.774553 systemd-logind[1496]: New session 2 of user core. Jan 13 20:41:20.795666 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:41:20.803177 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:41:20.808498 (systemd)[1627]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:41:20.925049 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:41:20.925571 systemd[1627]: Queued start job for default target default.target. Jan 13 20:41:20.933305 systemd[1627]: Created slice app.slice - User Application Slice. Jan 13 20:41:20.933340 systemd[1627]: Reached target paths.target - Paths. Jan 13 20:41:20.933356 systemd[1627]: Reached target timers.target - Timers. Jan 13 20:41:20.936063 systemd[1]: Started sshd@0-10.244.100.150:22-139.178.68.195:42688.service - OpenSSH per-connection server daemon (139.178.68.195:42688). Jan 13 20:41:20.937304 systemd[1627]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:41:20.948922 systemd[1627]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:41:20.949040 systemd[1627]: Reached target sockets.target - Sockets. Jan 13 20:41:20.949057 systemd[1627]: Reached target basic.target - Basic System. Jan 13 20:41:20.949096 systemd[1627]: Reached target default.target - Main User Target. Jan 13 20:41:20.949130 systemd[1627]: Startup finished in 133ms. Jan 13 20:41:20.949425 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:41:20.958047 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:41:20.960034 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 13 20:41:21.719396 coreos-metadata[1485]: Jan 13 20:41:21.719 WARN failed to locate config-drive, using the metadata service API instead Jan 13 20:41:21.737549 coreos-metadata[1485]: Jan 13 20:41:21.737 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 13 20:41:21.743830 coreos-metadata[1485]: Jan 13 20:41:21.743 INFO Fetch failed with 404: resource not found Jan 13 20:41:21.743830 coreos-metadata[1485]: Jan 13 20:41:21.743 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 20:41:21.745044 coreos-metadata[1485]: Jan 13 20:41:21.745 INFO Fetch successful Jan 13 20:41:21.745270 coreos-metadata[1485]: Jan 13 20:41:21.745 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 13 20:41:21.756818 coreos-metadata[1485]: Jan 13 20:41:21.756 INFO Fetch successful Jan 13 20:41:21.757213 coreos-metadata[1485]: Jan 13 20:41:21.757 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 13 20:41:21.772254 coreos-metadata[1485]: Jan 13 20:41:21.772 INFO Fetch successful Jan 13 20:41:21.772670 coreos-metadata[1485]: Jan 13 20:41:21.772 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 13 20:41:21.788228 coreos-metadata[1485]: Jan 13 20:41:21.788 INFO Fetch successful Jan 13 20:41:21.788578 coreos-metadata[1485]: Jan 13 20:41:21.788 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 13 20:41:21.805639 coreos-metadata[1485]: Jan 13 20:41:21.805 INFO Fetch successful Jan 13 20:41:21.840437 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:41:21.841954 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:41:21.857252 sshd[1634]: Accepted publickey for core from 139.178.68.195 port 42688 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:41:21.859751 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:21.867722 systemd-logind[1496]: New session 3 of user core. Jan 13 20:41:21.876372 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:41:22.198196 coreos-metadata[1554]: Jan 13 20:41:22.198 WARN failed to locate config-drive, using the metadata service API instead Jan 13 20:41:22.224060 coreos-metadata[1554]: Jan 13 20:41:22.223 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 13 20:41:22.249879 coreos-metadata[1554]: Jan 13 20:41:22.249 INFO Fetch successful Jan 13 20:41:22.250105 coreos-metadata[1554]: Jan 13 20:41:22.250 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:41:22.283355 coreos-metadata[1554]: Jan 13 20:41:22.283 INFO Fetch successful Jan 13 20:41:22.291005 unknown[1554]: wrote ssh authorized keys file for user: core Jan 13 20:41:22.313871 update-ssh-keys[1670]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:41:22.317306 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:41:22.322825 systemd[1]: Finished sshkeys.service. Jan 13 20:41:22.324694 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:41:22.325207 systemd[1]: Startup finished in 1.101s (kernel) + 15.918s (initrd) + 10.748s (userspace) = 27.768s. 
Jan 13 20:41:22.631041 systemd[1]: Started sshd@1-10.244.100.150:22-139.178.68.195:42698.service - OpenSSH per-connection server daemon (139.178.68.195:42698). Jan 13 20:41:23.529407 sshd[1676]: Accepted publickey for core from 139.178.68.195 port 42698 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:41:23.532030 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:23.539514 systemd-logind[1496]: New session 4 of user core. Jan 13 20:41:23.541978 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:41:24.159724 sshd[1678]: Connection closed by 139.178.68.195 port 42698 Jan 13 20:41:24.161397 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:24.170527 systemd[1]: sshd@1-10.244.100.150:22-139.178.68.195:42698.service: Deactivated successfully. Jan 13 20:41:24.174054 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:41:24.175087 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:41:24.176147 systemd-logind[1496]: Removed session 4. Jan 13 20:41:24.325232 systemd[1]: Started sshd@2-10.244.100.150:22-139.178.68.195:42708.service - OpenSSH per-connection server daemon (139.178.68.195:42708). Jan 13 20:41:25.240538 sshd[1683]: Accepted publickey for core from 139.178.68.195 port 42708 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:41:25.243886 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:25.254401 systemd-logind[1496]: New session 5 of user core. Jan 13 20:41:25.266068 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:41:25.864798 sshd[1685]: Connection closed by 139.178.68.195 port 42708 Jan 13 20:41:25.864036 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:25.869911 systemd[1]: sshd@2-10.244.100.150:22-139.178.68.195:42708.service: Deactivated successfully. Jan 13 20:41:25.872879 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:41:25.875410 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:41:25.876648 systemd-logind[1496]: Removed session 5. Jan 13 20:41:26.022622 systemd[1]: Started sshd@3-10.244.100.150:22-139.178.68.195:49720.service - OpenSSH per-connection server daemon (139.178.68.195:49720). Jan 13 20:41:26.922725 sshd[1690]: Accepted publickey for core from 139.178.68.195 port 49720 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:41:26.925711 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:26.927485 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:41:26.937098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:41:26.942193 systemd-logind[1496]: New session 6 of user core. Jan 13 20:41:26.946561 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:41:27.060373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:41:27.069299 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:41:27.135436 kubelet[1701]: E0113 20:41:27.135349 1701 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:41:27.141102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:41:27.141247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:41:27.544296 sshd[1695]: Connection closed by 139.178.68.195 port 49720 Jan 13 20:41:27.545103 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:27.549332 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:41:27.550245 systemd[1]: sshd@3-10.244.100.150:22-139.178.68.195:49720.service: Deactivated successfully. Jan 13 20:41:27.552320 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:41:27.554221 systemd-logind[1496]: Removed session 6. Jan 13 20:41:27.704209 systemd[1]: Started sshd@4-10.244.100.150:22-139.178.68.195:49730.service - OpenSSH per-connection server daemon (139.178.68.195:49730). Jan 13 20:41:28.614747 sshd[1713]: Accepted publickey for core from 139.178.68.195 port 49730 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:41:28.616516 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:28.623424 systemd-logind[1496]: New session 7 of user core. Jan 13 20:41:28.629993 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:41:29.111776 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:41:29.112168 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:41:29.126542 sudo[1716]: pam_unix(sudo:session): session closed for user root Jan 13 20:41:29.271007 sshd[1715]: Connection closed by 139.178.68.195 port 49730 Jan 13 20:41:29.272159 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:29.278691 systemd[1]: sshd@4-10.244.100.150:22-139.178.68.195:49730.service: Deactivated successfully. Jan 13 20:41:29.282735 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:41:29.284674 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:41:29.286320 systemd-logind[1496]: Removed session 7. Jan 13 20:41:29.445615 systemd[1]: Started sshd@5-10.244.100.150:22-139.178.68.195:49740.service - OpenSSH per-connection server daemon (139.178.68.195:49740). Jan 13 20:41:30.344371 sshd[1721]: Accepted publickey for core from 139.178.68.195 port 49740 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:41:30.347303 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:30.356258 systemd-logind[1496]: New session 8 of user core. Jan 13 20:41:30.370997 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 13 20:41:30.820972 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:41:30.821729 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:41:30.825511 sudo[1725]: pam_unix(sudo:session): session closed for user root Jan 13 20:41:30.831566 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:41:30.831956 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:41:30.847103 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:41:30.878682 augenrules[1747]: No rules Jan 13 20:41:30.880294 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:41:30.880505 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:41:30.882076 sudo[1724]: pam_unix(sudo:session): session closed for user root Jan 13 20:41:31.027208 sshd[1723]: Connection closed by 139.178.68.195 port 49740 Jan 13 20:41:31.027026 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:31.034597 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:41:31.036039 systemd[1]: sshd@5-10.244.100.150:22-139.178.68.195:49740.service: Deactivated successfully. Jan 13 20:41:31.039396 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:41:31.041105 systemd-logind[1496]: Removed session 8. Jan 13 20:41:31.179082 systemd[1]: Started sshd@6-10.244.100.150:22-139.178.68.195:49746.service - OpenSSH per-connection server daemon (139.178.68.195:49746). Jan 13 20:41:32.088179 sshd[1755]: Accepted publickey for core from 139.178.68.195 port 49746 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:41:32.091488 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:32.098907 systemd-logind[1496]: New session 9 of user core. Jan 13 20:41:32.110266 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:41:32.566557 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:41:32.566960 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:41:32.971015 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:41:32.992700 (dockerd)[1776]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:41:33.336443 dockerd[1776]: time="2025-01-13T20:41:33.336132534Z" level=info msg="Starting up" Jan 13 20:41:33.444586 dockerd[1776]: time="2025-01-13T20:41:33.444371393Z" level=info msg="Loading containers: start." Jan 13 20:41:33.643007 kernel: Initializing XFRM netlink socket Jan 13 20:41:33.748147 systemd-networkd[1443]: docker0: Link UP Jan 13 20:41:33.776213 dockerd[1776]: time="2025-01-13T20:41:33.776157450Z" level=info msg="Loading containers: done." 
Jan 13 20:41:33.792795 dockerd[1776]: time="2025-01-13T20:41:33.792725152Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:41:33.792980 dockerd[1776]: time="2025-01-13T20:41:33.792844176Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 13 20:41:33.792980 dockerd[1776]: time="2025-01-13T20:41:33.792957363Z" level=info msg="Daemon has completed initialization" Jan 13 20:41:33.793634 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1372394490-merged.mount: Deactivated successfully. Jan 13 20:41:33.828330 dockerd[1776]: time="2025-01-13T20:41:33.827671622Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:41:33.827935 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:41:35.171851 containerd[1511]: time="2025-01-13T20:41:35.171739777Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:41:36.388651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount787450683.mount: Deactivated successfully. Jan 13 20:41:37.301302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:41:37.310082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:41:37.441983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:41:37.447549 (kubelet)[2034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:41:37.506155 kubelet[2034]: E0113 20:41:37.506093 2034 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:41:37.509267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:41:37.509421 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 20:41:37.921384 containerd[1511]: time="2025-01-13T20:41:37.921080080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:37.922175 containerd[1511]: time="2025-01-13T20:41:37.922126875Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262" Jan 13 20:41:37.923107 containerd[1511]: time="2025-01-13T20:41:37.923060923Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:37.930393 containerd[1511]: time="2025-01-13T20:41:37.930288837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:37.933833 containerd[1511]: time="2025-01-13T20:41:37.933350595Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.761503067s" Jan 13 20:41:37.933833 containerd[1511]: time="2025-01-13T20:41:37.933481906Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 20:41:37.964892 containerd[1511]: time="2025-01-13T20:41:37.964857601Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:41:39.967617 containerd[1511]: time="2025-01-13T20:41:39.967557887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:39.968653 containerd[1511]: time="2025-01-13T20:41:39.968614111Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740" Jan 13 20:41:39.969314 containerd[1511]: time="2025-01-13T20:41:39.969064769Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:39.971643 containerd[1511]: time="2025-01-13T20:41:39.971589018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:39.972838 containerd[1511]: time="2025-01-13T20:41:39.972712439Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.007655858s" Jan 13 20:41:39.972838 containerd[1511]: time="2025-01-13T20:41:39.972742373Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 
20:41:40.010901 containerd[1511]: time="2025-01-13T20:41:40.010853602Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:41:41.273817 containerd[1511]: time="2025-01-13T20:41:41.272836508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:41.275888 containerd[1511]: time="2025-01-13T20:41:41.275793316Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830" Jan 13 20:41:41.276385 containerd[1511]: time="2025-01-13T20:41:41.276163087Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:41.281477 containerd[1511]: time="2025-01-13T20:41:41.281410114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:41.282532 containerd[1511]: time="2025-01-13T20:41:41.282067928Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.270992177s" Jan 13 20:41:41.282532 containerd[1511]: time="2025-01-13T20:41:41.282099902Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 20:41:41.309672 containerd[1511]: time="2025-01-13T20:41:41.309332143Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:41:42.590136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2814612848.mount: Deactivated successfully. 
Jan 13 20:41:42.975463 containerd[1511]: time="2025-01-13T20:41:42.975336706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:42.976785 containerd[1511]: time="2025-01-13T20:41:42.976670347Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Jan 13 20:41:42.980710 containerd[1511]: time="2025-01-13T20:41:42.979632111Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:42.991584 containerd[1511]: time="2025-01-13T20:41:42.991540061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:42.992411 containerd[1511]: time="2025-01-13T20:41:42.992375997Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.682991037s" Jan 13 20:41:42.992541 containerd[1511]: time="2025-01-13T20:41:42.992520018Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 20:41:43.016638 containerd[1511]: time="2025-01-13T20:41:43.016568237Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:41:43.638837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3625667530.mount: Deactivated successfully. 
Jan 13 20:41:44.477750 containerd[1511]: time="2025-01-13T20:41:44.476720744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:44.478369 containerd[1511]: time="2025-01-13T20:41:44.478330109Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 13 20:41:44.478797 containerd[1511]: time="2025-01-13T20:41:44.478778519Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:44.481578 containerd[1511]: time="2025-01-13T20:41:44.481550588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:44.482707 containerd[1511]: time="2025-01-13T20:41:44.482682018Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.465756366s" Jan 13 20:41:44.482823 containerd[1511]: time="2025-01-13T20:41:44.482808438Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:41:44.516589 containerd[1511]: time="2025-01-13T20:41:44.516546281Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:41:45.066505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount53609603.mount: Deactivated successfully. 
Jan 13 20:41:45.069556 containerd[1511]: time="2025-01-13T20:41:45.069482397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:45.070926 containerd[1511]: time="2025-01-13T20:41:45.070817357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 13 20:41:45.071476 containerd[1511]: time="2025-01-13T20:41:45.071412056Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:45.075939 containerd[1511]: time="2025-01-13T20:41:45.075889756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:45.077036 containerd[1511]: time="2025-01-13T20:41:45.076896041Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 560.103421ms" Jan 13 20:41:45.077036 containerd[1511]: time="2025-01-13T20:41:45.076925749Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 20:41:45.103933 containerd[1511]: time="2025-01-13T20:41:45.103883489Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:41:45.713663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513453412.mount: Deactivated successfully. Jan 13 20:41:46.071853 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 20:41:47.553652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:41:47.567164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:41:47.737940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:41:47.752391 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:41:47.842192 kubelet[2182]: E0113 20:41:47.841977 2182 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:41:47.845979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:41:47.846162 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 20:41:50.016113 containerd[1511]: time="2025-01-13T20:41:50.016030366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:50.018214 containerd[1511]: time="2025-01-13T20:41:50.016817330Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jan 13 20:41:50.018214 containerd[1511]: time="2025-01-13T20:41:50.017205653Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:50.020168 containerd[1511]: time="2025-01-13T20:41:50.020118971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:50.021813 containerd[1511]: time="2025-01-13T20:41:50.021296137Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.917375091s" Jan 13 20:41:50.021813 containerd[1511]: time="2025-01-13T20:41:50.021345252Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 20:41:53.907957 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:41:53.917153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:41:53.955648 systemd[1]: Reloading requested from client PID 2261 ('systemctl') (unit session-9.scope)... Jan 13 20:41:53.955688 systemd[1]: Reloading... Jan 13 20:41:54.089817 zram_generator::config[2300]: No configuration found. Jan 13 20:41:54.251566 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:41:54.332042 systemd[1]: Reloading finished in 375 ms. Jan 13 20:41:54.380272 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:41:54.380536 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:41:54.380922 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:41:54.387081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:41:54.552149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:41:54.564331 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:41:54.637297 kubelet[2366]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:41:54.637297 kubelet[2366]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 13 20:41:54.637297 kubelet[2366]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:41:54.638585 kubelet[2366]: I0113 20:41:54.638502 2366 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:41:55.433295 kubelet[2366]: I0113 20:41:55.433227 2366 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:41:55.433607 kubelet[2366]: I0113 20:41:55.433582 2366 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:41:55.434304 kubelet[2366]: I0113 20:41:55.434272 2366 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:41:55.465895 kubelet[2366]: E0113 20:41:55.465858 2366 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.100.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:55.468636 kubelet[2366]: I0113 20:41:55.468615 2366 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:41:55.485769 kubelet[2366]: I0113 20:41:55.485717 2366 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:41:55.487671 kubelet[2366]: I0113 20:41:55.487638 2366 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:41:55.489095 kubelet[2366]: I0113 20:41:55.489014 2366 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:41:55.489710 kubelet[2366]: I0113 20:41:55.489670 2366 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:41:55.489810 kubelet[2366]: I0113 20:41:55.489729 2366 container_manager_linux.go:301] "Creating device plugin manager" 
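Note on the three "Flag ... has been deprecated" entries above: the flags are still honoured by this kubelet version, but the hint is to carry them in the file passed via --config instead. A sketch of the corresponding KubeletConfiguration fragment, assuming containerd's default CRI socket for this host:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint; containerd's default socket path is assumed
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

--pod-infra-container-image has no config-file equivalent: as the log itself notes, the sandbox image is obtained from the CRI runtime instead, and --volume-plugin-dir is likewise expected to move into the config file per the warning.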
Jan 13 20:41:55.489970 kubelet[2366]: I0113 20:41:55.489941 2366 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:41:55.492879 kubelet[2366]: I0113 20:41:55.492494 2366 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:41:55.492879 kubelet[2366]: I0113 20:41:55.492533 2366 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:41:55.492879 kubelet[2366]: I0113 20:41:55.492595 2366 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:41:55.492879 kubelet[2366]: I0113 20:41:55.492620 2366 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:41:55.495657 kubelet[2366]: W0113 20:41:55.495026 2366 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.244.100.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rxqun.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:55.495657 kubelet[2366]: E0113 20:41:55.495106 2366 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.100.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rxqun.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:55.495657 kubelet[2366]: W0113 20:41:55.495485 2366 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.244.100.150:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:55.495657 kubelet[2366]: E0113 20:41:55.495523 2366 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.100.150:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:55.496025 kubelet[2366]: I0113 20:41:55.496009 2366 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:41:55.502906 kubelet[2366]: I0113 20:41:55.502849 2366 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:41:55.505110 kubelet[2366]: W0113 20:41:55.505071 2366 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 20:41:55.507638 kubelet[2366]: I0113 20:41:55.507533 2366 server.go:1256] "Started kubelet" Jan 13 20:41:55.517115 kubelet[2366]: I0113 20:41:55.516454 2366 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:41:55.520222 kubelet[2366]: E0113 20:41:55.520038 2366 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.100.150:6443/api/v1/namespaces/default/events\": dial tcp 10.244.100.150:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-rxqun.gb1.brightbox.com.181a5b3b0e62be1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-rxqun.gb1.brightbox.com,UID:srv-rxqun.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-rxqun.gb1.brightbox.com,},FirstTimestamp:2025-01-13 20:41:55.507297822 +0000 UTC m=+0.925681889,LastTimestamp:2025-01-13 20:41:55.507297822 +0000 UTC m=+0.925681889,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-rxqun.gb1.brightbox.com,}" Jan 13 20:41:55.524983 kubelet[2366]: I0113 20:41:55.524967 2366 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:41:55.527414 kubelet[2366]: I0113 20:41:55.525945 2366 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:41:55.530337 kubelet[2366]: I0113 20:41:55.525959 2366 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:41:55.530520 kubelet[2366]: I0113 20:41:55.530508 2366 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:41:55.530784 kubelet[2366]: I0113 20:41:55.530771 2366 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:41:55.532683 kubelet[2366]: I0113 20:41:55.525996 2366 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:41:55.532833 kubelet[2366]: I0113 20:41:55.532822 2366 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:41:55.533241 kubelet[2366]: E0113 20:41:55.533224 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.100.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rxqun.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.100.150:6443: connect: connection refused" interval="200ms" Jan 13 20:41:55.535779 kubelet[2366]: W0113 20:41:55.535727 2366 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.244.100.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:55.535899 kubelet[2366]: E0113 20:41:55.535880 2366 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.100.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:55.537315 kubelet[2366]: E0113 20:41:55.537294 2366 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:41:55.541582 kubelet[2366]: I0113 20:41:55.541565 2366 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:41:55.541673 kubelet[2366]: I0113 20:41:55.541666 2366 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:41:55.541833 kubelet[2366]: I0113 20:41:55.541818 2366 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:41:55.542136 kubelet[2366]: I0113 20:41:55.542122 2366 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:41:55.544603 kubelet[2366]: I0113 20:41:55.544586 2366 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:41:55.544734 kubelet[2366]: I0113 20:41:55.544724 2366 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:41:55.544830 kubelet[2366]: I0113 20:41:55.544821 2366 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:41:55.544931 kubelet[2366]: E0113 20:41:55.544923 2366 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:41:55.554911 kubelet[2366]: W0113 20:41:55.554865 2366 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.244.100.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:55.555058 kubelet[2366]: E0113 20:41:55.555049 2366 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.100.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:55.591420 kubelet[2366]: I0113 20:41:55.591366 2366 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:41:55.591742 kubelet[2366]: I0113 20:41:55.591703 2366 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:41:55.591958 kubelet[2366]: I0113 20:41:55.591939 2366 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:41:55.593333 kubelet[2366]: I0113 20:41:55.593295 2366 policy_none.go:49] "None policy: Start" Jan 13 20:41:55.594371 kubelet[2366]: I0113 20:41:55.594339 2366 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:41:55.594579 kubelet[2366]: I0113 20:41:55.594560 2366 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:41:55.602527 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:41:55.619275 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:41:55.626328 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 20:41:55.628565 kubelet[2366]: I0113 20:41:55.628510 2366 kubelet_node_status.go:73] "Attempting to register node" node="srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.629806 kubelet[2366]: E0113 20:41:55.629738 2366 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.100.150:6443/api/v1/nodes\": dial tcp 10.244.100.150:6443: connect: connection refused" node="srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.635735 kubelet[2366]: I0113 20:41:55.635536 2366 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:41:55.635911 kubelet[2366]: I0113 20:41:55.635889 2366 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:41:55.638272 kubelet[2366]: E0113 20:41:55.638243 2366 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-rxqun.gb1.brightbox.com\" not found" Jan 13 20:41:55.646266 kubelet[2366]: I0113 20:41:55.646227 2366 topology_manager.go:215] "Topology Admit Handler" podUID="38b809e6f4afdeea20aa27958aa5dd42" podNamespace="kube-system" podName="kube-apiserver-srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.649421 kubelet[2366]: I0113 20:41:55.649368 2366 topology_manager.go:215] "Topology Admit Handler" podUID="372903a90d249851911dab290cc368a5" podNamespace="kube-system" podName="kube-controller-manager-srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.651064 kubelet[2366]: I0113 20:41:55.651037 2366 topology_manager.go:215] "Topology Admit Handler" podUID="2553898a3b509d89950dcc3c092ac175" podNamespace="kube-system" podName="kube-scheduler-srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.659399 systemd[1]: Created slice kubepods-burstable-pod38b809e6f4afdeea20aa27958aa5dd42.slice - libcontainer container kubepods-burstable-pod38b809e6f4afdeea20aa27958aa5dd42.slice. Jan 13 20:41:55.673200 systemd[1]: Created slice kubepods-burstable-pod372903a90d249851911dab290cc368a5.slice - libcontainer container kubepods-burstable-pod372903a90d249851911dab290cc368a5.slice. Jan 13 20:41:55.692684 systemd[1]: Created slice kubepods-burstable-pod2553898a3b509d89950dcc3c092ac175.slice - libcontainer container kubepods-burstable-pod2553898a3b509d89950dcc3c092ac175.slice. 
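Note: the three "Topology Admit Handler" entries and the kubepods-burstable pod slices above are the static control-plane pods the kubelet reads from its static pod path (/etc/kubernetes/manifests, per the earlier "Adding static pod path" entry); the hostPath volumes verified in the following entries (ca-certs, k8s-certs, kubeconfig, usr-share-ca-certificates, flexvolume-dir) correspond to volume definitions of this form in each manifest. A heavily trimmed, illustrative sketch — not the actual manifest on this host:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.29.12
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
        - name: ca-certs
          mountPath: /etc/ssl/certs
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
      - name: ca-certs
        hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate

Because these pods are started directly from disk, the kubelet can bring up kube-apiserver, kube-controller-manager and kube-scheduler even while its own API calls (the "connection refused" reflector errors to 10.244.100.150:6443) are still failing.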
Jan 13 20:41:55.734239 kubelet[2366]: I0113 20:41:55.734171 2366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/372903a90d249851911dab290cc368a5-k8s-certs\") pod \"kube-controller-manager-srv-rxqun.gb1.brightbox.com\" (UID: \"372903a90d249851911dab290cc368a5\") " pod="kube-system/kube-controller-manager-srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.734585 kubelet[2366]: E0113 20:41:55.734540 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.100.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rxqun.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.100.150:6443: connect: connection refused" interval="400ms" Jan 13 20:41:55.734782 kubelet[2366]: I0113 20:41:55.734711 2366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/372903a90d249851911dab290cc368a5-kubeconfig\") pod \"kube-controller-manager-srv-rxqun.gb1.brightbox.com\" (UID: \"372903a90d249851911dab290cc368a5\") " pod="kube-system/kube-controller-manager-srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.734999 kubelet[2366]: I0113 20:41:55.734977 2366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/372903a90d249851911dab290cc368a5-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-rxqun.gb1.brightbox.com\" (UID: \"372903a90d249851911dab290cc368a5\") " pod="kube-system/kube-controller-manager-srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.735187 kubelet[2366]: I0113 20:41:55.735168 2366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2553898a3b509d89950dcc3c092ac175-kubeconfig\") pod \"kube-scheduler-srv-rxqun.gb1.brightbox.com\" (UID: \"2553898a3b509d89950dcc3c092ac175\") " pod="kube-system/kube-scheduler-srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.735353 kubelet[2366]: I0113 20:41:55.735337 2366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38b809e6f4afdeea20aa27958aa5dd42-ca-certs\") pod \"kube-apiserver-srv-rxqun.gb1.brightbox.com\" (UID: \"38b809e6f4afdeea20aa27958aa5dd42\") " pod="kube-system/kube-apiserver-srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.735559 kubelet[2366]: I0113 20:41:55.735537 2366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/372903a90d249851911dab290cc368a5-ca-certs\") pod \"kube-controller-manager-srv-rxqun.gb1.brightbox.com\" (UID: \"372903a90d249851911dab290cc368a5\") " pod="kube-system/kube-controller-manager-srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.736003 kubelet[2366]: I0113 20:41:55.735724 2366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/372903a90d249851911dab290cc368a5-flexvolume-dir\") pod \"kube-controller-manager-srv-rxqun.gb1.brightbox.com\" (UID: \"372903a90d249851911dab290cc368a5\") " pod="kube-system/kube-controller-manager-srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.736003 kubelet[2366]: I0113 20:41:55.735836 2366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38b809e6f4afdeea20aa27958aa5dd42-k8s-certs\") pod \"kube-apiserver-srv-rxqun.gb1.brightbox.com\" (UID: \"38b809e6f4afdeea20aa27958aa5dd42\") " pod="kube-system/kube-apiserver-srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.736003 kubelet[2366]: I0113 20:41:55.735900 2366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38b809e6f4afdeea20aa27958aa5dd42-usr-share-ca-certificates\") pod \"kube-apiserver-srv-rxqun.gb1.brightbox.com\" (UID: \"38b809e6f4afdeea20aa27958aa5dd42\") " pod="kube-system/kube-apiserver-srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.833926 kubelet[2366]: I0113 20:41:55.833352 2366 kubelet_node_status.go:73] "Attempting to register node" node="srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.834202 kubelet[2366]: E0113 20:41:55.834062 2366 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.100.150:6443/api/v1/nodes\": dial tcp 10.244.100.150:6443: connect: connection refused" node="srv-rxqun.gb1.brightbox.com" Jan 13 20:41:55.971950 containerd[1511]: time="2025-01-13T20:41:55.971585813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-rxqun.gb1.brightbox.com,Uid:38b809e6f4afdeea20aa27958aa5dd42,Namespace:kube-system,Attempt:0,}" Jan 13 20:41:55.989498 containerd[1511]: time="2025-01-13T20:41:55.989383638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-rxqun.gb1.brightbox.com,Uid:372903a90d249851911dab290cc368a5,Namespace:kube-system,Attempt:0,}" Jan 13 20:41:55.998377 containerd[1511]: time="2025-01-13T20:41:55.998321076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-rxqun.gb1.brightbox.com,Uid:2553898a3b509d89950dcc3c092ac175,Namespace:kube-system,Attempt:0,}" Jan 13 20:41:56.136430 kubelet[2366]: E0113 20:41:56.136091 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.100.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rxqun.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.100.150:6443: connect: connection refused" interval="800ms" Jan 13 20:41:56.176906 kubelet[2366]: E0113 20:41:56.176731 2366 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.100.150:6443/api/v1/namespaces/default/events\": dial tcp 10.244.100.150:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-rxqun.gb1.brightbox.com.181a5b3b0e62be1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-rxqun.gb1.brightbox.com,UID:srv-rxqun.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-rxqun.gb1.brightbox.com,},FirstTimestamp:2025-01-13 20:41:55.507297822 +0000 UTC m=+0.925681889,LastTimestamp:2025-01-13 20:41:55.507297822 +0000 UTC m=+0.925681889,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-rxqun.gb1.brightbox.com,}" Jan 13 20:41:56.239357 kubelet[2366]: I0113 20:41:56.239061 2366 kubelet_node_status.go:73] "Attempting to register node" node="srv-rxqun.gb1.brightbox.com" Jan 13 20:41:56.240322 kubelet[2366]: E0113 20:41:56.239862 2366 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://10.244.100.150:6443/api/v1/nodes\": dial tcp 10.244.100.150:6443: connect: connection refused" node="srv-rxqun.gb1.brightbox.com" Jan 13 20:41:56.490554 kubelet[2366]: W0113 20:41:56.490213 2366 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.244.100.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rxqun.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:56.490554 kubelet[2366]: E0113 20:41:56.490351 2366 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.100.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rxqun.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:56.553079 kubelet[2366]: W0113 20:41:56.552889 2366 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.244.100.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:56.553079 kubelet[2366]: E0113 20:41:56.553004 2366 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.100.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:56.604705 kubelet[2366]: W0113 20:41:56.604605 2366 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.244.100.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:56.604705 kubelet[2366]: E0113 20:41:56.604700 2366 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.100.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:56.832493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount886085768.mount: Deactivated successfully. 
Jan 13 20:41:56.838837 containerd[1511]: time="2025-01-13T20:41:56.837451323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:41:56.839177 containerd[1511]: time="2025-01-13T20:41:56.839119324Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:41:56.839580 containerd[1511]: time="2025-01-13T20:41:56.839546544Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:41:56.840308 containerd[1511]: time="2025-01-13T20:41:56.840277269Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:41:56.840838 containerd[1511]: time="2025-01-13T20:41:56.840662721Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:41:56.841319 containerd[1511]: time="2025-01-13T20:41:56.841290463Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 13 20:41:56.841471 containerd[1511]: time="2025-01-13T20:41:56.841454120Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:41:56.848211 containerd[1511]: time="2025-01-13T20:41:56.848175527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:41:56.849235 containerd[1511]: time="2025-01-13T20:41:56.849208313Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 850.762001ms" Jan 13 20:41:56.851974 containerd[1511]: time="2025-01-13T20:41:56.851373493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 879.240244ms" Jan 13 20:41:56.855859 containerd[1511]: time="2025-01-13T20:41:56.855801173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 866.141107ms" Jan 13 20:41:56.881715 kubelet[2366]: W0113 20:41:56.880869 2366 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.244.100.150:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:56.881715 kubelet[2366]: 
E0113 20:41:56.880938 2366 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.100.150:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:56.937825 kubelet[2366]: E0113 20:41:56.937715 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.100.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rxqun.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.100.150:6443: connect: connection refused" interval="1.6s" Jan 13 20:41:57.029319 containerd[1511]: time="2025-01-13T20:41:57.029182822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:41:57.029747 containerd[1511]: time="2025-01-13T20:41:57.029411247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:41:57.029747 containerd[1511]: time="2025-01-13T20:41:57.029468135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:41:57.029747 containerd[1511]: time="2025-01-13T20:41:57.029482772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:57.029747 containerd[1511]: time="2025-01-13T20:41:57.029578509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:57.029883 containerd[1511]: time="2025-01-13T20:41:57.029764374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:41:57.030001 containerd[1511]: time="2025-01-13T20:41:57.029884501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:57.030419 containerd[1511]: time="2025-01-13T20:41:57.030373354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:57.032672 containerd[1511]: time="2025-01-13T20:41:57.032575307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:41:57.032672 containerd[1511]: time="2025-01-13T20:41:57.032631926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:41:57.033704 containerd[1511]: time="2025-01-13T20:41:57.032650048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:57.033704 containerd[1511]: time="2025-01-13T20:41:57.032881670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:57.042964 kubelet[2366]: I0113 20:41:57.042941 2366 kubelet_node_status.go:73] "Attempting to register node" node="srv-rxqun.gb1.brightbox.com" Jan 13 20:41:57.043716 kubelet[2366]: E0113 20:41:57.043603 2366 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.100.150:6443/api/v1/nodes\": dial tcp 10.244.100.150:6443: connect: connection refused" node="srv-rxqun.gb1.brightbox.com" Jan 13 20:41:57.062101 systemd[1]: Started cri-containerd-de6685c4a55a6661fff2cdfb1dfeea6fe1ea61d7e686037d6c7252182271ed30.scope - libcontainer container de6685c4a55a6661fff2cdfb1dfeea6fe1ea61d7e686037d6c7252182271ed30. Jan 13 20:41:57.067470 systemd[1]: Started cri-containerd-48ada1b7f48c32c23dbb8d7c979a5df0440334159f399b934bc5b64382c84001.scope - libcontainer container 48ada1b7f48c32c23dbb8d7c979a5df0440334159f399b934bc5b64382c84001. Jan 13 20:41:57.070248 systemd[1]: Started cri-containerd-d38f06c1778fb4c66ccc30569e47d84fd88ce3f321c3d4bb42bf92c2a26aa3e3.scope - libcontainer container d38f06c1778fb4c66ccc30569e47d84fd88ce3f321c3d4bb42bf92c2a26aa3e3. Jan 13 20:41:57.142295 containerd[1511]: time="2025-01-13T20:41:57.142037467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-rxqun.gb1.brightbox.com,Uid:38b809e6f4afdeea20aa27958aa5dd42,Namespace:kube-system,Attempt:0,} returns sandbox id \"de6685c4a55a6661fff2cdfb1dfeea6fe1ea61d7e686037d6c7252182271ed30\"" Jan 13 20:41:57.148347 containerd[1511]: time="2025-01-13T20:41:57.148220758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-rxqun.gb1.brightbox.com,Uid:372903a90d249851911dab290cc368a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"48ada1b7f48c32c23dbb8d7c979a5df0440334159f399b934bc5b64382c84001\"" Jan 13 20:41:57.152825 containerd[1511]: time="2025-01-13T20:41:57.151895383Z" level=info msg="CreateContainer within sandbox \"de6685c4a55a6661fff2cdfb1dfeea6fe1ea61d7e686037d6c7252182271ed30\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:41:57.153411 containerd[1511]: time="2025-01-13T20:41:57.153385324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-rxqun.gb1.brightbox.com,Uid:2553898a3b509d89950dcc3c092ac175,Namespace:kube-system,Attempt:0,} returns sandbox id \"d38f06c1778fb4c66ccc30569e47d84fd88ce3f321c3d4bb42bf92c2a26aa3e3\"" Jan 13 20:41:57.155146 containerd[1511]: time="2025-01-13T20:41:57.155029648Z" level=info msg="CreateContainer within sandbox \"48ada1b7f48c32c23dbb8d7c979a5df0440334159f399b934bc5b64382c84001\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:41:57.163691 containerd[1511]: time="2025-01-13T20:41:57.163571149Z" level=info msg="CreateContainer within sandbox \"d38f06c1778fb4c66ccc30569e47d84fd88ce3f321c3d4bb42bf92c2a26aa3e3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:41:57.169839 containerd[1511]: time="2025-01-13T20:41:57.169316314Z" level=info msg="CreateContainer within sandbox \"48ada1b7f48c32c23dbb8d7c979a5df0440334159f399b934bc5b64382c84001\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f66ee953ceb1ee1f60fcc6fb771a4ead183ddb922177581f0b7bb012fc05634f\"" Jan 13 20:41:57.171742 containerd[1511]: time="2025-01-13T20:41:57.170748849Z" level=info msg="StartContainer for \"f66ee953ceb1ee1f60fcc6fb771a4ead183ddb922177581f0b7bb012fc05634f\"" Jan 13 20:41:57.172045 
containerd[1511]: time="2025-01-13T20:41:57.172019878Z" level=info msg="CreateContainer within sandbox \"de6685c4a55a6661fff2cdfb1dfeea6fe1ea61d7e686037d6c7252182271ed30\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f50db74f162d1752b2346d3a0579bb6b5e87029a15f76f4620631dac1df07d17\"" Jan 13 20:41:57.173157 containerd[1511]: time="2025-01-13T20:41:57.173130387Z" level=info msg="StartContainer for \"f50db74f162d1752b2346d3a0579bb6b5e87029a15f76f4620631dac1df07d17\"" Jan 13 20:41:57.178167 containerd[1511]: time="2025-01-13T20:41:57.178133731Z" level=info msg="CreateContainer within sandbox \"d38f06c1778fb4c66ccc30569e47d84fd88ce3f321c3d4bb42bf92c2a26aa3e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b1705447502ac7f7eaac2c90f6d87e8e6e380f8f708e7e15d8a11f877f938e0b\"" Jan 13 20:41:57.179231 containerd[1511]: time="2025-01-13T20:41:57.179208523Z" level=info msg="StartContainer for \"b1705447502ac7f7eaac2c90f6d87e8e6e380f8f708e7e15d8a11f877f938e0b\"" Jan 13 20:41:57.213173 systemd[1]: Started cri-containerd-f50db74f162d1752b2346d3a0579bb6b5e87029a15f76f4620631dac1df07d17.scope - libcontainer container f50db74f162d1752b2346d3a0579bb6b5e87029a15f76f4620631dac1df07d17. Jan 13 20:41:57.219199 systemd[1]: Started cri-containerd-f66ee953ceb1ee1f60fcc6fb771a4ead183ddb922177581f0b7bb012fc05634f.scope - libcontainer container f66ee953ceb1ee1f60fcc6fb771a4ead183ddb922177581f0b7bb012fc05634f. Jan 13 20:41:57.228938 systemd[1]: Started cri-containerd-b1705447502ac7f7eaac2c90f6d87e8e6e380f8f708e7e15d8a11f877f938e0b.scope - libcontainer container b1705447502ac7f7eaac2c90f6d87e8e6e380f8f708e7e15d8a11f877f938e0b. Jan 13 20:41:57.291117 containerd[1511]: time="2025-01-13T20:41:57.291077605Z" level=info msg="StartContainer for \"f66ee953ceb1ee1f60fcc6fb771a4ead183ddb922177581f0b7bb012fc05634f\" returns successfully" Jan 13 20:41:57.302698 containerd[1511]: time="2025-01-13T20:41:57.302575959Z" level=info msg="StartContainer for \"f50db74f162d1752b2346d3a0579bb6b5e87029a15f76f4620631dac1df07d17\" returns successfully" Jan 13 20:41:57.313394 containerd[1511]: time="2025-01-13T20:41:57.313345681Z" level=info msg="StartContainer for \"b1705447502ac7f7eaac2c90f6d87e8e6e380f8f708e7e15d8a11f877f938e0b\" returns successfully" Jan 13 20:41:57.565778 kubelet[2366]: E0113 20:41:57.565023 2366 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.100.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.100.150:6443: connect: connection refused Jan 13 20:41:58.647748 kubelet[2366]: I0113 20:41:58.647713 2366 kubelet_node_status.go:73] "Attempting to register node" node="srv-rxqun.gb1.brightbox.com" Jan 13 20:41:59.285937 kubelet[2366]: E0113 20:41:59.285895 2366 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-rxqun.gb1.brightbox.com\" not found" node="srv-rxqun.gb1.brightbox.com" Jan 13 20:41:59.330808 kubelet[2366]: I0113 20:41:59.330740 2366 kubelet_node_status.go:76] "Successfully registered node" node="srv-rxqun.gb1.brightbox.com" Jan 13 20:41:59.497074 kubelet[2366]: I0113 20:41:59.496984 2366 apiserver.go:52] "Watching apiserver" Jan 13 20:41:59.534041 kubelet[2366]: I0113 20:41:59.533943 2366 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:41:59.560320 
update_engine[1497]: I20250113 20:41:59.560037 1497 update_attempter.cc:509] Updating boot flags... Jan 13 20:41:59.607232 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2642) Jan 13 20:41:59.690964 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2646) Jan 13 20:41:59.750047 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2646) Jan 13 20:42:02.248832 systemd[1]: Reloading requested from client PID 2651 ('systemctl') (unit session-9.scope)... Jan 13 20:42:02.248852 systemd[1]: Reloading... Jan 13 20:42:02.359810 zram_generator::config[2693]: No configuration found. Jan 13 20:42:02.520282 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:42:02.619150 systemd[1]: Reloading finished in 369 ms. Jan 13 20:42:02.664225 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:42:02.681139 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:42:02.681431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:42:02.681522 systemd[1]: kubelet.service: Consumed 1.477s CPU time, 111.8M memory peak, 0B memory swap peak. Jan 13 20:42:02.691283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:42:02.818513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:42:02.826683 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:42:02.921791 kubelet[2752]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:42:02.921791 kubelet[2752]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:42:02.921791 kubelet[2752]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:42:02.921791 kubelet[2752]: I0113 20:42:02.920347 2752 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:42:02.926917 kubelet[2752]: I0113 20:42:02.926881 2752 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:42:02.927118 kubelet[2752]: I0113 20:42:02.927102 2752 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:42:02.927513 kubelet[2752]: I0113 20:42:02.927497 2752 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:42:02.929535 kubelet[2752]: I0113 20:42:02.929506 2752 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
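Note: unlike the earlier kubelet instance, this one (PID 2752) finds an existing rotated client certificate and loads it from /var/lib/kubelet/pki/kubelet-client-current.pem, so it can authenticate to the API server immediately rather than repeating the bootstrap flow. The kubeconfig used for this typically points both client-certificate and client-key at that same rotating file; an illustrative sketch of that shape (paths and server address taken from entries in this log, the entry names are assumptions):

    apiVersion: v1
    kind: Config
    clusters:
    - name: default-cluster
      cluster:
        certificate-authority: /etc/kubernetes/pki/ca.crt
        server: https://10.244.100.150:6443
    users:
    - name: default-auth
      user:
        # kubelet-client-current.pem is replaced on each rotation and holds
        # both the certificate and the private key
        client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
        client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
    contexts:
    - name: default-context
      context:
        cluster: default-cluster
        user: default-auth
    current-context: default-context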
Jan 13 20:42:02.935056 kubelet[2752]: I0113 20:42:02.935019 2752 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:42:02.941247 sudo[2767]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:42:02.941611 sudo[2767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:42:02.951803 kubelet[2752]: I0113 20:42:02.951732 2752 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:42:02.952230 kubelet[2752]: I0113 20:42:02.952212 2752 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:42:02.952516 kubelet[2752]: I0113 20:42:02.952496 2752 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:42:02.952702 kubelet[2752]: I0113 20:42:02.952688 2752 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:42:02.952785 kubelet[2752]: I0113 20:42:02.952776 2752 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:42:02.952896 kubelet[2752]: I0113 20:42:02.952886 2752 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:42:02.953824 kubelet[2752]: I0113 20:42:02.953064 2752 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:42:02.953824 kubelet[2752]: I0113 20:42:02.953089 2752 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:42:02.953824 kubelet[2752]: I0113 20:42:02.953124 2752 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:42:02.953824 kubelet[2752]: I0113 20:42:02.953142 2752 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:42:02.957776 kubelet[2752]: I0113 20:42:02.957059 2752 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:42:02.958162 kubelet[2752]: I0113 20:42:02.958144 2752 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:42:02.959302 kubelet[2752]: I0113 
20:42:02.958688 2752 server.go:1256] "Started kubelet" Jan 13 20:42:02.962151 kubelet[2752]: I0113 20:42:02.962128 2752 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:42:02.972133 kubelet[2752]: I0113 20:42:02.972100 2752 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:42:02.987215 kubelet[2752]: I0113 20:42:02.987178 2752 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:42:02.993824 kubelet[2752]: I0113 20:42:02.972329 2752 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:42:02.993824 kubelet[2752]: I0113 20:42:02.991314 2752 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:42:02.993824 kubelet[2752]: I0113 20:42:02.974681 2752 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:42:02.993824 kubelet[2752]: I0113 20:42:02.974696 2752 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:42:02.993824 kubelet[2752]: I0113 20:42:02.991586 2752 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:42:02.993824 kubelet[2752]: I0113 20:42:02.992118 2752 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:42:02.993824 kubelet[2752]: I0113 20:42:02.992386 2752 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:42:03.009554 kubelet[2752]: I0113 20:42:03.009360 2752 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:42:03.017921 kubelet[2752]: I0113 20:42:03.017888 2752 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:42:03.020056 kubelet[2752]: I0113 20:42:03.020031 2752 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:42:03.020300 kubelet[2752]: I0113 20:42:03.020290 2752 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:42:03.020437 kubelet[2752]: I0113 20:42:03.020426 2752 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:42:03.020598 kubelet[2752]: E0113 20:42:03.020588 2752 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:42:03.090849 kubelet[2752]: I0113 20:42:03.090730 2752 kubelet_node_status.go:73] "Attempting to register node" node="srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.111346 kubelet[2752]: I0113 20:42:03.111233 2752 kubelet_node_status.go:112] "Node was previously registered" node="srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.111847 kubelet[2752]: I0113 20:42:03.111729 2752 kubelet_node_status.go:76] "Successfully registered node" node="srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.120839 kubelet[2752]: E0113 20:42:03.120807 2752 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:42:03.125721 kubelet[2752]: I0113 20:42:03.125028 2752 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:42:03.125721 kubelet[2752]: I0113 20:42:03.125050 2752 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:42:03.125721 kubelet[2752]: I0113 20:42:03.125068 2752 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:42:03.125721 kubelet[2752]: I0113 20:42:03.125227 2752 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:42:03.125721 kubelet[2752]: I0113 20:42:03.125249 2752 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:42:03.125721 kubelet[2752]: I0113 20:42:03.125260 2752 policy_none.go:49] "None policy: Start" Jan 13 20:42:03.127781 kubelet[2752]: I0113 20:42:03.127033 2752 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:42:03.127781 kubelet[2752]: I0113 20:42:03.127060 2752 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:42:03.127781 kubelet[2752]: I0113 20:42:03.127253 2752 state_mem.go:75] "Updated machine memory state" Jan 13 20:42:03.136732 kubelet[2752]: I0113 20:42:03.136697 2752 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:42:03.139948 kubelet[2752]: I0113 20:42:03.139924 2752 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:42:03.321053 kubelet[2752]: I0113 20:42:03.321005 2752 topology_manager.go:215] "Topology Admit Handler" podUID="38b809e6f4afdeea20aa27958aa5dd42" podNamespace="kube-system" podName="kube-apiserver-srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.321214 kubelet[2752]: I0113 20:42:03.321168 2752 topology_manager.go:215] "Topology Admit Handler" podUID="372903a90d249851911dab290cc368a5" podNamespace="kube-system" podName="kube-controller-manager-srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.321249 kubelet[2752]: I0113 20:42:03.321233 2752 topology_manager.go:215] "Topology Admit Handler" podUID="2553898a3b509d89950dcc3c092ac175" podNamespace="kube-system" podName="kube-scheduler-srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.333570 kubelet[2752]: W0113 20:42:03.333536 2752 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:42:03.333988 kubelet[2752]: W0113 20:42:03.333819 2752 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:42:03.336253 kubelet[2752]: W0113 20:42:03.336005 2752 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:42:03.394883 kubelet[2752]: I0113 20:42:03.394766 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/372903a90d249851911dab290cc368a5-ca-certs\") pod \"kube-controller-manager-srv-rxqun.gb1.brightbox.com\" (UID: \"372903a90d249851911dab290cc368a5\") " pod="kube-system/kube-controller-manager-srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.394883 kubelet[2752]: I0113 20:42:03.394815 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/372903a90d249851911dab290cc368a5-flexvolume-dir\") pod \"kube-controller-manager-srv-rxqun.gb1.brightbox.com\" (UID: \"372903a90d249851911dab290cc368a5\") " pod="kube-system/kube-controller-manager-srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.394883 kubelet[2752]: I0113 20:42:03.394850 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/372903a90d249851911dab290cc368a5-k8s-certs\") pod \"kube-controller-manager-srv-rxqun.gb1.brightbox.com\" (UID: \"372903a90d249851911dab290cc368a5\") " pod="kube-system/kube-controller-manager-srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.394883 kubelet[2752]: I0113 20:42:03.394878 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2553898a3b509d89950dcc3c092ac175-kubeconfig\") pod \"kube-scheduler-srv-rxqun.gb1.brightbox.com\" (UID: \"2553898a3b509d89950dcc3c092ac175\") " pod="kube-system/kube-scheduler-srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.395174 kubelet[2752]: I0113 20:42:03.394900 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38b809e6f4afdeea20aa27958aa5dd42-ca-certs\") pod \"kube-apiserver-srv-rxqun.gb1.brightbox.com\" (UID: \"38b809e6f4afdeea20aa27958aa5dd42\") " pod="kube-system/kube-apiserver-srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.395174 kubelet[2752]: I0113 20:42:03.394927 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38b809e6f4afdeea20aa27958aa5dd42-usr-share-ca-certificates\") pod \"kube-apiserver-srv-rxqun.gb1.brightbox.com\" (UID: \"38b809e6f4afdeea20aa27958aa5dd42\") " pod="kube-system/kube-apiserver-srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.395174 kubelet[2752]: I0113 20:42:03.394947 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/372903a90d249851911dab290cc368a5-kubeconfig\") pod \"kube-controller-manager-srv-rxqun.gb1.brightbox.com\" (UID: \"372903a90d249851911dab290cc368a5\") " pod="kube-system/kube-controller-manager-srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.395174 kubelet[2752]: I0113 20:42:03.394969 2752 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/372903a90d249851911dab290cc368a5-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-rxqun.gb1.brightbox.com\" (UID: \"372903a90d249851911dab290cc368a5\") " pod="kube-system/kube-controller-manager-srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.395174 kubelet[2752]: I0113 20:42:03.394989 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38b809e6f4afdeea20aa27958aa5dd42-k8s-certs\") pod \"kube-apiserver-srv-rxqun.gb1.brightbox.com\" (UID: \"38b809e6f4afdeea20aa27958aa5dd42\") " pod="kube-system/kube-apiserver-srv-rxqun.gb1.brightbox.com" Jan 13 20:42:03.694515 sudo[2767]: pam_unix(sudo:session): session closed for user root Jan 13 20:42:03.954366 kubelet[2752]: I0113 20:42:03.954039 2752 apiserver.go:52] "Watching apiserver" Jan 13 20:42:03.992442 kubelet[2752]: I0113 20:42:03.992221 2752 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:42:04.168077 kubelet[2752]: I0113 20:42:04.167894 2752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-rxqun.gb1.brightbox.com" podStartSLOduration=1.167844515 podStartE2EDuration="1.167844515s" podCreationTimestamp="2025-01-13 20:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:42:04.164902125 +0000 UTC m=+1.330671988" watchObservedRunningTime="2025-01-13 20:42:04.167844515 +0000 UTC m=+1.333614355" Jan 13 20:42:04.168077 kubelet[2752]: I0113 20:42:04.168013 2752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-rxqun.gb1.brightbox.com" podStartSLOduration=1.167995503 podStartE2EDuration="1.167995503s" podCreationTimestamp="2025-01-13 20:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:42:04.13485963 +0000 UTC m=+1.300629490" watchObservedRunningTime="2025-01-13 20:42:04.167995503 +0000 UTC m=+1.333765342" Jan 13 20:42:04.227731 kubelet[2752]: I0113 20:42:04.226080 2752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-rxqun.gb1.brightbox.com" podStartSLOduration=1.226028143 podStartE2EDuration="1.226028143s" podCreationTimestamp="2025-01-13 20:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:42:04.191897189 +0000 UTC m=+1.357667052" watchObservedRunningTime="2025-01-13 20:42:04.226028143 +0000 UTC m=+1.391797986" Jan 13 20:42:05.366851 sudo[1758]: pam_unix(sudo:session): session closed for user root Jan 13 20:42:05.508943 sshd[1757]: Connection closed by 139.178.68.195 port 49746 Jan 13 20:42:05.509911 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Jan 13 20:42:05.514952 systemd[1]: sshd@6-10.244.100.150:22-139.178.68.195:49746.service: Deactivated successfully. Jan 13 20:42:05.516817 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:42:05.517069 systemd[1]: session-9.scope: Consumed 6.289s CPU time, 185.2M memory peak, 0B memory swap peak. Jan 13 20:42:05.517587 systemd-logind[1496]: Session 9 logged out. 
Waiting for processes to exit. Jan 13 20:42:05.518827 systemd-logind[1496]: Removed session 9. Jan 13 20:42:17.897133 kubelet[2752]: I0113 20:42:17.897099 2752 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:42:17.899428 containerd[1511]: time="2025-01-13T20:42:17.898802559Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:42:17.899836 kubelet[2752]: I0113 20:42:17.899546 2752 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:42:17.946410 kubelet[2752]: I0113 20:42:17.946302 2752 topology_manager.go:215] "Topology Admit Handler" podUID="df67c9fd-a360-4bc9-8adb-72048cd156ff" podNamespace="kube-system" podName="kube-proxy-dmj7m" Jan 13 20:42:17.953446 kubelet[2752]: I0113 20:42:17.952615 2752 topology_manager.go:215] "Topology Admit Handler" podUID="da4cb543-c89a-4c22-8787-b25d9d6b8778" podNamespace="kube-system" podName="cilium-operator-5cc964979-5qwc4" Jan 13 20:42:17.953446 kubelet[2752]: I0113 20:42:17.953174 2752 topology_manager.go:215] "Topology Admit Handler" podUID="ddb2c517-f113-4df6-a44a-3046960b02a0" podNamespace="kube-system" podName="cilium-chd27" Jan 13 20:42:17.978499 systemd[1]: Created slice kubepods-besteffort-poddf67c9fd_a360_4bc9_8adb_72048cd156ff.slice - libcontainer container kubepods-besteffort-poddf67c9fd_a360_4bc9_8adb_72048cd156ff.slice. Jan 13 20:42:18.000984 systemd[1]: Created slice kubepods-besteffort-podda4cb543_c89a_4c22_8787_b25d9d6b8778.slice - libcontainer container kubepods-besteffort-podda4cb543_c89a_4c22_8787_b25d9d6b8778.slice. Jan 13 20:42:18.016423 systemd[1]: Created slice kubepods-burstable-podddb2c517_f113_4df6_a44a_3046960b02a0.slice - libcontainer container kubepods-burstable-podddb2c517_f113_4df6_a44a_3046960b02a0.slice. 
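The Created slice entries just above tie each admitted pod to a systemd cgroup slice whose name is built from the pod's QoS class plus its UID with every dash turned into an underscore (the kubelet is running with the systemd cgroup driver, per the CgroupDriver field logged earlier). A minimal Python sketch of that naming pattern as it appears in these entries; the helper is illustrative only and is not the kubelet's own escaping code:

    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        # BestEffort and Burstable pods land under kubepods-<qos>-pod<uid>.slice,
        # with dashes in the UID replaced by underscores, as in the entries above.
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    # The kube-proxy (BestEffort) and cilium (Burstable) pods from the admit entries:
    print(pod_slice_name("besteffort", "df67c9fd-a360-4bc9-8adb-72048cd156ff"))
    print(pod_slice_name("burstable", "ddb2c517-f113-4df6-a44a-3046960b02a0"))

Both printed names match the slices systemd reports creating in the surrounding entries.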
Jan 13 20:42:18.086867 kubelet[2752]: I0113 20:42:18.086774 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvkf\" (UniqueName: \"kubernetes.io/projected/da4cb543-c89a-4c22-8787-b25d9d6b8778-kube-api-access-8zvkf\") pod \"cilium-operator-5cc964979-5qwc4\" (UID: \"da4cb543-c89a-4c22-8787-b25d9d6b8778\") " pod="kube-system/cilium-operator-5cc964979-5qwc4" Jan 13 20:42:18.086867 kubelet[2752]: I0113 20:42:18.086865 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s5dp\" (UniqueName: \"kubernetes.io/projected/ddb2c517-f113-4df6-a44a-3046960b02a0-kube-api-access-7s5dp\") pod \"cilium-chd27\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " pod="kube-system/cilium-chd27" Jan 13 20:42:18.086867 kubelet[2752]: I0113 20:42:18.086897 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/df67c9fd-a360-4bc9-8adb-72048cd156ff-kube-proxy\") pod \"kube-proxy-dmj7m\" (UID: \"df67c9fd-a360-4bc9-8adb-72048cd156ff\") " pod="kube-system/kube-proxy-dmj7m" Jan 13 20:42:18.087395 kubelet[2752]: I0113 20:42:18.086927 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df67c9fd-a360-4bc9-8adb-72048cd156ff-xtables-lock\") pod \"kube-proxy-dmj7m\" (UID: \"df67c9fd-a360-4bc9-8adb-72048cd156ff\") " pod="kube-system/kube-proxy-dmj7m" Jan 13 20:42:18.087395 kubelet[2752]: I0113 20:42:18.086948 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-cilium-run\") pod \"cilium-chd27\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " pod="kube-system/cilium-chd27" Jan 13 20:42:18.087395 kubelet[2752]: I0113 20:42:18.086971 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddb2c517-f113-4df6-a44a-3046960b02a0-cilium-config-path\") pod \"cilium-chd27\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " pod="kube-system/cilium-chd27" Jan 13 20:42:18.087395 kubelet[2752]: I0113 20:42:18.087001 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-hostproc\") pod \"cilium-chd27\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " pod="kube-system/cilium-chd27" Jan 13 20:42:18.087395 kubelet[2752]: I0113 20:42:18.087033 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-host-proc-sys-net\") pod \"cilium-chd27\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " pod="kube-system/cilium-chd27" Jan 13 20:42:18.087395 kubelet[2752]: I0113 20:42:18.087057 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ddb2c517-f113-4df6-a44a-3046960b02a0-hubble-tls\") pod \"cilium-chd27\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " pod="kube-system/cilium-chd27" Jan 13 20:42:18.089244 kubelet[2752]: I0113 20:42:18.087083 2752 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75s2m\" (UniqueName: \"kubernetes.io/projected/df67c9fd-a360-4bc9-8adb-72048cd156ff-kube-api-access-75s2m\") pod \"kube-proxy-dmj7m\" (UID: \"df67c9fd-a360-4bc9-8adb-72048cd156ff\") " pod="kube-system/kube-proxy-dmj7m" Jan 13 20:42:18.089244 kubelet[2752]: I0113 20:42:18.087108 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-bpf-maps\") pod \"cilium-chd27\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " pod="kube-system/cilium-chd27" Jan 13 20:42:18.089244 kubelet[2752]: I0113 20:42:18.087129 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-lib-modules\") pod \"cilium-chd27\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " pod="kube-system/cilium-chd27" Jan 13 20:42:18.089244 kubelet[2752]: I0113 20:42:18.087152 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ddb2c517-f113-4df6-a44a-3046960b02a0-clustermesh-secrets\") pod \"cilium-chd27\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " pod="kube-system/cilium-chd27" Jan 13 20:42:18.089244 kubelet[2752]: I0113 20:42:18.087173 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df67c9fd-a360-4bc9-8adb-72048cd156ff-lib-modules\") pod \"kube-proxy-dmj7m\" (UID: \"df67c9fd-a360-4bc9-8adb-72048cd156ff\") " pod="kube-system/kube-proxy-dmj7m" Jan 13 20:42:18.089244 kubelet[2752]: I0113 20:42:18.087206 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-cilium-cgroup\") pod \"cilium-chd27\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " pod="kube-system/cilium-chd27" Jan 13 20:42:18.089710 kubelet[2752]: I0113 20:42:18.087235 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-xtables-lock\") pod \"cilium-chd27\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " pod="kube-system/cilium-chd27" Jan 13 20:42:18.089710 kubelet[2752]: I0113 20:42:18.087259 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-cni-path\") pod \"cilium-chd27\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " pod="kube-system/cilium-chd27" Jan 13 20:42:18.089710 kubelet[2752]: I0113 20:42:18.087292 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da4cb543-c89a-4c22-8787-b25d9d6b8778-cilium-config-path\") pod \"cilium-operator-5cc964979-5qwc4\" (UID: \"da4cb543-c89a-4c22-8787-b25d9d6b8778\") " pod="kube-system/cilium-operator-5cc964979-5qwc4" Jan 13 20:42:18.089710 kubelet[2752]: I0113 20:42:18.087329 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-etc-cni-netd\") pod \"cilium-chd27\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " pod="kube-system/cilium-chd27" Jan 13 20:42:18.089710 kubelet[2752]: I0113 20:42:18.087354 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-host-proc-sys-kernel\") pod \"cilium-chd27\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " pod="kube-system/cilium-chd27" Jan 13 20:42:18.298081 containerd[1511]: time="2025-01-13T20:42:18.297299332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dmj7m,Uid:df67c9fd-a360-4bc9-8adb-72048cd156ff,Namespace:kube-system,Attempt:0,}" Jan 13 20:42:18.312671 containerd[1511]: time="2025-01-13T20:42:18.312109833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-5qwc4,Uid:da4cb543-c89a-4c22-8787-b25d9d6b8778,Namespace:kube-system,Attempt:0,}" Jan 13 20:42:18.326378 containerd[1511]: time="2025-01-13T20:42:18.326323972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-chd27,Uid:ddb2c517-f113-4df6-a44a-3046960b02a0,Namespace:kube-system,Attempt:0,}" Jan 13 20:42:18.339399 containerd[1511]: time="2025-01-13T20:42:18.339308997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:42:18.339558 containerd[1511]: time="2025-01-13T20:42:18.339415065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:42:18.339558 containerd[1511]: time="2025-01-13T20:42:18.339457624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:42:18.339781 containerd[1511]: time="2025-01-13T20:42:18.339587782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:42:18.374948 systemd[1]: Started cri-containerd-d350d05e6a7d15fa581f56117f0aed52b5ae37f7a14a0501abc85b1e82cedd6d.scope - libcontainer container d350d05e6a7d15fa581f56117f0aed52b5ae37f7a14a0501abc85b1e82cedd6d. Jan 13 20:42:18.380127 containerd[1511]: time="2025-01-13T20:42:18.377803271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:42:18.380127 containerd[1511]: time="2025-01-13T20:42:18.377854459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:42:18.380127 containerd[1511]: time="2025-01-13T20:42:18.377865894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:42:18.380825 containerd[1511]: time="2025-01-13T20:42:18.380416522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:42:18.383400 containerd[1511]: time="2025-01-13T20:42:18.382101701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:42:18.383587 containerd[1511]: time="2025-01-13T20:42:18.383543304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:42:18.383692 containerd[1511]: time="2025-01-13T20:42:18.383673107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:42:18.383915 containerd[1511]: time="2025-01-13T20:42:18.383873252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:42:18.413004 systemd[1]: Started cri-containerd-7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd.scope - libcontainer container 7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd. Jan 13 20:42:18.425549 containerd[1511]: time="2025-01-13T20:42:18.425410381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dmj7m,Uid:df67c9fd-a360-4bc9-8adb-72048cd156ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"d350d05e6a7d15fa581f56117f0aed52b5ae37f7a14a0501abc85b1e82cedd6d\"" Jan 13 20:42:18.430166 systemd[1]: Started cri-containerd-f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423.scope - libcontainer container f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423. Jan 13 20:42:18.432530 containerd[1511]: time="2025-01-13T20:42:18.432472524Z" level=info msg="CreateContainer within sandbox \"d350d05e6a7d15fa581f56117f0aed52b5ae37f7a14a0501abc85b1e82cedd6d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:42:18.453808 containerd[1511]: time="2025-01-13T20:42:18.453212964Z" level=info msg="CreateContainer within sandbox \"d350d05e6a7d15fa581f56117f0aed52b5ae37f7a14a0501abc85b1e82cedd6d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a06930a1f25dc948d1695ddbd325d48d1dfa83617d71a01a5dc0c8526f231afa\"" Jan 13 20:42:18.454502 containerd[1511]: time="2025-01-13T20:42:18.454474870Z" level=info msg="StartContainer for \"a06930a1f25dc948d1695ddbd325d48d1dfa83617d71a01a5dc0c8526f231afa\"" Jan 13 20:42:18.480486 containerd[1511]: time="2025-01-13T20:42:18.480447426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-chd27,Uid:ddb2c517-f113-4df6-a44a-3046960b02a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\"" Jan 13 20:42:18.482973 containerd[1511]: time="2025-01-13T20:42:18.482948788Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:42:18.500703 containerd[1511]: time="2025-01-13T20:42:18.500584618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-5qwc4,Uid:da4cb543-c89a-4c22-8787-b25d9d6b8778,Namespace:kube-system,Attempt:0,} returns sandbox id \"7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd\"" Jan 13 20:42:18.508940 systemd[1]: Started cri-containerd-a06930a1f25dc948d1695ddbd325d48d1dfa83617d71a01a5dc0c8526f231afa.scope - libcontainer container a06930a1f25dc948d1695ddbd325d48d1dfa83617d71a01a5dc0c8526f231afa. 
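Each "RunPodSandbox ... returns sandbox id" entry above is paired with systemd starting a transient scope for the runc shim, and the unit name visible in those entries is simply the sandbox or container id wrapped in a cri-containerd- prefix and a .scope suffix. A tiny Python helper reproducing the correspondence seen here; it describes the log, not containerd internals:

    def cri_scope_unit(object_id: str) -> str:
        # e.g. the kube-proxy sandbox id d350d05e... appears as
        # cri-containerd-d350d05e....scope in the "Started ..." entries above.
        return f"cri-containerd-{object_id}.scope"

    print(cri_scope_unit("d350d05e6a7d15fa581f56117f0aed52b5ae37f7a14a0501abc85b1e82cedd6d"))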
Jan 13 20:42:18.538124 containerd[1511]: time="2025-01-13T20:42:18.538078982Z" level=info msg="StartContainer for \"a06930a1f25dc948d1695ddbd325d48d1dfa83617d71a01a5dc0c8526f231afa\" returns successfully" Jan 13 20:42:19.117777 kubelet[2752]: I0113 20:42:19.116960 2752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dmj7m" podStartSLOduration=2.116913145 podStartE2EDuration="2.116913145s" podCreationTimestamp="2025-01-13 20:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:42:19.116567982 +0000 UTC m=+16.282337846" watchObservedRunningTime="2025-01-13 20:42:19.116913145 +0000 UTC m=+16.282683014" Jan 13 20:42:24.846239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount467995747.mount: Deactivated successfully. Jan 13 20:42:30.507943 containerd[1511]: time="2025-01-13T20:42:30.507603119Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:42:30.508680 containerd[1511]: time="2025-01-13T20:42:30.508290870Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734699" Jan 13 20:42:30.525565 containerd[1511]: time="2025-01-13T20:42:30.524970931Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:42:30.530723 containerd[1511]: time="2025-01-13T20:42:30.530177335Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.046966626s" Jan 13 20:42:30.530723 containerd[1511]: time="2025-01-13T20:42:30.530224890Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 20:42:30.532143 containerd[1511]: time="2025-01-13T20:42:30.532108049Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:42:30.537464 containerd[1511]: time="2025-01-13T20:42:30.537089659Z" level=info msg="CreateContainer within sandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:42:30.612681 containerd[1511]: time="2025-01-13T20:42:30.612549816Z" level=info msg="CreateContainer within sandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703\"" Jan 13 20:42:30.613457 containerd[1511]: time="2025-01-13T20:42:30.613433123Z" level=info msg="StartContainer for \"c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703\"" Jan 13 20:42:30.746946 systemd[1]: Started 
cri-containerd-c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703.scope - libcontainer container c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703. Jan 13 20:42:30.774535 containerd[1511]: time="2025-01-13T20:42:30.774417381Z" level=info msg="StartContainer for \"c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703\" returns successfully" Jan 13 20:42:30.792841 systemd[1]: cri-containerd-c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703.scope: Deactivated successfully. Jan 13 20:42:30.871711 containerd[1511]: time="2025-01-13T20:42:30.865307491Z" level=info msg="shim disconnected" id=c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703 namespace=k8s.io Jan 13 20:42:30.871954 containerd[1511]: time="2025-01-13T20:42:30.871933451Z" level=warning msg="cleaning up after shim disconnected" id=c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703 namespace=k8s.io Jan 13 20:42:30.872023 containerd[1511]: time="2025-01-13T20:42:30.872012029Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:42:31.172369 containerd[1511]: time="2025-01-13T20:42:31.172295658Z" level=info msg="CreateContainer within sandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:42:31.181203 containerd[1511]: time="2025-01-13T20:42:31.181126032Z" level=info msg="CreateContainer within sandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674\"" Jan 13 20:42:31.182517 containerd[1511]: time="2025-01-13T20:42:31.182388056Z" level=info msg="StartContainer for \"2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674\"" Jan 13 20:42:31.231431 systemd[1]: Started cri-containerd-2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674.scope - libcontainer container 2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674. Jan 13 20:42:31.274520 containerd[1511]: time="2025-01-13T20:42:31.274388787Z" level=info msg="StartContainer for \"2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674\" returns successfully" Jan 13 20:42:31.295387 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:42:31.296102 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:42:31.296291 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:42:31.304376 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:42:31.304747 systemd[1]: cri-containerd-2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674.scope: Deactivated successfully. Jan 13 20:42:31.340000 containerd[1511]: time="2025-01-13T20:42:31.339914817Z" level=info msg="shim disconnected" id=2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674 namespace=k8s.io Jan 13 20:42:31.340436 containerd[1511]: time="2025-01-13T20:42:31.340135166Z" level=warning msg="cleaning up after shim disconnected" id=2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674 namespace=k8s.io Jan 13 20:42:31.340436 containerd[1511]: time="2025-01-13T20:42:31.340159317Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:42:31.349087 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 20:42:31.607020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703-rootfs.mount: Deactivated successfully. Jan 13 20:42:32.190804 containerd[1511]: time="2025-01-13T20:42:32.190696069Z" level=info msg="CreateContainer within sandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:42:32.229633 containerd[1511]: time="2025-01-13T20:42:32.229572124Z" level=info msg="CreateContainer within sandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae\"" Jan 13 20:42:32.231108 containerd[1511]: time="2025-01-13T20:42:32.230579382Z" level=info msg="StartContainer for \"ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae\"" Jan 13 20:42:32.275005 systemd[1]: Started cri-containerd-ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae.scope - libcontainer container ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae. Jan 13 20:42:32.318786 containerd[1511]: time="2025-01-13T20:42:32.318681066Z" level=info msg="StartContainer for \"ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae\" returns successfully" Jan 13 20:42:32.322866 systemd[1]: cri-containerd-ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae.scope: Deactivated successfully. Jan 13 20:42:32.347605 containerd[1511]: time="2025-01-13T20:42:32.347487328Z" level=info msg="shim disconnected" id=ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae namespace=k8s.io Jan 13 20:42:32.347605 containerd[1511]: time="2025-01-13T20:42:32.347559869Z" level=warning msg="cleaning up after shim disconnected" id=ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae namespace=k8s.io Jan 13 20:42:32.347605 containerd[1511]: time="2025-01-13T20:42:32.347568792Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:42:32.609667 systemd[1]: run-containerd-runc-k8s.io-ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae-runc.ckOrmK.mount: Deactivated successfully. Jan 13 20:42:32.609959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae-rootfs.mount: Deactivated successfully. Jan 13 20:42:33.198225 containerd[1511]: time="2025-01-13T20:42:33.197401662Z" level=info msg="CreateContainer within sandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:42:33.217717 containerd[1511]: time="2025-01-13T20:42:33.217573258Z" level=info msg="CreateContainer within sandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71\"" Jan 13 20:42:33.225389 containerd[1511]: time="2025-01-13T20:42:33.225336140Z" level=info msg="StartContainer for \"0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71\"" Jan 13 20:42:33.267174 systemd[1]: Started cri-containerd-0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71.scope - libcontainer container 0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71. 
Jan 13 20:42:33.300236 systemd[1]: cri-containerd-0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71.scope: Deactivated successfully. Jan 13 20:42:33.302573 containerd[1511]: time="2025-01-13T20:42:33.302250624Z" level=info msg="StartContainer for \"0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71\" returns successfully" Jan 13 20:42:33.333825 containerd[1511]: time="2025-01-13T20:42:33.332941820Z" level=info msg="shim disconnected" id=0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71 namespace=k8s.io Jan 13 20:42:33.333825 containerd[1511]: time="2025-01-13T20:42:33.333017184Z" level=warning msg="cleaning up after shim disconnected" id=0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71 namespace=k8s.io Jan 13 20:42:33.333825 containerd[1511]: time="2025-01-13T20:42:33.333027920Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:42:33.609435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71-rootfs.mount: Deactivated successfully. Jan 13 20:42:34.206522 containerd[1511]: time="2025-01-13T20:42:34.206478716Z" level=info msg="CreateContainer within sandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:42:34.222342 containerd[1511]: time="2025-01-13T20:42:34.221806590Z" level=info msg="CreateContainer within sandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2\"" Jan 13 20:42:34.223008 containerd[1511]: time="2025-01-13T20:42:34.222977476Z" level=info msg="StartContainer for \"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2\"" Jan 13 20:42:34.275953 systemd[1]: Started cri-containerd-4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2.scope - libcontainer container 4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2. Jan 13 20:42:34.313401 containerd[1511]: time="2025-01-13T20:42:34.313345726Z" level=info msg="StartContainer for \"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2\" returns successfully" Jan 13 20:42:34.475403 kubelet[2752]: I0113 20:42:34.475286 2752 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:42:34.508269 kubelet[2752]: I0113 20:42:34.508228 2752 topology_manager.go:215] "Topology Admit Handler" podUID="32f2f316-59bd-49f9-9c4d-fc2aefe64dbb" podNamespace="kube-system" podName="coredns-76f75df574-zxbx9" Jan 13 20:42:34.514331 kubelet[2752]: I0113 20:42:34.514299 2752 topology_manager.go:215] "Topology Admit Handler" podUID="2abb113f-e8b1-47a2-a5c0-56ec59d1e92a" podNamespace="kube-system" podName="coredns-76f75df574-8qssv" Jan 13 20:42:34.522212 systemd[1]: Created slice kubepods-burstable-pod32f2f316_59bd_49f9_9c4d_fc2aefe64dbb.slice - libcontainer container kubepods-burstable-pod32f2f316_59bd_49f9_9c4d_fc2aefe64dbb.slice. Jan 13 20:42:34.529719 systemd[1]: Created slice kubepods-burstable-pod2abb113f_e8b1_47a2_a5c0_56ec59d1e92a.slice - libcontainer container kubepods-burstable-pod2abb113f_e8b1_47a2_a5c0_56ec59d1e92a.slice. 
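The containerd entries from 20:42:30 through 20:42:34 step through the cilium-chd27 containers inside sandbox f58fd3592ee2...: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state are each created, started and torn down in turn before the long-running cilium-agent container starts. A short Python sketch that recovers that ordering from a saved copy of this journal; the file name and the regular expression are assumptions made for illustration, not anything taken from the log itself:

    import re
    import sys

    # Matches containerd's "... returns container id" messages as they appear in this
    # journal, where the inner quotes are backslash-escaped inside msg="...".
    PATTERN = re.compile(
        r'for &ContainerMetadata\{Name:([\w-]+),Attempt:\d+,\} '
        r'returns container id \\?"([0-9a-f]+)\\?"'
    )

    def container_sequence(log_text: str):
        """Yield (container name, short id) pairs in creation order."""
        for match in PATTERN.finditer(log_text):
            yield match.group(1), match.group(2)[:12]

    if __name__ == "__main__":
        # Usage: python3 container_sequence.py journal.txt  (hypothetical file name)
        with open(sys.argv[1], encoding="utf-8") as fh:
            for name, short_id in container_sequence(fh.read()):
                print(f"{name:<25} {short_id}")

Run over the entries above, this would list kube-proxy followed by mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state and cilium-agent, in the order the CreateContainer responses appear.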
Jan 13 20:42:34.707017 kubelet[2752]: I0113 20:42:34.706974 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87sqf\" (UniqueName: \"kubernetes.io/projected/32f2f316-59bd-49f9-9c4d-fc2aefe64dbb-kube-api-access-87sqf\") pod \"coredns-76f75df574-zxbx9\" (UID: \"32f2f316-59bd-49f9-9c4d-fc2aefe64dbb\") " pod="kube-system/coredns-76f75df574-zxbx9" Jan 13 20:42:34.707017 kubelet[2752]: I0113 20:42:34.707025 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggqlx\" (UniqueName: \"kubernetes.io/projected/2abb113f-e8b1-47a2-a5c0-56ec59d1e92a-kube-api-access-ggqlx\") pod \"coredns-76f75df574-8qssv\" (UID: \"2abb113f-e8b1-47a2-a5c0-56ec59d1e92a\") " pod="kube-system/coredns-76f75df574-8qssv" Jan 13 20:42:34.707220 kubelet[2752]: I0113 20:42:34.707053 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32f2f316-59bd-49f9-9c4d-fc2aefe64dbb-config-volume\") pod \"coredns-76f75df574-zxbx9\" (UID: \"32f2f316-59bd-49f9-9c4d-fc2aefe64dbb\") " pod="kube-system/coredns-76f75df574-zxbx9" Jan 13 20:42:34.707220 kubelet[2752]: I0113 20:42:34.707098 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2abb113f-e8b1-47a2-a5c0-56ec59d1e92a-config-volume\") pod \"coredns-76f75df574-8qssv\" (UID: \"2abb113f-e8b1-47a2-a5c0-56ec59d1e92a\") " pod="kube-system/coredns-76f75df574-8qssv" Jan 13 20:42:35.128918 containerd[1511]: time="2025-01-13T20:42:35.128820011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zxbx9,Uid:32f2f316-59bd-49f9-9c4d-fc2aefe64dbb,Namespace:kube-system,Attempt:0,}" Jan 13 20:42:35.137728 containerd[1511]: time="2025-01-13T20:42:35.137560695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8qssv,Uid:2abb113f-e8b1-47a2-a5c0-56ec59d1e92a,Namespace:kube-system,Attempt:0,}" Jan 13 20:42:35.246672 kubelet[2752]: I0113 20:42:35.246594 2752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-chd27" podStartSLOduration=6.197610714 podStartE2EDuration="18.246501712s" podCreationTimestamp="2025-01-13 20:42:17 +0000 UTC" firstStartedPulling="2025-01-13 20:42:18.482473588 +0000 UTC m=+15.648243424" lastFinishedPulling="2025-01-13 20:42:30.531364573 +0000 UTC m=+27.697134422" observedRunningTime="2025-01-13 20:42:35.244323443 +0000 UTC m=+32.410093306" watchObservedRunningTime="2025-01-13 20:42:35.246501712 +0000 UTC m=+32.412271703" Jan 13 20:42:35.665382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3351514539.mount: Deactivated successfully. 
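The pod_startup_latency_tracker entry at 20:42:35.246 just above reports two durations for cilium-chd27, and the timestamps inside the same entry are consistent with podStartE2EDuration running from podCreationTimestamp to watchObservedRunningTime, while podStartSLOduration is that interval minus the image pull window (lastFinishedPulling minus firstStartedPulling). A back-of-the-envelope check in Python, using the entry's own timestamps truncated to microseconds; this reading of the two fields is inferred from these numbers, not taken from the kubelet source:

    # Seconds past 20:42:00 UTC, copied from the cilium-chd27 entry above.
    created    = 17.000000   # podCreationTimestamp  2025-01-13 20:42:17
    pull_start = 18.482473   # firstStartedPulling
    pull_end   = 30.531364   # lastFinishedPulling
    observed   = 35.246501   # watchObservedRunningTime

    e2e  = observed - created        # ~18.2465s, the reported podStartE2EDuration
    pull = pull_end - pull_start     # ~12.0489s spent pulling the cilium image
    slo  = e2e - pull                # ~6.1976s, the reported podStartSLOduration
    print(f"e2e={e2e:.4f}s  pull={pull:.4f}s  slo={slo:.4f}s")

For the kube-proxy and static-pod entries earlier, whose pulling timestamps are the zero value, the two reported durations collapse to the same number, which fits the same reading.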
Jan 13 20:42:36.317876 containerd[1511]: time="2025-01-13T20:42:36.317738749Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:42:36.319014 containerd[1511]: time="2025-01-13T20:42:36.318972205Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18905969" Jan 13 20:42:36.319858 containerd[1511]: time="2025-01-13T20:42:36.319836413Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:42:36.321881 containerd[1511]: time="2025-01-13T20:42:36.321854552Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.789563572s" Jan 13 20:42:36.321961 containerd[1511]: time="2025-01-13T20:42:36.321888830Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 20:42:36.323486 containerd[1511]: time="2025-01-13T20:42:36.323461874Z" level=info msg="CreateContainer within sandbox \"7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:42:36.336107 containerd[1511]: time="2025-01-13T20:42:36.336070728Z" level=info msg="CreateContainer within sandbox \"7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63\"" Jan 13 20:42:36.337044 containerd[1511]: time="2025-01-13T20:42:36.336981896Z" level=info msg="StartContainer for \"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63\"" Jan 13 20:42:36.364938 systemd[1]: Started cri-containerd-66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63.scope - libcontainer container 66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63. 
Jan 13 20:42:36.396669 containerd[1511]: time="2025-01-13T20:42:36.396623341Z" level=info msg="StartContainer for \"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63\" returns successfully" Jan 13 20:42:37.256708 kubelet[2752]: I0113 20:42:37.256309 2752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-5qwc4" podStartSLOduration=2.437458281 podStartE2EDuration="20.25617815s" podCreationTimestamp="2025-01-13 20:42:17 +0000 UTC" firstStartedPulling="2025-01-13 20:42:18.503404046 +0000 UTC m=+15.669173887" lastFinishedPulling="2025-01-13 20:42:36.322123913 +0000 UTC m=+33.487893756" observedRunningTime="2025-01-13 20:42:37.255686299 +0000 UTC m=+34.421456255" watchObservedRunningTime="2025-01-13 20:42:37.25617815 +0000 UTC m=+34.421948097" Jan 13 20:42:39.319106 systemd-networkd[1443]: cilium_host: Link UP Jan 13 20:42:39.319658 systemd-networkd[1443]: cilium_net: Link UP Jan 13 20:42:39.320723 systemd-networkd[1443]: cilium_net: Gained carrier Jan 13 20:42:39.320947 systemd-networkd[1443]: cilium_host: Gained carrier Jan 13 20:42:39.471874 systemd-networkd[1443]: cilium_vxlan: Link UP Jan 13 20:42:39.471884 systemd-networkd[1443]: cilium_vxlan: Gained carrier Jan 13 20:42:39.778903 systemd-networkd[1443]: cilium_host: Gained IPv6LL Jan 13 20:42:39.848919 kernel: NET: Registered PF_ALG protocol family Jan 13 20:42:39.922022 systemd-networkd[1443]: cilium_net: Gained IPv6LL Jan 13 20:42:40.562249 systemd-networkd[1443]: cilium_vxlan: Gained IPv6LL Jan 13 20:42:40.648269 systemd-networkd[1443]: lxc_health: Link UP Jan 13 20:42:40.652954 systemd-networkd[1443]: lxc_health: Gained carrier Jan 13 20:42:41.213012 systemd-networkd[1443]: lxc46bb88f3e75f: Link UP Jan 13 20:42:41.227452 kernel: eth0: renamed from tmp4f26a Jan 13 20:42:41.235887 systemd-networkd[1443]: lxc46bb88f3e75f: Gained carrier Jan 13 20:42:41.261693 systemd-networkd[1443]: lxc72ac084a931d: Link UP Jan 13 20:42:41.272801 kernel: eth0: renamed from tmpb7d96 Jan 13 20:42:41.278691 systemd-networkd[1443]: lxc72ac084a931d: Gained carrier Jan 13 20:42:42.482153 systemd-networkd[1443]: lxc46bb88f3e75f: Gained IPv6LL Jan 13 20:42:42.610126 systemd-networkd[1443]: lxc_health: Gained IPv6LL Jan 13 20:42:42.610570 systemd-networkd[1443]: lxc72ac084a931d: Gained IPv6LL Jan 13 20:42:45.618475 containerd[1511]: time="2025-01-13T20:42:45.617516032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:42:45.618475 containerd[1511]: time="2025-01-13T20:42:45.617579269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:42:45.618475 containerd[1511]: time="2025-01-13T20:42:45.617595146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:42:45.618475 containerd[1511]: time="2025-01-13T20:42:45.617707132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:42:45.625830 containerd[1511]: time="2025-01-13T20:42:45.624114326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:42:45.625830 containerd[1511]: time="2025-01-13T20:42:45.624166545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:42:45.625830 containerd[1511]: time="2025-01-13T20:42:45.624182390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:42:45.625830 containerd[1511]: time="2025-01-13T20:42:45.624291955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:42:45.687720 systemd[1]: Started cri-containerd-4f26afce9e23473a1d837b37a7b26958289ad18ca3f3859d524eb00883b5cb9a.scope - libcontainer container 4f26afce9e23473a1d837b37a7b26958289ad18ca3f3859d524eb00883b5cb9a. Jan 13 20:42:45.690857 systemd[1]: Started cri-containerd-b7d968e56f8b7891a99324b4b2f30ba8785206362d9238ca2f27c081203767db.scope - libcontainer container b7d968e56f8b7891a99324b4b2f30ba8785206362d9238ca2f27c081203767db. Jan 13 20:42:45.758338 containerd[1511]: time="2025-01-13T20:42:45.758151924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zxbx9,Uid:32f2f316-59bd-49f9-9c4d-fc2aefe64dbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7d968e56f8b7891a99324b4b2f30ba8785206362d9238ca2f27c081203767db\"" Jan 13 20:42:45.762783 containerd[1511]: time="2025-01-13T20:42:45.762391848Z" level=info msg="CreateContainer within sandbox \"b7d968e56f8b7891a99324b4b2f30ba8785206362d9238ca2f27c081203767db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:42:45.791142 containerd[1511]: time="2025-01-13T20:42:45.791085409Z" level=info msg="CreateContainer within sandbox \"b7d968e56f8b7891a99324b4b2f30ba8785206362d9238ca2f27c081203767db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"edba6473ac6b88c7624f4511d4a64af8df2598824f43d954de42fd5be92476fd\"" Jan 13 20:42:45.793980 containerd[1511]: time="2025-01-13T20:42:45.792359834Z" level=info msg="StartContainer for \"edba6473ac6b88c7624f4511d4a64af8df2598824f43d954de42fd5be92476fd\"" Jan 13 20:42:45.798201 containerd[1511]: time="2025-01-13T20:42:45.798165488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8qssv,Uid:2abb113f-e8b1-47a2-a5c0-56ec59d1e92a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f26afce9e23473a1d837b37a7b26958289ad18ca3f3859d524eb00883b5cb9a\"" Jan 13 20:42:45.804948 containerd[1511]: time="2025-01-13T20:42:45.803959608Z" level=info msg="CreateContainer within sandbox \"4f26afce9e23473a1d837b37a7b26958289ad18ca3f3859d524eb00883b5cb9a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:42:45.815636 containerd[1511]: time="2025-01-13T20:42:45.815595876Z" level=info msg="CreateContainer within sandbox \"4f26afce9e23473a1d837b37a7b26958289ad18ca3f3859d524eb00883b5cb9a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a325e3fac699d1aabeb1e9e29843469e720669c376cf039f6338da3a7f67400b\"" Jan 13 20:42:45.816541 containerd[1511]: time="2025-01-13T20:42:45.816520439Z" level=info msg="StartContainer for \"a325e3fac699d1aabeb1e9e29843469e720669c376cf039f6338da3a7f67400b\"" Jan 13 20:42:45.845981 systemd[1]: Started cri-containerd-edba6473ac6b88c7624f4511d4a64af8df2598824f43d954de42fd5be92476fd.scope - libcontainer container edba6473ac6b88c7624f4511d4a64af8df2598824f43d954de42fd5be92476fd. Jan 13 20:42:45.866960 systemd[1]: Started cri-containerd-a325e3fac699d1aabeb1e9e29843469e720669c376cf039f6338da3a7f67400b.scope - libcontainer container a325e3fac699d1aabeb1e9e29843469e720669c376cf039f6338da3a7f67400b. 
Jan 13 20:42:45.890906 containerd[1511]: time="2025-01-13T20:42:45.890045411Z" level=info msg="StartContainer for \"edba6473ac6b88c7624f4511d4a64af8df2598824f43d954de42fd5be92476fd\" returns successfully" Jan 13 20:42:45.901472 containerd[1511]: time="2025-01-13T20:42:45.901392933Z" level=info msg="StartContainer for \"a325e3fac699d1aabeb1e9e29843469e720669c376cf039f6338da3a7f67400b\" returns successfully" Jan 13 20:42:46.288135 kubelet[2752]: I0113 20:42:46.287912 2752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-8qssv" podStartSLOduration=29.287870055 podStartE2EDuration="29.287870055s" podCreationTimestamp="2025-01-13 20:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:42:46.285360932 +0000 UTC m=+43.451130793" watchObservedRunningTime="2025-01-13 20:42:46.287870055 +0000 UTC m=+43.453639917" Jan 13 20:42:46.318484 kubelet[2752]: I0113 20:42:46.318440 2752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zxbx9" podStartSLOduration=29.318382202 podStartE2EDuration="29.318382202s" podCreationTimestamp="2025-01-13 20:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:42:46.303485527 +0000 UTC m=+43.469255391" watchObservedRunningTime="2025-01-13 20:42:46.318382202 +0000 UTC m=+43.484152042" Jan 13 20:42:46.637340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount579998664.mount: Deactivated successfully. Jan 13 20:43:29.078092 systemd[1]: Started sshd@7-10.244.100.150:22-139.178.68.195:45928.service - OpenSSH per-connection server daemon (139.178.68.195:45928). Jan 13 20:43:30.029572 sshd[4134]: Accepted publickey for core from 139.178.68.195 port 45928 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:43:30.033365 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:43:30.041997 systemd-logind[1496]: New session 10 of user core. Jan 13 20:43:30.049205 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:43:31.129112 sshd[4136]: Connection closed by 139.178.68.195 port 45928 Jan 13 20:43:31.130084 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Jan 13 20:43:31.136636 systemd[1]: sshd@7-10.244.100.150:22-139.178.68.195:45928.service: Deactivated successfully. Jan 13 20:43:31.138878 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:43:31.139619 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:43:31.141043 systemd-logind[1496]: Removed session 10. Jan 13 20:43:36.290088 systemd[1]: Started sshd@8-10.244.100.150:22-139.178.68.195:48092.service - OpenSSH per-connection server daemon (139.178.68.195:48092). Jan 13 20:43:37.184918 sshd[4148]: Accepted publickey for core from 139.178.68.195 port 48092 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:43:37.186876 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:43:37.192635 systemd-logind[1496]: New session 11 of user core. Jan 13 20:43:37.204970 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 13 20:43:37.907087 sshd[4150]: Connection closed by 139.178.68.195 port 48092 Jan 13 20:43:37.908051 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Jan 13 20:43:37.913137 systemd[1]: sshd@8-10.244.100.150:22-139.178.68.195:48092.service: Deactivated successfully. Jan 13 20:43:37.915896 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:43:37.917075 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:43:37.918318 systemd-logind[1496]: Removed session 11. Jan 13 20:43:43.064997 systemd[1]: Started sshd@9-10.244.100.150:22-139.178.68.195:48094.service - OpenSSH per-connection server daemon (139.178.68.195:48094). Jan 13 20:43:43.964118 sshd[4163]: Accepted publickey for core from 139.178.68.195 port 48094 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:43:43.967630 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:43:43.978600 systemd-logind[1496]: New session 12 of user core. Jan 13 20:43:43.984209 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:43:44.676695 sshd[4165]: Connection closed by 139.178.68.195 port 48094 Jan 13 20:43:44.678133 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Jan 13 20:43:44.687445 systemd[1]: sshd@9-10.244.100.150:22-139.178.68.195:48094.service: Deactivated successfully. Jan 13 20:43:44.687887 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:43:44.691716 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:43:44.693385 systemd-logind[1496]: Removed session 12. Jan 13 20:43:49.841635 systemd[1]: Started sshd@10-10.244.100.150:22-139.178.68.195:42514.service - OpenSSH per-connection server daemon (139.178.68.195:42514). Jan 13 20:43:50.772170 sshd[4180]: Accepted publickey for core from 139.178.68.195 port 42514 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:43:50.774499 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:43:50.782806 systemd-logind[1496]: New session 13 of user core. Jan 13 20:43:50.788030 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:43:51.487115 sshd[4182]: Connection closed by 139.178.68.195 port 42514 Jan 13 20:43:51.488580 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Jan 13 20:43:51.494928 systemd[1]: sshd@10-10.244.100.150:22-139.178.68.195:42514.service: Deactivated successfully. Jan 13 20:43:51.498554 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:43:51.500310 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:43:51.501635 systemd-logind[1496]: Removed session 13. Jan 13 20:43:51.652097 systemd[1]: Started sshd@11-10.244.100.150:22-139.178.68.195:42530.service - OpenSSH per-connection server daemon (139.178.68.195:42530). Jan 13 20:43:52.563122 sshd[4194]: Accepted publickey for core from 139.178.68.195 port 42530 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:43:52.564839 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:43:52.572404 systemd-logind[1496]: New session 14 of user core. Jan 13 20:43:52.583049 systemd[1]: Started session-14.scope - Session 14 of User core. 
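The per-connection units in this stretch of the log (sshd@8-…, sshd@9-…, and so on) encode the accepted connection in their instance name: a sequence number, the local listen address, and the peer address, which is why each unit name ends with the same host:port that appears in parentheses in the "OpenSSH per-connection server daemon" message. Below is a purely illustrative parser of that naming pattern; the pattern is read off these log lines, not taken from any systemd or OpenSSH API.

```go
// Illustrative only: splits the instance part of a per-connection sshd unit name,
// as observed in the log, into sequence number, local endpoint, and peer endpoint.
package main

import (
	"fmt"
	"strings"
)

func main() {
	unit := "sshd@9-10.244.100.150:22-139.178.68.195:48094.service" // copied from the log

	instance := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(instance, "-", 3) // seq, local addr:port, peer addr:port

	fmt.Printf("connection #%s local=%s peer=%s\n", parts[0], parts[1], parts[2])
	// connection #9 local=10.244.100.150:22 peer=139.178.68.195:48094
}
```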
Jan 13 20:43:53.318167 sshd[4196]: Connection closed by 139.178.68.195 port 42530 Jan 13 20:43:53.319301 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Jan 13 20:43:53.329107 systemd[1]: sshd@11-10.244.100.150:22-139.178.68.195:42530.service: Deactivated successfully. Jan 13 20:43:53.333597 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:43:53.337658 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:43:53.340059 systemd-logind[1496]: Removed session 14. Jan 13 20:43:53.485254 systemd[1]: Started sshd@12-10.244.100.150:22-139.178.68.195:42534.service - OpenSSH per-connection server daemon (139.178.68.195:42534). Jan 13 20:43:54.431084 sshd[4205]: Accepted publickey for core from 139.178.68.195 port 42534 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:43:54.433568 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:43:54.442012 systemd-logind[1496]: New session 15 of user core. Jan 13 20:43:54.449996 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:43:55.159589 sshd[4207]: Connection closed by 139.178.68.195 port 42534 Jan 13 20:43:55.160564 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Jan 13 20:43:55.166378 systemd[1]: sshd@12-10.244.100.150:22-139.178.68.195:42534.service: Deactivated successfully. Jan 13 20:43:55.169557 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:43:55.170822 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:43:55.171820 systemd-logind[1496]: Removed session 15. Jan 13 20:44:00.323059 systemd[1]: Started sshd@13-10.244.100.150:22-139.178.68.195:59186.service - OpenSSH per-connection server daemon (139.178.68.195:59186). Jan 13 20:44:01.230147 sshd[4219]: Accepted publickey for core from 139.178.68.195 port 59186 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:44:01.233263 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:01.240380 systemd-logind[1496]: New session 16 of user core. Jan 13 20:44:01.245404 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:44:01.935567 sshd[4221]: Connection closed by 139.178.68.195 port 59186 Jan 13 20:44:01.939142 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:01.943431 systemd[1]: sshd@13-10.244.100.150:22-139.178.68.195:59186.service: Deactivated successfully. Jan 13 20:44:01.946232 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:44:01.948750 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:44:01.950091 systemd-logind[1496]: Removed session 16. Jan 13 20:44:07.096051 systemd[1]: Started sshd@14-10.244.100.150:22-139.178.68.195:59100.service - OpenSSH per-connection server daemon (139.178.68.195:59100). Jan 13 20:44:08.004013 sshd[4234]: Accepted publickey for core from 139.178.68.195 port 59100 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:44:08.006492 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:08.013731 systemd-logind[1496]: New session 17 of user core. Jan 13 20:44:08.024009 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 13 20:44:08.704834 sshd[4236]: Connection closed by 139.178.68.195 port 59100 Jan 13 20:44:08.706046 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:08.713432 systemd[1]: sshd@14-10.244.100.150:22-139.178.68.195:59100.service: Deactivated successfully. Jan 13 20:44:08.716256 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:44:08.717985 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:44:08.721360 systemd-logind[1496]: Removed session 17. Jan 13 20:44:13.860931 systemd[1]: Started sshd@15-10.244.100.150:22-139.178.68.195:59114.service - OpenSSH per-connection server daemon (139.178.68.195:59114). Jan 13 20:44:14.752164 sshd[4248]: Accepted publickey for core from 139.178.68.195 port 59114 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:44:14.754985 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:14.766180 systemd-logind[1496]: New session 18 of user core. Jan 13 20:44:14.771421 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:44:15.459641 sshd[4250]: Connection closed by 139.178.68.195 port 59114 Jan 13 20:44:15.461316 sshd-session[4248]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:15.470676 systemd[1]: sshd@15-10.244.100.150:22-139.178.68.195:59114.service: Deactivated successfully. Jan 13 20:44:15.475271 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:44:15.478341 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:44:15.480584 systemd-logind[1496]: Removed session 18. Jan 13 20:44:15.628455 systemd[1]: Started sshd@16-10.244.100.150:22-139.178.68.195:40734.service - OpenSSH per-connection server daemon (139.178.68.195:40734). Jan 13 20:44:16.546097 sshd[4261]: Accepted publickey for core from 139.178.68.195 port 40734 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:44:16.549736 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:16.557271 systemd-logind[1496]: New session 19 of user core. Jan 13 20:44:16.566033 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:44:17.501040 sshd[4263]: Connection closed by 139.178.68.195 port 40734 Jan 13 20:44:17.503013 sshd-session[4261]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:17.517061 systemd[1]: sshd@16-10.244.100.150:22-139.178.68.195:40734.service: Deactivated successfully. Jan 13 20:44:17.519483 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:44:17.521567 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:44:17.522867 systemd-logind[1496]: Removed session 19. Jan 13 20:44:17.664266 systemd[1]: Started sshd@17-10.244.100.150:22-139.178.68.195:40742.service - OpenSSH per-connection server daemon (139.178.68.195:40742). Jan 13 20:44:18.579084 sshd[4272]: Accepted publickey for core from 139.178.68.195 port 40742 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:44:18.582702 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:18.590466 systemd-logind[1496]: New session 20 of user core. Jan 13 20:44:18.595936 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 13 20:44:21.021669 sshd[4275]: Connection closed by 139.178.68.195 port 40742 Jan 13 20:44:21.025047 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:21.034511 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:44:21.035562 systemd[1]: sshd@17-10.244.100.150:22-139.178.68.195:40742.service: Deactivated successfully. Jan 13 20:44:21.039378 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:44:21.042353 systemd-logind[1496]: Removed session 20. Jan 13 20:44:21.184632 systemd[1]: Started sshd@18-10.244.100.150:22-139.178.68.195:40752.service - OpenSSH per-connection server daemon (139.178.68.195:40752). Jan 13 20:44:22.115825 sshd[4293]: Accepted publickey for core from 139.178.68.195 port 40752 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:44:22.119681 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:22.128852 systemd-logind[1496]: New session 21 of user core. Jan 13 20:44:22.139209 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:44:23.094511 sshd[4295]: Connection closed by 139.178.68.195 port 40752 Jan 13 20:44:23.094346 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:23.102414 systemd[1]: sshd@18-10.244.100.150:22-139.178.68.195:40752.service: Deactivated successfully. Jan 13 20:44:23.105832 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:44:23.107222 systemd-logind[1496]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:44:23.108717 systemd-logind[1496]: Removed session 21. Jan 13 20:44:23.258303 systemd[1]: Started sshd@19-10.244.100.150:22-139.178.68.195:40766.service - OpenSSH per-connection server daemon (139.178.68.195:40766). Jan 13 20:44:24.154718 sshd[4304]: Accepted publickey for core from 139.178.68.195 port 40766 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:44:24.158744 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:24.167679 systemd-logind[1496]: New session 22 of user core. Jan 13 20:44:24.172906 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:44:24.852087 sshd[4306]: Connection closed by 139.178.68.195 port 40766 Jan 13 20:44:24.851899 sshd-session[4304]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:24.859125 systemd[1]: sshd@19-10.244.100.150:22-139.178.68.195:40766.service: Deactivated successfully. Jan 13 20:44:24.862545 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:44:24.866228 systemd-logind[1496]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:44:24.867340 systemd-logind[1496]: Removed session 22. Jan 13 20:44:30.015154 systemd[1]: Started sshd@20-10.244.100.150:22-139.178.68.195:42454.service - OpenSSH per-connection server daemon (139.178.68.195:42454). Jan 13 20:44:30.903746 sshd[4320]: Accepted publickey for core from 139.178.68.195 port 42454 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:44:30.906507 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:30.914896 systemd-logind[1496]: New session 23 of user core. Jan 13 20:44:30.922120 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 13 20:44:31.608299 sshd[4322]: Connection closed by 139.178.68.195 port 42454 Jan 13 20:44:31.611324 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:31.617400 systemd-logind[1496]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:44:31.618598 systemd[1]: sshd@20-10.244.100.150:22-139.178.68.195:42454.service: Deactivated successfully. Jan 13 20:44:31.621704 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:44:31.624802 systemd-logind[1496]: Removed session 23. Jan 13 20:44:36.776387 systemd[1]: Started sshd@21-10.244.100.150:22-139.178.68.195:49874.service - OpenSSH per-connection server daemon (139.178.68.195:49874). Jan 13 20:44:37.679912 sshd[4333]: Accepted publickey for core from 139.178.68.195 port 49874 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:44:37.682329 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:37.693396 systemd-logind[1496]: New session 24 of user core. Jan 13 20:44:37.700171 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:44:38.376707 sshd[4335]: Connection closed by 139.178.68.195 port 49874 Jan 13 20:44:38.377537 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:38.383029 systemd[1]: sshd@21-10.244.100.150:22-139.178.68.195:49874.service: Deactivated successfully. Jan 13 20:44:38.386470 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:44:38.388572 systemd-logind[1496]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:44:38.389752 systemd-logind[1496]: Removed session 24. Jan 13 20:44:43.550883 systemd[1]: Started sshd@22-10.244.100.150:22-139.178.68.195:49876.service - OpenSSH per-connection server daemon (139.178.68.195:49876). Jan 13 20:44:44.448316 sshd[4346]: Accepted publickey for core from 139.178.68.195 port 49876 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:44:44.451787 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:44.461627 systemd-logind[1496]: New session 25 of user core. Jan 13 20:44:44.473258 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 20:44:45.157801 sshd[4348]: Connection closed by 139.178.68.195 port 49876 Jan 13 20:44:45.158517 sshd-session[4346]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:45.162877 systemd[1]: sshd@22-10.244.100.150:22-139.178.68.195:49876.service: Deactivated successfully. Jan 13 20:44:45.165604 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:44:45.166662 systemd-logind[1496]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:44:45.168342 systemd-logind[1496]: Removed session 25. Jan 13 20:44:45.321105 systemd[1]: Started sshd@23-10.244.100.150:22-139.178.68.195:53536.service - OpenSSH per-connection server daemon (139.178.68.195:53536). Jan 13 20:44:46.224512 sshd[4359]: Accepted publickey for core from 139.178.68.195 port 53536 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:44:46.228654 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:46.239018 systemd-logind[1496]: New session 26 of user core. Jan 13 20:44:46.246969 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 13 20:44:48.310805 containerd[1511]: time="2025-01-13T20:44:48.309777084Z" level=info msg="StopContainer for \"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63\" with timeout 30 (s)" Jan 13 20:44:48.314720 containerd[1511]: time="2025-01-13T20:44:48.314518947Z" level=info msg="Stop container \"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63\" with signal terminated" Jan 13 20:44:48.346041 systemd[1]: run-containerd-runc-k8s.io-4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2-runc.8qlnOn.mount: Deactivated successfully. Jan 13 20:44:48.346900 systemd[1]: cri-containerd-66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63.scope: Deactivated successfully. Jan 13 20:44:48.370617 containerd[1511]: time="2025-01-13T20:44:48.370493501Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:44:48.389954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63-rootfs.mount: Deactivated successfully. Jan 13 20:44:48.399432 containerd[1511]: time="2025-01-13T20:44:48.399305819Z" level=info msg="shim disconnected" id=66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63 namespace=k8s.io Jan 13 20:44:48.399432 containerd[1511]: time="2025-01-13T20:44:48.399427118Z" level=warning msg="cleaning up after shim disconnected" id=66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63 namespace=k8s.io Jan 13 20:44:48.399432 containerd[1511]: time="2025-01-13T20:44:48.399436979Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:44:48.403070 containerd[1511]: time="2025-01-13T20:44:48.403039767Z" level=info msg="StopContainer for \"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2\" with timeout 2 (s)" Jan 13 20:44:48.403397 containerd[1511]: time="2025-01-13T20:44:48.403367436Z" level=info msg="Stop container \"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2\" with signal terminated" Jan 13 20:44:48.416932 systemd-networkd[1443]: lxc_health: Link DOWN Jan 13 20:44:48.417996 systemd-networkd[1443]: lxc_health: Lost carrier Jan 13 20:44:48.433505 systemd[1]: cri-containerd-4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2.scope: Deactivated successfully. Jan 13 20:44:48.433780 systemd[1]: cri-containerd-4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2.scope: Consumed 8.031s CPU time. Jan 13 20:44:48.437745 containerd[1511]: time="2025-01-13T20:44:48.437474807Z" level=info msg="StopContainer for \"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63\" returns successfully" Jan 13 20:44:48.438710 containerd[1511]: time="2025-01-13T20:44:48.438684584Z" level=info msg="StopPodSandbox for \"7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd\"" Jan 13 20:44:48.447006 containerd[1511]: time="2025-01-13T20:44:48.440066052Z" level=info msg="Container to stop \"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:44:48.450010 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd-shm.mount: Deactivated successfully. 
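The StopContainer messages above ("Stop container ... with signal terminated", "with timeout 30 (s)", "with timeout 2 (s)") describe the usual graceful-stop pattern: deliver SIGTERM, allow a grace period, and only then force-kill. The sketch below shows that pattern against an ordinary child process; it is an illustration of the semantics only, not containerd's implementation.

```go
// Generic sketch of "stop with signal terminated, then kill after a timeout".
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	// Ask politely first, mirroring "Stop container ... with signal terminated".
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(timeout):
		_ = cmd.Process.Kill() // escalate to SIGKILL once the timeout expires
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithTimeout(cmd, 30*time.Second)) // "signal: terminated"
}
```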
Jan 13 20:44:48.463217 systemd[1]: cri-containerd-7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd.scope: Deactivated successfully. Jan 13 20:44:48.473474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2-rootfs.mount: Deactivated successfully. Jan 13 20:44:48.478723 containerd[1511]: time="2025-01-13T20:44:48.478497102Z" level=info msg="shim disconnected" id=4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2 namespace=k8s.io Jan 13 20:44:48.478723 containerd[1511]: time="2025-01-13T20:44:48.478578066Z" level=warning msg="cleaning up after shim disconnected" id=4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2 namespace=k8s.io Jan 13 20:44:48.478723 containerd[1511]: time="2025-01-13T20:44:48.478589667Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:44:48.497287 containerd[1511]: time="2025-01-13T20:44:48.497043288Z" level=info msg="shim disconnected" id=7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd namespace=k8s.io Jan 13 20:44:48.497287 containerd[1511]: time="2025-01-13T20:44:48.497110548Z" level=warning msg="cleaning up after shim disconnected" id=7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd namespace=k8s.io Jan 13 20:44:48.497287 containerd[1511]: time="2025-01-13T20:44:48.497161819Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:44:48.502016 containerd[1511]: time="2025-01-13T20:44:48.501977952Z" level=info msg="StopContainer for \"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2\" returns successfully" Jan 13 20:44:48.502705 containerd[1511]: time="2025-01-13T20:44:48.502445073Z" level=info msg="StopPodSandbox for \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\"" Jan 13 20:44:48.502705 containerd[1511]: time="2025-01-13T20:44:48.502480675Z" level=info msg="Container to stop \"c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:44:48.502705 containerd[1511]: time="2025-01-13T20:44:48.502512940Z" level=info msg="Container to stop \"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:44:48.502705 containerd[1511]: time="2025-01-13T20:44:48.502522490Z" level=info msg="Container to stop \"2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:44:48.502705 containerd[1511]: time="2025-01-13T20:44:48.502531395Z" level=info msg="Container to stop \"ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:44:48.502705 containerd[1511]: time="2025-01-13T20:44:48.502539932Z" level=info msg="Container to stop \"0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:44:48.513484 systemd[1]: cri-containerd-f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423.scope: Deactivated successfully. 
Jan 13 20:44:48.516621 containerd[1511]: time="2025-01-13T20:44:48.516564565Z" level=info msg="TearDown network for sandbox \"7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd\" successfully" Jan 13 20:44:48.516621 containerd[1511]: time="2025-01-13T20:44:48.516603953Z" level=info msg="StopPodSandbox for \"7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd\" returns successfully" Jan 13 20:44:48.552877 containerd[1511]: time="2025-01-13T20:44:48.552694253Z" level=info msg="shim disconnected" id=f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423 namespace=k8s.io Jan 13 20:44:48.553253 containerd[1511]: time="2025-01-13T20:44:48.552749969Z" level=warning msg="cleaning up after shim disconnected" id=f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423 namespace=k8s.io Jan 13 20:44:48.553253 containerd[1511]: time="2025-01-13T20:44:48.552923738Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:44:48.572361 containerd[1511]: time="2025-01-13T20:44:48.572124734Z" level=info msg="TearDown network for sandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" successfully" Jan 13 20:44:48.572361 containerd[1511]: time="2025-01-13T20:44:48.572159903Z" level=info msg="StopPodSandbox for \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" returns successfully" Jan 13 20:44:48.572769 kubelet[2752]: I0113 20:44:48.572725 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da4cb543-c89a-4c22-8787-b25d9d6b8778-cilium-config-path\") pod \"da4cb543-c89a-4c22-8787-b25d9d6b8778\" (UID: \"da4cb543-c89a-4c22-8787-b25d9d6b8778\") " Jan 13 20:44:48.573603 kubelet[2752]: I0113 20:44:48.572883 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zvkf\" (UniqueName: \"kubernetes.io/projected/da4cb543-c89a-4c22-8787-b25d9d6b8778-kube-api-access-8zvkf\") pod \"da4cb543-c89a-4c22-8787-b25d9d6b8778\" (UID: \"da4cb543-c89a-4c22-8787-b25d9d6b8778\") " Jan 13 20:44:48.582001 kubelet[2752]: I0113 20:44:48.576386 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4cb543-c89a-4c22-8787-b25d9d6b8778-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "da4cb543-c89a-4c22-8787-b25d9d6b8778" (UID: "da4cb543-c89a-4c22-8787-b25d9d6b8778"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:44:48.591944 kubelet[2752]: I0113 20:44:48.591909 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da4cb543-c89a-4c22-8787-b25d9d6b8778-kube-api-access-8zvkf" (OuterVolumeSpecName: "kube-api-access-8zvkf") pod "da4cb543-c89a-4c22-8787-b25d9d6b8778" (UID: "da4cb543-c89a-4c22-8787-b25d9d6b8778"). InnerVolumeSpecName "kube-api-access-8zvkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:44:48.673866 kubelet[2752]: I0113 20:44:48.673687 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ddb2c517-f113-4df6-a44a-3046960b02a0-clustermesh-secrets\") pod \"ddb2c517-f113-4df6-a44a-3046960b02a0\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " Jan 13 20:44:48.674299 kubelet[2752]: I0113 20:44:48.674269 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-host-proc-sys-net\") pod \"ddb2c517-f113-4df6-a44a-3046960b02a0\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " Jan 13 20:44:48.674563 kubelet[2752]: I0113 20:44:48.674540 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-host-proc-sys-kernel\") pod \"ddb2c517-f113-4df6-a44a-3046960b02a0\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " Jan 13 20:44:48.674806 kubelet[2752]: I0113 20:44:48.674786 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddb2c517-f113-4df6-a44a-3046960b02a0-cilium-config-path\") pod \"ddb2c517-f113-4df6-a44a-3046960b02a0\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " Jan 13 20:44:48.675226 kubelet[2752]: I0113 20:44:48.675175 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-cilium-cgroup\") pod \"ddb2c517-f113-4df6-a44a-3046960b02a0\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " Jan 13 20:44:48.675451 kubelet[2752]: I0113 20:44:48.675418 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-hostproc\") pod \"ddb2c517-f113-4df6-a44a-3046960b02a0\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " Jan 13 20:44:48.675791 kubelet[2752]: I0113 20:44:48.675610 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ddb2c517-f113-4df6-a44a-3046960b02a0-hubble-tls\") pod \"ddb2c517-f113-4df6-a44a-3046960b02a0\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " Jan 13 20:44:48.675791 kubelet[2752]: I0113 20:44:48.675694 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-cni-path\") pod \"ddb2c517-f113-4df6-a44a-3046960b02a0\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " Jan 13 20:44:48.676583 kubelet[2752]: I0113 20:44:48.676158 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-bpf-maps\") pod \"ddb2c517-f113-4df6-a44a-3046960b02a0\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " Jan 13 20:44:48.676583 kubelet[2752]: I0113 20:44:48.676323 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ddb2c517-f113-4df6-a44a-3046960b02a0" (UID: "ddb2c517-f113-4df6-a44a-3046960b02a0"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:44:48.676583 kubelet[2752]: I0113 20:44:48.676404 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ddb2c517-f113-4df6-a44a-3046960b02a0" (UID: "ddb2c517-f113-4df6-a44a-3046960b02a0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:44:48.676583 kubelet[2752]: I0113 20:44:48.676446 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ddb2c517-f113-4df6-a44a-3046960b02a0" (UID: "ddb2c517-f113-4df6-a44a-3046960b02a0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:44:48.677291 kubelet[2752]: I0113 20:44:48.677105 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb2c517-f113-4df6-a44a-3046960b02a0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ddb2c517-f113-4df6-a44a-3046960b02a0" (UID: "ddb2c517-f113-4df6-a44a-3046960b02a0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:44:48.677291 kubelet[2752]: I0113 20:44:48.677119 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-lib-modules\") pod \"ddb2c517-f113-4df6-a44a-3046960b02a0\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " Jan 13 20:44:48.677291 kubelet[2752]: I0113 20:44:48.677165 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ddb2c517-f113-4df6-a44a-3046960b02a0" (UID: "ddb2c517-f113-4df6-a44a-3046960b02a0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:44:48.677291 kubelet[2752]: I0113 20:44:48.677192 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-cilium-run\") pod \"ddb2c517-f113-4df6-a44a-3046960b02a0\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " Jan 13 20:44:48.677291 kubelet[2752]: I0113 20:44:48.677219 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-xtables-lock\") pod \"ddb2c517-f113-4df6-a44a-3046960b02a0\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " Jan 13 20:44:48.677959 kubelet[2752]: I0113 20:44:48.677217 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-hostproc" (OuterVolumeSpecName: "hostproc") pod "ddb2c517-f113-4df6-a44a-3046960b02a0" (UID: "ddb2c517-f113-4df6-a44a-3046960b02a0"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:44:48.677959 kubelet[2752]: I0113 20:44:48.677249 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s5dp\" (UniqueName: \"kubernetes.io/projected/ddb2c517-f113-4df6-a44a-3046960b02a0-kube-api-access-7s5dp\") pod \"ddb2c517-f113-4df6-a44a-3046960b02a0\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " Jan 13 20:44:48.677959 kubelet[2752]: I0113 20:44:48.677275 2752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-etc-cni-netd\") pod \"ddb2c517-f113-4df6-a44a-3046960b02a0\" (UID: \"ddb2c517-f113-4df6-a44a-3046960b02a0\") " Jan 13 20:44:48.677959 kubelet[2752]: I0113 20:44:48.677318 2752 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-host-proc-sys-net\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.677959 kubelet[2752]: I0113 20:44:48.677334 2752 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-cilium-cgroup\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.677959 kubelet[2752]: I0113 20:44:48.677349 2752 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-host-proc-sys-kernel\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.677959 kubelet[2752]: I0113 20:44:48.677362 2752 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-hostproc\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.678401 kubelet[2752]: I0113 20:44:48.677378 2752 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-lib-modules\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.678401 kubelet[2752]: I0113 20:44:48.677392 2752 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da4cb543-c89a-4c22-8787-b25d9d6b8778-cilium-config-path\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.678401 kubelet[2752]: I0113 20:44:48.677405 2752 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8zvkf\" (UniqueName: \"kubernetes.io/projected/da4cb543-c89a-4c22-8787-b25d9d6b8778-kube-api-access-8zvkf\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.678401 kubelet[2752]: I0113 20:44:48.677419 2752 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ddb2c517-f113-4df6-a44a-3046960b02a0-clustermesh-secrets\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.678401 kubelet[2752]: I0113 20:44:48.677443 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ddb2c517-f113-4df6-a44a-3046960b02a0" (UID: "ddb2c517-f113-4df6-a44a-3046960b02a0"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:44:48.678401 kubelet[2752]: I0113 20:44:48.677466 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ddb2c517-f113-4df6-a44a-3046960b02a0" (UID: "ddb2c517-f113-4df6-a44a-3046960b02a0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:44:48.679905 kubelet[2752]: I0113 20:44:48.677484 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ddb2c517-f113-4df6-a44a-3046960b02a0" (UID: "ddb2c517-f113-4df6-a44a-3046960b02a0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:44:48.680130 kubelet[2752]: I0113 20:44:48.680096 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-cni-path" (OuterVolumeSpecName: "cni-path") pod "ddb2c517-f113-4df6-a44a-3046960b02a0" (UID: "ddb2c517-f113-4df6-a44a-3046960b02a0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:44:48.680216 kubelet[2752]: I0113 20:44:48.680148 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ddb2c517-f113-4df6-a44a-3046960b02a0" (UID: "ddb2c517-f113-4df6-a44a-3046960b02a0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:44:48.682019 kubelet[2752]: I0113 20:44:48.681934 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb2c517-f113-4df6-a44a-3046960b02a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ddb2c517-f113-4df6-a44a-3046960b02a0" (UID: "ddb2c517-f113-4df6-a44a-3046960b02a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:44:48.682867 kubelet[2752]: I0113 20:44:48.682835 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb2c517-f113-4df6-a44a-3046960b02a0-kube-api-access-7s5dp" (OuterVolumeSpecName: "kube-api-access-7s5dp") pod "ddb2c517-f113-4df6-a44a-3046960b02a0" (UID: "ddb2c517-f113-4df6-a44a-3046960b02a0"). InnerVolumeSpecName "kube-api-access-7s5dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:44:48.685010 kubelet[2752]: I0113 20:44:48.684928 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb2c517-f113-4df6-a44a-3046960b02a0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ddb2c517-f113-4df6-a44a-3046960b02a0" (UID: "ddb2c517-f113-4df6-a44a-3046960b02a0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:44:48.699726 systemd[1]: Removed slice kubepods-burstable-podddb2c517_f113_4df6_a44a_3046960b02a0.slice - libcontainer container kubepods-burstable-podddb2c517_f113_4df6_a44a_3046960b02a0.slice. Jan 13 20:44:48.699984 systemd[1]: kubepods-burstable-podddb2c517_f113_4df6_a44a_3046960b02a0.slice: Consumed 8.133s CPU time. 
Jan 13 20:44:48.708451 kubelet[2752]: I0113 20:44:48.707506 2752 scope.go:117] "RemoveContainer" containerID="4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2" Jan 13 20:44:48.728612 containerd[1511]: time="2025-01-13T20:44:48.726645919Z" level=info msg="RemoveContainer for \"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2\"" Jan 13 20:44:48.729777 systemd[1]: Removed slice kubepods-besteffort-podda4cb543_c89a_4c22_8787_b25d9d6b8778.slice - libcontainer container kubepods-besteffort-podda4cb543_c89a_4c22_8787_b25d9d6b8778.slice. Jan 13 20:44:48.736538 containerd[1511]: time="2025-01-13T20:44:48.734978490Z" level=info msg="RemoveContainer for \"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2\" returns successfully" Jan 13 20:44:48.736668 kubelet[2752]: I0113 20:44:48.735890 2752 scope.go:117] "RemoveContainer" containerID="0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71" Jan 13 20:44:48.743533 containerd[1511]: time="2025-01-13T20:44:48.743496206Z" level=info msg="RemoveContainer for \"0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71\"" Jan 13 20:44:48.745769 containerd[1511]: time="2025-01-13T20:44:48.745726147Z" level=info msg="RemoveContainer for \"0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71\" returns successfully" Jan 13 20:44:48.746140 kubelet[2752]: I0113 20:44:48.746116 2752 scope.go:117] "RemoveContainer" containerID="ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae" Jan 13 20:44:48.749952 containerd[1511]: time="2025-01-13T20:44:48.749919457Z" level=info msg="RemoveContainer for \"ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae\"" Jan 13 20:44:48.751802 containerd[1511]: time="2025-01-13T20:44:48.751774761Z" level=info msg="RemoveContainer for \"ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae\" returns successfully" Jan 13 20:44:48.752014 kubelet[2752]: I0113 20:44:48.751995 2752 scope.go:117] "RemoveContainer" containerID="2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674" Jan 13 20:44:48.759816 containerd[1511]: time="2025-01-13T20:44:48.759765304Z" level=info msg="RemoveContainer for \"2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674\"" Jan 13 20:44:48.766538 containerd[1511]: time="2025-01-13T20:44:48.766480139Z" level=info msg="RemoveContainer for \"2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674\" returns successfully" Jan 13 20:44:48.766930 kubelet[2752]: I0113 20:44:48.766896 2752 scope.go:117] "RemoveContainer" containerID="c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703" Jan 13 20:44:48.768630 containerd[1511]: time="2025-01-13T20:44:48.768587157Z" level=info msg="RemoveContainer for \"c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703\"" Jan 13 20:44:48.770879 containerd[1511]: time="2025-01-13T20:44:48.770845700Z" level=info msg="RemoveContainer for \"c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703\" returns successfully" Jan 13 20:44:48.771366 kubelet[2752]: I0113 20:44:48.771023 2752 scope.go:117] "RemoveContainer" containerID="4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2" Jan 13 20:44:48.771474 containerd[1511]: time="2025-01-13T20:44:48.771421110Z" level=error msg="ContainerStatus for \"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2\": not found" Jan 13 20:44:48.775393 kubelet[2752]: E0113 20:44:48.775358 2752 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2\": not found" containerID="4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2" Jan 13 20:44:48.778229 kubelet[2752]: I0113 20:44:48.777813 2752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2"} err="failed to get container status \"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b53352f4c0eaceffb2c96bb9bffd73620f1aabfa47cf7675b830b7e151c7bd2\": not found" Jan 13 20:44:48.778229 kubelet[2752]: I0113 20:44:48.777870 2752 scope.go:117] "RemoveContainer" containerID="0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71" Jan 13 20:44:48.778229 kubelet[2752]: I0113 20:44:48.778067 2752 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-cni-path\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.778229 kubelet[2752]: I0113 20:44:48.778091 2752 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ddb2c517-f113-4df6-a44a-3046960b02a0-hubble-tls\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.778229 kubelet[2752]: I0113 20:44:48.778109 2752 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-bpf-maps\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.778229 kubelet[2752]: I0113 20:44:48.778132 2752 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7s5dp\" (UniqueName: \"kubernetes.io/projected/ddb2c517-f113-4df6-a44a-3046960b02a0-kube-api-access-7s5dp\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.778229 kubelet[2752]: I0113 20:44:48.778151 2752 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-cilium-run\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.778643 kubelet[2752]: I0113 20:44:48.778169 2752 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-xtables-lock\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.778643 kubelet[2752]: I0113 20:44:48.778186 2752 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ddb2c517-f113-4df6-a44a-3046960b02a0-etc-cni-netd\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.778643 kubelet[2752]: I0113 20:44:48.778203 2752 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddb2c517-f113-4df6-a44a-3046960b02a0-cilium-config-path\") on node \"srv-rxqun.gb1.brightbox.com\" DevicePath \"\"" Jan 13 20:44:48.779013 containerd[1511]: time="2025-01-13T20:44:48.778985290Z" level=error msg="ContainerStatus for 
\"0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71\": not found" Jan 13 20:44:48.779199 kubelet[2752]: E0113 20:44:48.779127 2752 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71\": not found" containerID="0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71" Jan 13 20:44:48.779199 kubelet[2752]: I0113 20:44:48.779155 2752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71"} err="failed to get container status \"0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71\": rpc error: code = NotFound desc = an error occurred when try to find container \"0631795d22d9b010026c38853f7d0fd09b15266c6e375c96fb58ad95ba19af71\": not found" Jan 13 20:44:48.779199 kubelet[2752]: I0113 20:44:48.779166 2752 scope.go:117] "RemoveContainer" containerID="ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae" Jan 13 20:44:48.779462 containerd[1511]: time="2025-01-13T20:44:48.779436782Z" level=error msg="ContainerStatus for \"ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae\": not found" Jan 13 20:44:48.779560 kubelet[2752]: E0113 20:44:48.779545 2752 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae\": not found" containerID="ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae" Jan 13 20:44:48.779595 kubelet[2752]: I0113 20:44:48.779580 2752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae"} err="failed to get container status \"ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae088a0e01a7df2bcb14d00124c949a4f49e4a00a329c28096977f771e200dae\": not found" Jan 13 20:44:48.779627 kubelet[2752]: I0113 20:44:48.779597 2752 scope.go:117] "RemoveContainer" containerID="2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674" Jan 13 20:44:48.779771 containerd[1511]: time="2025-01-13T20:44:48.779733491Z" level=error msg="ContainerStatus for \"2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674\": not found" Jan 13 20:44:48.779962 kubelet[2752]: E0113 20:44:48.779871 2752 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674\": not found" containerID="2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674" Jan 13 20:44:48.779962 kubelet[2752]: I0113 20:44:48.779896 2752 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674"} err="failed to get container status \"2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b066cf1c4d15c93b8c49f1e4df4f5bc849bfde3838edbd848181f2e22617674\": not found" Jan 13 20:44:48.779962 kubelet[2752]: I0113 20:44:48.779905 2752 scope.go:117] "RemoveContainer" containerID="c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703" Jan 13 20:44:48.780129 containerd[1511]: time="2025-01-13T20:44:48.780091543Z" level=error msg="ContainerStatus for \"c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703\": not found" Jan 13 20:44:48.780252 kubelet[2752]: E0113 20:44:48.780242 2752 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703\": not found" containerID="c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703" Jan 13 20:44:48.780387 kubelet[2752]: I0113 20:44:48.780323 2752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703"} err="failed to get container status \"c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703\": rpc error: code = NotFound desc = an error occurred when try to find container \"c52eeb386a3712954cada4d7134181e53fe82cede40f35192f823ee5d1bfa703\": not found" Jan 13 20:44:48.780387 kubelet[2752]: I0113 20:44:48.780336 2752 scope.go:117] "RemoveContainer" containerID="66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63" Jan 13 20:44:48.781530 containerd[1511]: time="2025-01-13T20:44:48.781461339Z" level=info msg="RemoveContainer for \"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63\"" Jan 13 20:44:48.783588 containerd[1511]: time="2025-01-13T20:44:48.783556802Z" level=info msg="RemoveContainer for \"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63\" returns successfully" Jan 13 20:44:48.783829 kubelet[2752]: I0113 20:44:48.783724 2752 scope.go:117] "RemoveContainer" containerID="66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63" Jan 13 20:44:48.783936 containerd[1511]: time="2025-01-13T20:44:48.783907989Z" level=error msg="ContainerStatus for \"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63\": not found" Jan 13 20:44:48.784085 kubelet[2752]: E0113 20:44:48.784037 2752 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63\": not found" containerID="66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63" Jan 13 20:44:48.784085 kubelet[2752]: I0113 20:44:48.784065 2752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63"} err="failed to get 
container status \"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63\": rpc error: code = NotFound desc = an error occurred when try to find container \"66c0baa2d0b740a8d88b2b648b92cc473964691770210d547864a6b2f0910d63\": not found" Jan 13 20:44:49.029784 kubelet[2752]: I0113 20:44:49.028656 2752 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="da4cb543-c89a-4c22-8787-b25d9d6b8778" path="/var/lib/kubelet/pods/da4cb543-c89a-4c22-8787-b25d9d6b8778/volumes" Jan 13 20:44:49.030034 kubelet[2752]: I0113 20:44:49.029995 2752 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ddb2c517-f113-4df6-a44a-3046960b02a0" path="/var/lib/kubelet/pods/ddb2c517-f113-4df6-a44a-3046960b02a0/volumes" Jan 13 20:44:49.334344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423-rootfs.mount: Deactivated successfully. Jan 13 20:44:49.334549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd-rootfs.mount: Deactivated successfully. Jan 13 20:44:49.334668 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423-shm.mount: Deactivated successfully. Jan 13 20:44:49.334848 systemd[1]: var-lib-kubelet-pods-ddb2c517\x2df113\x2d4df6\x2da44a\x2d3046960b02a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7s5dp.mount: Deactivated successfully. Jan 13 20:44:49.335004 systemd[1]: var-lib-kubelet-pods-da4cb543\x2dc89a\x2d4c22\x2d8787\x2db25d9d6b8778-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8zvkf.mount: Deactivated successfully. Jan 13 20:44:49.335130 systemd[1]: var-lib-kubelet-pods-ddb2c517\x2df113\x2d4df6\x2da44a\x2d3046960b02a0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:44:49.335263 systemd[1]: var-lib-kubelet-pods-ddb2c517\x2df113\x2d4df6\x2da44a\x2d3046960b02a0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 20:44:50.291587 sshd[4361]: Connection closed by 139.178.68.195 port 53536 Jan 13 20:44:50.292391 sshd-session[4359]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:50.297744 systemd[1]: sshd@23-10.244.100.150:22-139.178.68.195:53536.service: Deactivated successfully. Jan 13 20:44:50.300665 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 20:44:50.302173 systemd-logind[1496]: Session 26 logged out. Waiting for processes to exit. Jan 13 20:44:50.303695 systemd-logind[1496]: Removed session 26. Jan 13 20:44:50.459344 systemd[1]: Started sshd@24-10.244.100.150:22-139.178.68.195:53542.service - OpenSSH per-connection server daemon (139.178.68.195:53542). Jan 13 20:44:51.358976 sshd[4524]: Accepted publickey for core from 139.178.68.195 port 53542 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:44:51.362461 sshd-session[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:51.376058 systemd-logind[1496]: New session 27 of user core. Jan 13 20:44:51.381060 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 13 20:44:52.331129 kubelet[2752]: I0113 20:44:52.331075 2752 topology_manager.go:215] "Topology Admit Handler" podUID="3051114d-adf2-4fe6-9751-a799bcebbf88" podNamespace="kube-system" podName="cilium-5ncxm" Jan 13 20:44:52.334527 kubelet[2752]: E0113 20:44:52.334475 2752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddb2c517-f113-4df6-a44a-3046960b02a0" containerName="mount-cgroup" Jan 13 20:44:52.334527 kubelet[2752]: E0113 20:44:52.334521 2752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddb2c517-f113-4df6-a44a-3046960b02a0" containerName="apply-sysctl-overwrites" Jan 13 20:44:52.334527 kubelet[2752]: E0113 20:44:52.334531 2752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddb2c517-f113-4df6-a44a-3046960b02a0" containerName="mount-bpf-fs" Jan 13 20:44:52.334527 kubelet[2752]: E0113 20:44:52.334538 2752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddb2c517-f113-4df6-a44a-3046960b02a0" containerName="cilium-agent" Jan 13 20:44:52.334808 kubelet[2752]: E0113 20:44:52.334547 2752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddb2c517-f113-4df6-a44a-3046960b02a0" containerName="clean-cilium-state" Jan 13 20:44:52.334808 kubelet[2752]: E0113 20:44:52.334554 2752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da4cb543-c89a-4c22-8787-b25d9d6b8778" containerName="cilium-operator" Jan 13 20:44:52.334808 kubelet[2752]: I0113 20:44:52.334685 2752 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb2c517-f113-4df6-a44a-3046960b02a0" containerName="cilium-agent" Jan 13 20:44:52.334808 kubelet[2752]: I0113 20:44:52.334694 2752 memory_manager.go:354] "RemoveStaleState removing state" podUID="da4cb543-c89a-4c22-8787-b25d9d6b8778" containerName="cilium-operator" Jan 13 20:44:52.360636 systemd[1]: Created slice kubepods-burstable-pod3051114d_adf2_4fe6_9751_a799bcebbf88.slice - libcontainer container kubepods-burstable-pod3051114d_adf2_4fe6_9751_a799bcebbf88.slice. 
Jan 13 20:44:52.408705 kubelet[2752]: I0113 20:44:52.407199 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3051114d-adf2-4fe6-9751-a799bcebbf88-bpf-maps\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.408705 kubelet[2752]: I0113 20:44:52.407249 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3051114d-adf2-4fe6-9751-a799bcebbf88-cilium-ipsec-secrets\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.408705 kubelet[2752]: I0113 20:44:52.407275 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3051114d-adf2-4fe6-9751-a799bcebbf88-cilium-run\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.408705 kubelet[2752]: I0113 20:44:52.407295 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3051114d-adf2-4fe6-9751-a799bcebbf88-hostproc\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.408705 kubelet[2752]: I0113 20:44:52.407314 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3051114d-adf2-4fe6-9751-a799bcebbf88-lib-modules\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.408705 kubelet[2752]: I0113 20:44:52.407337 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3051114d-adf2-4fe6-9751-a799bcebbf88-host-proc-sys-net\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.409062 kubelet[2752]: I0113 20:44:52.407359 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3051114d-adf2-4fe6-9751-a799bcebbf88-xtables-lock\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.409062 kubelet[2752]: I0113 20:44:52.407379 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3051114d-adf2-4fe6-9751-a799bcebbf88-cni-path\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.409062 kubelet[2752]: I0113 20:44:52.407399 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3051114d-adf2-4fe6-9751-a799bcebbf88-host-proc-sys-kernel\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.409062 kubelet[2752]: I0113 20:44:52.407419 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/3051114d-adf2-4fe6-9751-a799bcebbf88-cilium-config-path\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.409062 kubelet[2752]: I0113 20:44:52.407458 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x7w8\" (UniqueName: \"kubernetes.io/projected/3051114d-adf2-4fe6-9751-a799bcebbf88-kube-api-access-5x7w8\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.409203 kubelet[2752]: I0113 20:44:52.407484 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3051114d-adf2-4fe6-9751-a799bcebbf88-cilium-cgroup\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.409203 kubelet[2752]: I0113 20:44:52.407504 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3051114d-adf2-4fe6-9751-a799bcebbf88-hubble-tls\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.409203 kubelet[2752]: I0113 20:44:52.407526 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3051114d-adf2-4fe6-9751-a799bcebbf88-etc-cni-netd\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.409203 kubelet[2752]: I0113 20:44:52.407551 2752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3051114d-adf2-4fe6-9751-a799bcebbf88-clustermesh-secrets\") pod \"cilium-5ncxm\" (UID: \"3051114d-adf2-4fe6-9751-a799bcebbf88\") " pod="kube-system/cilium-5ncxm" Jan 13 20:44:52.487088 sshd[4526]: Connection closed by 139.178.68.195 port 53542 Jan 13 20:44:52.488186 sshd-session[4524]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:52.496262 systemd[1]: sshd@24-10.244.100.150:22-139.178.68.195:53542.service: Deactivated successfully. Jan 13 20:44:52.498689 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 20:44:52.499999 systemd-logind[1496]: Session 27 logged out. Waiting for processes to exit. Jan 13 20:44:52.501563 systemd-logind[1496]: Removed session 27. Jan 13 20:44:52.645860 systemd[1]: Started sshd@25-10.244.100.150:22-139.178.68.195:53558.service - OpenSSH per-connection server daemon (139.178.68.195:53558). Jan 13 20:44:52.682626 containerd[1511]: time="2025-01-13T20:44:52.682539353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5ncxm,Uid:3051114d-adf2-4fe6-9751-a799bcebbf88,Namespace:kube-system,Attempt:0,}" Jan 13 20:44:52.709141 containerd[1511]: time="2025-01-13T20:44:52.708853501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:44:52.709141 containerd[1511]: time="2025-01-13T20:44:52.708933305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:44:52.709141 containerd[1511]: time="2025-01-13T20:44:52.708950448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:44:52.709141 containerd[1511]: time="2025-01-13T20:44:52.709044004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:44:52.735999 systemd[1]: Started cri-containerd-27c0c1bbd5d3f8db0eb819b51147d41d2508dacc8031158a06c65ac68d265f29.scope - libcontainer container 27c0c1bbd5d3f8db0eb819b51147d41d2508dacc8031158a06c65ac68d265f29. Jan 13 20:44:52.762850 containerd[1511]: time="2025-01-13T20:44:52.762806543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5ncxm,Uid:3051114d-adf2-4fe6-9751-a799bcebbf88,Namespace:kube-system,Attempt:0,} returns sandbox id \"27c0c1bbd5d3f8db0eb819b51147d41d2508dacc8031158a06c65ac68d265f29\"" Jan 13 20:44:52.768042 containerd[1511]: time="2025-01-13T20:44:52.767999040Z" level=info msg="CreateContainer within sandbox \"27c0c1bbd5d3f8db0eb819b51147d41d2508dacc8031158a06c65ac68d265f29\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:44:52.774678 containerd[1511]: time="2025-01-13T20:44:52.774634706Z" level=info msg="CreateContainer within sandbox \"27c0c1bbd5d3f8db0eb819b51147d41d2508dacc8031158a06c65ac68d265f29\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2f6f96ce7477a3f2e00d056581c21648c8a293015ab5447fa06235fdbaf87317\"" Jan 13 20:44:52.776118 containerd[1511]: time="2025-01-13T20:44:52.775324140Z" level=info msg="StartContainer for \"2f6f96ce7477a3f2e00d056581c21648c8a293015ab5447fa06235fdbaf87317\"" Jan 13 20:44:52.820186 systemd[1]: Started cri-containerd-2f6f96ce7477a3f2e00d056581c21648c8a293015ab5447fa06235fdbaf87317.scope - libcontainer container 2f6f96ce7477a3f2e00d056581c21648c8a293015ab5447fa06235fdbaf87317. Jan 13 20:44:52.852699 containerd[1511]: time="2025-01-13T20:44:52.852654883Z" level=info msg="StartContainer for \"2f6f96ce7477a3f2e00d056581c21648c8a293015ab5447fa06235fdbaf87317\" returns successfully" Jan 13 20:44:52.866370 systemd[1]: cri-containerd-2f6f96ce7477a3f2e00d056581c21648c8a293015ab5447fa06235fdbaf87317.scope: Deactivated successfully. Jan 13 20:44:52.898847 containerd[1511]: time="2025-01-13T20:44:52.898654090Z" level=info msg="shim disconnected" id=2f6f96ce7477a3f2e00d056581c21648c8a293015ab5447fa06235fdbaf87317 namespace=k8s.io Jan 13 20:44:52.898847 containerd[1511]: time="2025-01-13T20:44:52.898788732Z" level=warning msg="cleaning up after shim disconnected" id=2f6f96ce7477a3f2e00d056581c21648c8a293015ab5447fa06235fdbaf87317 namespace=k8s.io Jan 13 20:44:52.898847 containerd[1511]: time="2025-01-13T20:44:52.898805751Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:44:53.204525 kubelet[2752]: E0113 20:44:53.204364 2752 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:44:53.536523 sshd[4540]: Accepted publickey for core from 139.178.68.195 port 53558 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:44:53.538569 sshd-session[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:53.545053 systemd-logind[1496]: New session 28 of user core. 
Jan 13 20:44:53.549940 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 13 20:44:53.742180 containerd[1511]: time="2025-01-13T20:44:53.741993160Z" level=info msg="CreateContainer within sandbox \"27c0c1bbd5d3f8db0eb819b51147d41d2508dacc8031158a06c65ac68d265f29\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:44:53.750082 containerd[1511]: time="2025-01-13T20:44:53.748926534Z" level=info msg="CreateContainer within sandbox \"27c0c1bbd5d3f8db0eb819b51147d41d2508dacc8031158a06c65ac68d265f29\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"237a271c6cd8c77d91d7cfbdacaaae91e1ea8a7bde24151f04ce279c157e5697\"" Jan 13 20:44:53.750082 containerd[1511]: time="2025-01-13T20:44:53.749322801Z" level=info msg="StartContainer for \"237a271c6cd8c77d91d7cfbdacaaae91e1ea8a7bde24151f04ce279c157e5697\"" Jan 13 20:44:53.801530 systemd[1]: Started cri-containerd-237a271c6cd8c77d91d7cfbdacaaae91e1ea8a7bde24151f04ce279c157e5697.scope - libcontainer container 237a271c6cd8c77d91d7cfbdacaaae91e1ea8a7bde24151f04ce279c157e5697. Jan 13 20:44:53.835539 containerd[1511]: time="2025-01-13T20:44:53.835503085Z" level=info msg="StartContainer for \"237a271c6cd8c77d91d7cfbdacaaae91e1ea8a7bde24151f04ce279c157e5697\" returns successfully" Jan 13 20:44:53.845508 systemd[1]: cri-containerd-237a271c6cd8c77d91d7cfbdacaaae91e1ea8a7bde24151f04ce279c157e5697.scope: Deactivated successfully. Jan 13 20:44:53.879872 containerd[1511]: time="2025-01-13T20:44:53.879795471Z" level=info msg="shim disconnected" id=237a271c6cd8c77d91d7cfbdacaaae91e1ea8a7bde24151f04ce279c157e5697 namespace=k8s.io Jan 13 20:44:53.879872 containerd[1511]: time="2025-01-13T20:44:53.879853393Z" level=warning msg="cleaning up after shim disconnected" id=237a271c6cd8c77d91d7cfbdacaaae91e1ea8a7bde24151f04ce279c157e5697 namespace=k8s.io Jan 13 20:44:53.879872 containerd[1511]: time="2025-01-13T20:44:53.879863137Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:44:54.159928 sshd[4646]: Connection closed by 139.178.68.195 port 53558 Jan 13 20:44:54.161562 sshd-session[4540]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:54.170372 systemd[1]: sshd@25-10.244.100.150:22-139.178.68.195:53558.service: Deactivated successfully. Jan 13 20:44:54.176647 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 20:44:54.178220 systemd-logind[1496]: Session 28 logged out. Waiting for processes to exit. Jan 13 20:44:54.179784 systemd-logind[1496]: Removed session 28. Jan 13 20:44:54.315936 systemd[1]: Started sshd@26-10.244.100.150:22-139.178.68.195:53562.service - OpenSSH per-connection server daemon (139.178.68.195:53562). Jan 13 20:44:54.523902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-237a271c6cd8c77d91d7cfbdacaaae91e1ea8a7bde24151f04ce279c157e5697-rootfs.mount: Deactivated successfully. 
Jan 13 20:44:54.749343 containerd[1511]: time="2025-01-13T20:44:54.749205650Z" level=info msg="CreateContainer within sandbox \"27c0c1bbd5d3f8db0eb819b51147d41d2508dacc8031158a06c65ac68d265f29\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:44:54.777630 containerd[1511]: time="2025-01-13T20:44:54.777285651Z" level=info msg="CreateContainer within sandbox \"27c0c1bbd5d3f8db0eb819b51147d41d2508dacc8031158a06c65ac68d265f29\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9a2025fa86efe0e643be1c33662fee4ec43aa0003b821c2661ff511eaa08612a\"" Jan 13 20:44:54.778295 containerd[1511]: time="2025-01-13T20:44:54.777988085Z" level=info msg="StartContainer for \"9a2025fa86efe0e643be1c33662fee4ec43aa0003b821c2661ff511eaa08612a\"" Jan 13 20:44:54.815937 systemd[1]: Started cri-containerd-9a2025fa86efe0e643be1c33662fee4ec43aa0003b821c2661ff511eaa08612a.scope - libcontainer container 9a2025fa86efe0e643be1c33662fee4ec43aa0003b821c2661ff511eaa08612a. Jan 13 20:44:54.848150 containerd[1511]: time="2025-01-13T20:44:54.847981554Z" level=info msg="StartContainer for \"9a2025fa86efe0e643be1c33662fee4ec43aa0003b821c2661ff511eaa08612a\" returns successfully" Jan 13 20:44:54.857796 systemd[1]: cri-containerd-9a2025fa86efe0e643be1c33662fee4ec43aa0003b821c2661ff511eaa08612a.scope: Deactivated successfully. Jan 13 20:44:54.885411 containerd[1511]: time="2025-01-13T20:44:54.885066000Z" level=info msg="shim disconnected" id=9a2025fa86efe0e643be1c33662fee4ec43aa0003b821c2661ff511eaa08612a namespace=k8s.io Jan 13 20:44:54.885411 containerd[1511]: time="2025-01-13T20:44:54.885197702Z" level=warning msg="cleaning up after shim disconnected" id=9a2025fa86efe0e643be1c33662fee4ec43aa0003b821c2661ff511eaa08612a namespace=k8s.io Jan 13 20:44:54.885411 containerd[1511]: time="2025-01-13T20:44:54.885206906Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:44:55.228024 sshd[4714]: Accepted publickey for core from 139.178.68.195 port 53562 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:44:55.230542 sshd-session[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:55.238429 systemd-logind[1496]: New session 29 of user core. Jan 13 20:44:55.250274 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 13 20:44:55.523581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a2025fa86efe0e643be1c33662fee4ec43aa0003b821c2661ff511eaa08612a-rootfs.mount: Deactivated successfully. Jan 13 20:44:55.758435 containerd[1511]: time="2025-01-13T20:44:55.758357285Z" level=info msg="CreateContainer within sandbox \"27c0c1bbd5d3f8db0eb819b51147d41d2508dacc8031158a06c65ac68d265f29\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:44:55.771274 containerd[1511]: time="2025-01-13T20:44:55.771234752Z" level=info msg="CreateContainer within sandbox \"27c0c1bbd5d3f8db0eb819b51147d41d2508dacc8031158a06c65ac68d265f29\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4dbf7cc5ec9292362c63b1f5fd05a7934f47fe88459b4cb0243684b4ca9f9244\"" Jan 13 20:44:55.772972 containerd[1511]: time="2025-01-13T20:44:55.772946203Z" level=info msg="StartContainer for \"4dbf7cc5ec9292362c63b1f5fd05a7934f47fe88459b4cb0243684b4ca9f9244\"" Jan 13 20:44:55.820101 systemd[1]: Started cri-containerd-4dbf7cc5ec9292362c63b1f5fd05a7934f47fe88459b4cb0243684b4ca9f9244.scope - libcontainer container 4dbf7cc5ec9292362c63b1f5fd05a7934f47fe88459b4cb0243684b4ca9f9244. 
Jan 13 20:44:55.852052 systemd[1]: cri-containerd-4dbf7cc5ec9292362c63b1f5fd05a7934f47fe88459b4cb0243684b4ca9f9244.scope: Deactivated successfully. Jan 13 20:44:55.853022 containerd[1511]: time="2025-01-13T20:44:55.852512405Z" level=info msg="StartContainer for \"4dbf7cc5ec9292362c63b1f5fd05a7934f47fe88459b4cb0243684b4ca9f9244\" returns successfully" Jan 13 20:44:55.874708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dbf7cc5ec9292362c63b1f5fd05a7934f47fe88459b4cb0243684b4ca9f9244-rootfs.mount: Deactivated successfully. Jan 13 20:44:55.877381 containerd[1511]: time="2025-01-13T20:44:55.877311520Z" level=info msg="shim disconnected" id=4dbf7cc5ec9292362c63b1f5fd05a7934f47fe88459b4cb0243684b4ca9f9244 namespace=k8s.io Jan 13 20:44:55.877524 containerd[1511]: time="2025-01-13T20:44:55.877381960Z" level=warning msg="cleaning up after shim disconnected" id=4dbf7cc5ec9292362c63b1f5fd05a7934f47fe88459b4cb0243684b4ca9f9244 namespace=k8s.io Jan 13 20:44:55.877524 containerd[1511]: time="2025-01-13T20:44:55.877391924Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:44:56.767609 containerd[1511]: time="2025-01-13T20:44:56.767535712Z" level=info msg="CreateContainer within sandbox \"27c0c1bbd5d3f8db0eb819b51147d41d2508dacc8031158a06c65ac68d265f29\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:44:56.783663 containerd[1511]: time="2025-01-13T20:44:56.783540238Z" level=info msg="CreateContainer within sandbox \"27c0c1bbd5d3f8db0eb819b51147d41d2508dacc8031158a06c65ac68d265f29\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"116b881abf29bd53446c5faa4c9a118728d30443e49a4090096bfb7e271b4840\"" Jan 13 20:44:56.786376 containerd[1511]: time="2025-01-13T20:44:56.785197597Z" level=info msg="StartContainer for \"116b881abf29bd53446c5faa4c9a118728d30443e49a4090096bfb7e271b4840\"" Jan 13 20:44:56.786660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1423204138.mount: Deactivated successfully. Jan 13 20:44:56.841923 systemd[1]: Started cri-containerd-116b881abf29bd53446c5faa4c9a118728d30443e49a4090096bfb7e271b4840.scope - libcontainer container 116b881abf29bd53446c5faa4c9a118728d30443e49a4090096bfb7e271b4840. 
Jan 13 20:44:56.884831 containerd[1511]: time="2025-01-13T20:44:56.884732247Z" level=info msg="StartContainer for \"116b881abf29bd53446c5faa4c9a118728d30443e49a4090096bfb7e271b4840\" returns successfully" Jan 13 20:44:57.218800 kubelet[2752]: I0113 20:44:57.216879 2752 setters.go:568] "Node became not ready" node="srv-rxqun.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:44:57Z","lastTransitionTime":"2025-01-13T20:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 20:44:57.398866 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 13 20:44:57.796267 kubelet[2752]: I0113 20:44:57.795860 2752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5ncxm" podStartSLOduration=5.795801988 podStartE2EDuration="5.795801988s" podCreationTimestamp="2025-01-13 20:44:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:44:57.794626304 +0000 UTC m=+174.960396143" watchObservedRunningTime="2025-01-13 20:44:57.795801988 +0000 UTC m=+174.961571867" Jan 13 20:45:00.313662 systemd[1]: run-containerd-runc-k8s.io-116b881abf29bd53446c5faa4c9a118728d30443e49a4090096bfb7e271b4840-runc.CKomJY.mount: Deactivated successfully. Jan 13 20:45:00.546139 systemd-networkd[1443]: lxc_health: Link UP Jan 13 20:45:00.555845 systemd-networkd[1443]: lxc_health: Gained carrier Jan 13 20:45:01.809919 systemd-networkd[1443]: lxc_health: Gained IPv6LL Jan 13 20:45:03.034426 containerd[1511]: time="2025-01-13T20:45:03.034254403Z" level=info msg="StopPodSandbox for \"7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd\"" Jan 13 20:45:03.035739 containerd[1511]: time="2025-01-13T20:45:03.035037600Z" level=info msg="TearDown network for sandbox \"7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd\" successfully" Jan 13 20:45:03.035739 containerd[1511]: time="2025-01-13T20:45:03.035060550Z" level=info msg="StopPodSandbox for \"7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd\" returns successfully" Jan 13 20:45:03.035739 containerd[1511]: time="2025-01-13T20:45:03.035547614Z" level=info msg="RemovePodSandbox for \"7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd\"" Jan 13 20:45:03.035739 containerd[1511]: time="2025-01-13T20:45:03.035597344Z" level=info msg="Forcibly stopping sandbox \"7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd\"" Jan 13 20:45:03.036093 containerd[1511]: time="2025-01-13T20:45:03.035814364Z" level=info msg="TearDown network for sandbox \"7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd\" successfully" Jan 13 20:45:03.041498 containerd[1511]: time="2025-01-13T20:45:03.040791212Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:45:03.041498 containerd[1511]: time="2025-01-13T20:45:03.040883787Z" level=info msg="RemovePodSandbox \"7adee568ea1b2b8661cd96feae87a338626091cc6fcd96c290585093dac66abd\" returns successfully" Jan 13 20:45:03.041498 containerd[1511]: time="2025-01-13T20:45:03.041358831Z" level=info msg="StopPodSandbox for \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\"" Jan 13 20:45:03.041498 containerd[1511]: time="2025-01-13T20:45:03.041441320Z" level=info msg="TearDown network for sandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" successfully" Jan 13 20:45:03.041498 containerd[1511]: time="2025-01-13T20:45:03.041454351Z" level=info msg="StopPodSandbox for \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" returns successfully" Jan 13 20:45:03.042495 containerd[1511]: time="2025-01-13T20:45:03.042261830Z" level=info msg="RemovePodSandbox for \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\"" Jan 13 20:45:03.042495 containerd[1511]: time="2025-01-13T20:45:03.042346980Z" level=info msg="Forcibly stopping sandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\"" Jan 13 20:45:03.042495 containerd[1511]: time="2025-01-13T20:45:03.042408579Z" level=info msg="TearDown network for sandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" successfully" Jan 13 20:45:03.044713 containerd[1511]: time="2025-01-13T20:45:03.044653357Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:45:03.045054 containerd[1511]: time="2025-01-13T20:45:03.044881809Z" level=info msg="RemovePodSandbox \"f58fd3592ee2c551b4c6a1205cfc8f91793a96f2e0b717dab1cff32f7f835423\" returns successfully" Jan 13 20:45:07.200008 sshd[4773]: Connection closed by 139.178.68.195 port 53562 Jan 13 20:45:07.201856 sshd-session[4714]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:07.208972 systemd[1]: sshd@26-10.244.100.150:22-139.178.68.195:53562.service: Deactivated successfully. Jan 13 20:45:07.211975 systemd[1]: session-29.scope: Deactivated successfully. Jan 13 20:45:07.213602 systemd-logind[1496]: Session 29 logged out. Waiting for processes to exit. Jan 13 20:45:07.215401 systemd-logind[1496]: Removed session 29.