Feb 14 00:20:31.062882 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 14 00:20:31.062917 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 14 00:20:31.062932 kernel: BIOS-provided physical RAM map:
Feb 14 00:20:31.062949 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 14 00:20:31.062958 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 14 00:20:31.062968 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 14 00:20:31.062980 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Feb 14 00:20:31.062990 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Feb 14 00:20:31.063000 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 14 00:20:31.063010 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 14 00:20:31.063020 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 14 00:20:31.063030 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 14 00:20:31.063045 kernel: NX (Execute Disable) protection: active
Feb 14 00:20:31.063056 kernel: APIC: Static calls initialized
Feb 14 00:20:31.063068 kernel: SMBIOS 2.8 present.
Feb 14 00:20:31.063079 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Feb 14 00:20:31.063091 kernel: Hypervisor detected: KVM
Feb 14 00:20:31.063106 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 14 00:20:31.063118 kernel: kvm-clock: using sched offset of 4341988708 cycles
Feb 14 00:20:31.063130 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 14 00:20:31.063141 kernel: tsc: Detected 2499.998 MHz processor
Feb 14 00:20:31.063153 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 14 00:20:31.063164 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 14 00:20:31.063175 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Feb 14 00:20:31.063186 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 14 00:20:31.063264 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 14 00:20:31.063286 kernel: Using GB pages for direct mapping
Feb 14 00:20:31.063337 kernel: ACPI: Early table checksum verification disabled
Feb 14 00:20:31.063373 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 14 00:20:31.063385 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 00:20:31.063396 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 00:20:31.063407 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 00:20:31.063418 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Feb 14 00:20:31.063430 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 00:20:31.063441 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 00:20:31.063459 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 00:20:31.063471 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 00:20:31.063503 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Feb 14 00:20:31.063521 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Feb 14 00:20:31.063532 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Feb 14 00:20:31.063552 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Feb 14 00:20:31.063564 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Feb 14 00:20:31.063607 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Feb 14 00:20:31.063620 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Feb 14 00:20:31.063631 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 14 00:20:31.063643 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 14 00:20:31.063655 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 14 00:20:31.063666 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 14 00:20:31.063703 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 14 00:20:31.063750 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Feb 14 00:20:31.063763 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 14 00:20:31.063827 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Feb 14 00:20:31.063839 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 14 00:20:31.063851 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Feb 14 00:20:31.063863 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 14 00:20:31.063874 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Feb 14 00:20:31.063886 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 14 00:20:31.063897 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Feb 14 00:20:31.063909 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 14 00:20:31.063928 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Feb 14 00:20:31.063940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 14 00:20:31.063952 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 14 00:20:31.063987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Feb 14 00:20:31.064000 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Feb 14 00:20:31.064012 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Feb 14 00:20:31.064024 kernel: Zone ranges:
Feb 14 00:20:31.064059 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 14 00:20:31.064072 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Feb 14 00:20:31.064090 kernel: Normal empty
Feb 14 00:20:31.064102 kernel: Movable zone start for each node
Feb 14 00:20:31.064114 kernel: Early memory node ranges
Feb 14 00:20:31.064125 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 14 00:20:31.064137 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Feb 14 00:20:31.064173 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Feb 14 00:20:31.064185 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 14 00:20:31.064197 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 14 00:20:31.064208 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Feb 14 00:20:31.064220 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 14 00:20:31.064265 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 14 00:20:31.064277 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 14 00:20:31.064289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 14 00:20:31.064301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 14 00:20:31.064313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 14 00:20:31.064324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 14 00:20:31.064336 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 14 00:20:31.066556 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 14 00:20:31.066572 kernel: TSC deadline timer available
Feb 14 00:20:31.066598 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Feb 14 00:20:31.066610 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 14 00:20:31.066622 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 14 00:20:31.066634 kernel: Booting paravirtualized kernel on KVM
Feb 14 00:20:31.066646 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 14 00:20:31.066658 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 14 00:20:31.066670 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 14 00:20:31.066693 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 14 00:20:31.066706 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 14 00:20:31.066723 kernel: kvm-guest: PV spinlocks enabled
Feb 14 00:20:31.066736 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 14 00:20:31.066749 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 14 00:20:31.066762 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 14 00:20:31.066773 kernel: random: crng init done
Feb 14 00:20:31.066785 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 14 00:20:31.066797 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 14 00:20:31.066809 kernel: Fallback order for Node 0: 0
Feb 14 00:20:31.066826 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Feb 14 00:20:31.066838 kernel: Policy zone: DMA32
Feb 14 00:20:31.066849 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 14 00:20:31.066861 kernel: software IO TLB: area num 16.
Feb 14 00:20:31.066874 kernel: Memory: 1901524K/2096616K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 194832K reserved, 0K cma-reserved)
Feb 14 00:20:31.066886 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 14 00:20:31.066898 kernel: Kernel/User page tables isolation: enabled
Feb 14 00:20:31.066909 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 14 00:20:31.066921 kernel: ftrace: allocated 149 pages with 4 groups
Feb 14 00:20:31.066938 kernel: Dynamic Preempt: voluntary
Feb 14 00:20:31.066950 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 14 00:20:31.066968 kernel: rcu: RCU event tracing is enabled.
Feb 14 00:20:31.066981 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 14 00:20:31.066993 kernel: Trampoline variant of Tasks RCU enabled.
Feb 14 00:20:31.067017 kernel: Rude variant of Tasks RCU enabled.
Feb 14 00:20:31.067034 kernel: Tracing variant of Tasks RCU enabled.
Feb 14 00:20:31.067047 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 14 00:20:31.067059 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 14 00:20:31.067071 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Feb 14 00:20:31.067084 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 14 00:20:31.067101 kernel: Console: colour VGA+ 80x25
Feb 14 00:20:31.067114 kernel: printk: console [tty0] enabled
Feb 14 00:20:31.067126 kernel: printk: console [ttyS0] enabled
Feb 14 00:20:31.067139 kernel: ACPI: Core revision 20230628
Feb 14 00:20:31.067151 kernel: APIC: Switch to symmetric I/O mode setup
Feb 14 00:20:31.067163 kernel: x2apic enabled
Feb 14 00:20:31.067181 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 14 00:20:31.067193 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 14 00:20:31.067206 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Feb 14 00:20:31.067218 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 14 00:20:31.067231 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 14 00:20:31.067244 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 14 00:20:31.067256 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 14 00:20:31.067268 kernel: Spectre V2 : Mitigation: Retpolines
Feb 14 00:20:31.067281 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 14 00:20:31.067298 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 14 00:20:31.067310 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 14 00:20:31.067323 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 14 00:20:31.067335 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 14 00:20:31.067369 kernel: MDS: Mitigation: Clear CPU buffers
Feb 14 00:20:31.067383 kernel: MMIO Stale Data: Unknown: No mitigations
Feb 14 00:20:31.067395 kernel: SRBDS: Unknown: Dependent on hypervisor status
Feb 14 00:20:31.067407 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 14 00:20:31.067420 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 14 00:20:31.067432 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 14 00:20:31.067445 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 14 00:20:31.067464 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 14 00:20:31.067476 kernel: Freeing SMP alternatives memory: 32K
Feb 14 00:20:31.067489 kernel: pid_max: default: 32768 minimum: 301
Feb 14 00:20:31.067501 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 14 00:20:31.067514 kernel: landlock: Up and running.
Feb 14 00:20:31.067526 kernel: SELinux: Initializing.
Feb 14 00:20:31.067538 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 14 00:20:31.067551 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 14 00:20:31.067563 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Feb 14 00:20:31.067576 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 14 00:20:31.067588 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 14 00:20:31.067606 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 14 00:20:31.067619 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Feb 14 00:20:31.067631 kernel: signal: max sigframe size: 1776
Feb 14 00:20:31.067644 kernel: rcu: Hierarchical SRCU implementation.
Feb 14 00:20:31.067657 kernel: rcu: Max phase no-delay instances is 400.
Feb 14 00:20:31.067669 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 14 00:20:31.067691 kernel: smp: Bringing up secondary CPUs ...
Feb 14 00:20:31.067704 kernel: smpboot: x86: Booting SMP configuration:
Feb 14 00:20:31.067717 kernel: .... node #0, CPUs: #1
Feb 14 00:20:31.067735 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Feb 14 00:20:31.067748 kernel: smp: Brought up 1 node, 2 CPUs
Feb 14 00:20:31.067760 kernel: smpboot: Max logical packages: 16
Feb 14 00:20:31.067772 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Feb 14 00:20:31.067785 kernel: devtmpfs: initialized
Feb 14 00:20:31.067797 kernel: x86/mm: Memory block size: 128MB
Feb 14 00:20:31.067810 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 14 00:20:31.067822 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 14 00:20:31.067834 kernel: pinctrl core: initialized pinctrl subsystem
Feb 14 00:20:31.067852 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 14 00:20:31.067864 kernel: audit: initializing netlink subsys (disabled)
Feb 14 00:20:31.067877 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 14 00:20:31.067889 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 14 00:20:31.067902 kernel: audit: type=2000 audit(1739492429.231:1): state=initialized audit_enabled=0 res=1
Feb 14 00:20:31.067914 kernel: cpuidle: using governor menu
Feb 14 00:20:31.067926 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 14 00:20:31.067938 kernel: dca service started, version 1.12.1
Feb 14 00:20:31.067951 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 14 00:20:31.067968 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 14 00:20:31.067981 kernel: PCI: Using configuration type 1 for base access
Feb 14 00:20:31.067993 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 14 00:20:31.068006 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 14 00:20:31.068018 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 14 00:20:31.068031 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 14 00:20:31.068043 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 14 00:20:31.068055 kernel: ACPI: Added _OSI(Module Device)
Feb 14 00:20:31.068068 kernel: ACPI: Added _OSI(Processor Device)
Feb 14 00:20:31.068085 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 14 00:20:31.068097 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 14 00:20:31.068110 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 14 00:20:31.068133 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 14 00:20:31.068146 kernel: ACPI: Interpreter enabled
Feb 14 00:20:31.068158 kernel: ACPI: PM: (supports S0 S5)
Feb 14 00:20:31.068170 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 14 00:20:31.068182 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 14 00:20:31.068195 kernel: PCI: Using E820 reservations for host bridge windows
Feb 14 00:20:31.068213 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 14 00:20:31.068226 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 14 00:20:31.070630 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 14 00:20:31.070836 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 14 00:20:31.071007 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 14 00:20:31.071028 kernel: PCI host bridge to bus 0000:00
Feb 14 00:20:31.071228 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 14 00:20:31.071428 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 14 00:20:31.071585 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 14 00:20:31.071760 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Feb 14 00:20:31.071923 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 14 00:20:31.072085 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Feb 14 00:20:31.072271 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 14 00:20:31.074716 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 14 00:20:31.074964 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Feb 14 00:20:31.075176 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Feb 14 00:20:31.075390 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Feb 14 00:20:31.075557 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Feb 14 00:20:31.075761 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 14 00:20:31.075966 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 14 00:20:31.076144 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Feb 14 00:20:31.076333 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 14 00:20:31.079669 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Feb 14 00:20:31.079871 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 14 00:20:31.080038 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Feb 14 00:20:31.080211 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 14 00:20:31.080420 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Feb 14 00:20:31.080620 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 14 00:20:31.080799 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Feb 14 00:20:31.080986 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 14 00:20:31.081152 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Feb 14 00:20:31.082379 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 14 00:20:31.082571 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Feb 14 00:20:31.082786 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 14 00:20:31.082957 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Feb 14 00:20:31.083158 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 14 00:20:31.083328 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 14 00:20:31.083535 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Feb 14 00:20:31.083715 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Feb 14 00:20:31.083904 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Feb 14 00:20:31.084110 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 14 00:20:31.084289 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 14 00:20:31.086508 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Feb 14 00:20:31.086695 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Feb 14 00:20:31.086901 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 14 00:20:31.087081 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 14 00:20:31.087287 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 14 00:20:31.087481 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Feb 14 00:20:31.087643 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Feb 14 00:20:31.087857 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 14 00:20:31.088020 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 14 00:20:31.088228 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Feb 14 00:20:31.088542 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Feb 14 00:20:31.088729 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 14 00:20:31.088892 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 14 00:20:31.089055 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 14 00:20:31.089252 kernel: pci_bus 0000:02: extended config space not accessible
Feb 14 00:20:31.089483 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Feb 14 00:20:31.089673 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Feb 14 00:20:31.089859 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 14 00:20:31.090041 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 14 00:20:31.090249 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 14 00:20:31.090441 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Feb 14 00:20:31.090609 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 14 00:20:31.090790 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 14 00:20:31.090965 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 14 00:20:31.091174 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 14 00:20:31.091376 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Feb 14 00:20:31.091558 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 14 00:20:31.091740 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 14 00:20:31.091916 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 14 00:20:31.092096 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 14 00:20:31.092275 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 14 00:20:31.092499 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 14 00:20:31.092665 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 14 00:20:31.092839 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 14 00:20:31.092998 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 14 00:20:31.093164 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 14 00:20:31.093329 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 14 00:20:31.093567 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 14 00:20:31.093745 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 14 00:20:31.093915 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 14 00:20:31.094074 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 14 00:20:31.094237 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 14 00:20:31.094413 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 14 00:20:31.094573 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 14 00:20:31.094593 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 14 00:20:31.094607 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 14 00:20:31.094620 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 14 00:20:31.094640 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 14 00:20:31.094653 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 14 00:20:31.094666 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 14 00:20:31.094688 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 14 00:20:31.094703 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 14 00:20:31.094716 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 14 00:20:31.094729 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 14 00:20:31.094741 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 14 00:20:31.094754 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 14 00:20:31.094773 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 14 00:20:31.094786 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 14 00:20:31.094798 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 14 00:20:31.094811 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 14 00:20:31.094823 kernel: iommu: Default domain type: Translated
Feb 14 00:20:31.094836 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 14 00:20:31.094848 kernel: PCI: Using ACPI for IRQ routing
Feb 14 00:20:31.094861 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 14 00:20:31.094873 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 14 00:20:31.094891 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Feb 14 00:20:31.095052 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 14 00:20:31.095227 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 14 00:20:31.095432 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 14 00:20:31.095453 kernel: vgaarb: loaded
Feb 14 00:20:31.095466 kernel: clocksource: Switched to clocksource kvm-clock
Feb 14 00:20:31.095479 kernel: VFS: Disk quotas dquot_6.6.0
Feb 14 00:20:31.095511 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 14 00:20:31.095533 kernel: pnp: PnP ACPI init
Feb 14 00:20:31.095742 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 14 00:20:31.095764 kernel: pnp: PnP ACPI: found 5 devices
Feb 14 00:20:31.095777 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 14 00:20:31.095790 kernel: NET: Registered PF_INET protocol family
Feb 14 00:20:31.095803 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 14 00:20:31.095816 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 14 00:20:31.095829 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 14 00:20:31.095842 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 14 00:20:31.095862 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 14 00:20:31.095875 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 14 00:20:31.095887 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 14 00:20:31.095900 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 14 00:20:31.095913 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 14 00:20:31.095926 kernel: NET: Registered PF_XDP protocol family
Feb 14 00:20:31.096083 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Feb 14 00:20:31.096244 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 14 00:20:31.096438 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 14 00:20:31.096599 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 14 00:20:31.096777 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 14 00:20:31.096940 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 14 00:20:31.097102 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 14 00:20:31.097264 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 14 00:20:31.097452 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 14 00:20:31.097616 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 14 00:20:31.097831 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 14 00:20:31.097996 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 14 00:20:31.098157 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 14 00:20:31.098318 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 14 00:20:31.098496 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 14 00:20:31.098668 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 14 00:20:31.098885 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 14 00:20:31.099061 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 14 00:20:31.099225 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 14 00:20:31.099404 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 14 00:20:31.099567 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 14 00:20:31.099799 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 14 00:20:31.099985 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 14 00:20:31.100151 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 14 00:20:31.100326 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 14 00:20:31.100516 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 14 00:20:31.100705 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 14 00:20:31.100884 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 14 00:20:31.101051 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 14 00:20:31.101235 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 14 00:20:31.101427 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 14 00:20:31.101592 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 14 00:20:31.101778 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 14 00:20:31.101955 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 14 00:20:31.102121 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 14 00:20:31.102289 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 14 00:20:31.102483 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 14 00:20:31.102661 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 14 00:20:31.102899 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 14 00:20:31.103077 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 14 00:20:31.103244 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 14 00:20:31.103563 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 14 00:20:31.103750 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 14 00:20:31.103913 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 14 00:20:31.104083 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 14 00:20:31.104244 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 14 00:20:31.104422 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 14 00:20:31.104584 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 14 00:20:31.104764 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 14 00:20:31.104924 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 14 00:20:31.105079 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 14 00:20:31.105230 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 14 00:20:31.105420 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 14 00:20:31.105567 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Feb 14 00:20:31.105728 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 14 00:20:31.105890 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Feb 14 00:20:31.106085 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 14 00:20:31.106253 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Feb 14 00:20:31.106455 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 14 00:20:31.106632 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Feb 14 00:20:31.106822 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Feb 14 00:20:31.106979 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Feb 14 00:20:31.107132 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 14 00:20:31.107315 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Feb 14 00:20:31.107586 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Feb 14 00:20:31.107793 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 14 00:20:31.107968 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Feb 14 00:20:31.108122 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Feb 14 00:20:31.108273 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 14 00:20:31.108572 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Feb 14 00:20:31.108752 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Feb 14 00:20:31.108936 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 14 00:20:31.109118 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Feb 14 00:20:31.109315 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Feb 14 00:20:31.109496 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 14 00:20:31.109672 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Feb 14 00:20:31.109844 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Feb 14 00:20:31.109997 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 14 00:20:31.110213 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Feb 14 00:20:31.110389 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Feb 14 00:20:31.110587 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 14 00:20:31.110609 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 14 00:20:31.110623 kernel: PCI: CLS 0 bytes, default 64
Feb 14 00:20:31.110637 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 
14 00:20:31.110650 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Feb 14 00:20:31.110664 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 14 00:20:31.110678 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Feb 14 00:20:31.110704 kernel: Initialise system trusted keyrings Feb 14 00:20:31.110725 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 14 00:20:31.110739 kernel: Key type asymmetric registered Feb 14 00:20:31.110752 kernel: Asymmetric key parser 'x509' registered Feb 14 00:20:31.110765 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 14 00:20:31.110778 kernel: io scheduler mq-deadline registered Feb 14 00:20:31.110791 kernel: io scheduler kyber registered Feb 14 00:20:31.110804 kernel: io scheduler bfq registered Feb 14 00:20:31.110970 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Feb 14 00:20:31.111138 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Feb 14 00:20:31.111311 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 00:20:31.111522 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Feb 14 00:20:31.111696 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Feb 14 00:20:31.111859 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 00:20:31.112022 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Feb 14 00:20:31.112185 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Feb 14 00:20:31.112422 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 00:20:31.112590 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Feb 14 
00:20:31.112766 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Feb 14 00:20:31.112927 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 00:20:31.113092 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Feb 14 00:20:31.113253 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Feb 14 00:20:31.113441 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 00:20:31.113602 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Feb 14 00:20:31.113809 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Feb 14 00:20:31.113971 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 00:20:31.114135 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Feb 14 00:20:31.114297 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Feb 14 00:20:31.114509 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 00:20:31.114672 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Feb 14 00:20:31.114847 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Feb 14 00:20:31.115008 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 14 00:20:31.115030 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 14 00:20:31.115044 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 14 00:20:31.115066 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 14 00:20:31.115079 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 14 00:20:31.115098 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 14 00:20:31.115112 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 14 00:20:31.115125 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 14 00:20:31.115139 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 14 00:20:31.115152 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 14 00:20:31.115328 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 14 00:20:31.115511 kernel: rtc_cmos 00:03: registered as rtc0 Feb 14 00:20:31.115666 kernel: rtc_cmos 00:03: setting system clock to 2025-02-14T00:20:30 UTC (1739492430) Feb 14 00:20:31.115832 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 14 00:20:31.115852 kernel: intel_pstate: CPU model not supported Feb 14 00:20:31.115866 kernel: NET: Registered PF_INET6 protocol family Feb 14 00:20:31.115879 kernel: Segment Routing with IPv6 Feb 14 00:20:31.115892 kernel: In-situ OAM (IOAM) with IPv6 Feb 14 00:20:31.115906 kernel: NET: Registered PF_PACKET protocol family Feb 14 00:20:31.115920 kernel: Key type dns_resolver registered Feb 14 00:20:31.115940 kernel: IPI shorthand broadcast: enabled Feb 14 00:20:31.115954 kernel: sched_clock: Marking stable (1319004201, 245405335)->(1692781352, -128371816) Feb 14 00:20:31.115967 kernel: registered taskstats version 1 Feb 14 00:20:31.115980 kernel: Loading compiled-in X.509 certificates Feb 14 00:20:31.115994 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 14 00:20:31.116007 kernel: Key type .fscrypt registered Feb 14 00:20:31.116020 kernel: Key type fscrypt-provisioning registered Feb 14 00:20:31.116034 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 14 00:20:31.116052 kernel: ima: Allocated hash algorithm: sha1 Feb 14 00:20:31.116065 kernel: ima: No architecture policies found Feb 14 00:20:31.116078 kernel: clk: Disabling unused clocks Feb 14 00:20:31.116092 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 14 00:20:31.116105 kernel: Write protecting the kernel read-only data: 36864k Feb 14 00:20:31.116119 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 14 00:20:31.116132 kernel: Run /init as init process Feb 14 00:20:31.116145 kernel: with arguments: Feb 14 00:20:31.116158 kernel: /init Feb 14 00:20:31.116171 kernel: with environment: Feb 14 00:20:31.116190 kernel: HOME=/ Feb 14 00:20:31.116203 kernel: TERM=linux Feb 14 00:20:31.116216 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 14 00:20:31.116232 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 14 00:20:31.116249 systemd[1]: Detected virtualization kvm. Feb 14 00:20:31.116263 systemd[1]: Detected architecture x86-64. Feb 14 00:20:31.116277 systemd[1]: Running in initrd. Feb 14 00:20:31.116296 systemd[1]: No hostname configured, using default hostname. Feb 14 00:20:31.116310 systemd[1]: Hostname set to . Feb 14 00:20:31.116324 systemd[1]: Initializing machine ID from VM UUID. Feb 14 00:20:31.116338 systemd[1]: Queued start job for default target initrd.target. Feb 14 00:20:31.116369 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 14 00:20:31.116383 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 14 00:20:31.116398 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 14 00:20:31.116412 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 14 00:20:31.116433 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 14 00:20:31.116448 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 14 00:20:31.116464 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 14 00:20:31.116479 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 14 00:20:31.116493 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 14 00:20:31.116508 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 14 00:20:31.116522 systemd[1]: Reached target paths.target - Path Units. Feb 14 00:20:31.116541 systemd[1]: Reached target slices.target - Slice Units. Feb 14 00:20:31.116556 systemd[1]: Reached target swap.target - Swaps. Feb 14 00:20:31.116570 systemd[1]: Reached target timers.target - Timer Units. Feb 14 00:20:31.116584 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 14 00:20:31.116598 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 14 00:20:31.116613 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 14 00:20:31.116627 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 14 00:20:31.116641 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 14 00:20:31.116655 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 14 00:20:31.116675 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 14 00:20:31.116701 systemd[1]: Reached target sockets.target - Socket Units. Feb 14 00:20:31.116716 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 14 00:20:31.116736 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 14 00:20:31.116751 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 14 00:20:31.116765 systemd[1]: Starting systemd-fsck-usr.service... Feb 14 00:20:31.116779 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 14 00:20:31.116793 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 14 00:20:31.116812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 00:20:31.116827 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 14 00:20:31.116885 systemd-journald[201]: Collecting audit messages is disabled. Feb 14 00:20:31.116918 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 00:20:31.116939 systemd[1]: Finished systemd-fsck-usr.service. Feb 14 00:20:31.116954 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 14 00:20:31.116970 systemd-journald[201]: Journal started Feb 14 00:20:31.117001 systemd-journald[201]: Runtime Journal (/run/log/journal/eb0c818f0d734909ab8ac19dc1b5c3f6) is 4.7M, max 38.0M, 33.2M free. Feb 14 00:20:31.067424 systemd-modules-load[202]: Inserted module 'overlay' Feb 14 00:20:31.173835 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 14 00:20:31.173871 kernel: Bridge firewalling registered Feb 14 00:20:31.173891 systemd[1]: Started systemd-journald.service - Journal Service. Feb 14 00:20:31.126151 systemd-modules-load[202]: Inserted module 'br_netfilter' Feb 14 00:20:31.178239 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Feb 14 00:20:31.180446 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 00:20:31.185075 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 14 00:20:31.199534 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 00:20:31.201543 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 14 00:20:31.212996 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 14 00:20:31.214671 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 14 00:20:31.229374 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 00:20:31.232386 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 14 00:20:31.244581 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 14 00:20:31.246128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 00:20:31.250493 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 14 00:20:31.260548 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 14 00:20:31.266927 dracut-cmdline[233]: dracut-dracut-053 Feb 14 00:20:31.275612 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 14 00:20:31.304063 systemd-resolved[239]: Positive Trust Anchors: Feb 14 00:20:31.304084 systemd-resolved[239]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 14 00:20:31.304127 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 14 00:20:31.314233 systemd-resolved[239]: Defaulting to hostname 'linux'. Feb 14 00:20:31.315988 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 14 00:20:31.317450 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 14 00:20:31.396423 kernel: SCSI subsystem initialized Feb 14 00:20:31.408393 kernel: Loading iSCSI transport class v2.0-870. Feb 14 00:20:31.422386 kernel: iscsi: registered transport (tcp) Feb 14 00:20:31.450553 kernel: iscsi: registered transport (qla4xxx) Feb 14 00:20:31.450660 kernel: QLogic iSCSI HBA Driver Feb 14 00:20:31.506724 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 14 00:20:31.513613 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 14 00:20:31.548003 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 14 00:20:31.548083 kernel: device-mapper: uevent: version 1.0.3 Feb 14 00:20:31.548933 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 14 00:20:31.599390 kernel: raid6: sse2x4 gen() 13804 MB/s Feb 14 00:20:31.617399 kernel: raid6: sse2x2 gen() 9489 MB/s Feb 14 00:20:31.636023 kernel: raid6: sse2x1 gen() 9922 MB/s Feb 14 00:20:31.636080 kernel: raid6: using algorithm sse2x4 gen() 13804 MB/s Feb 14 00:20:31.655080 kernel: raid6: .... xor() 7568 MB/s, rmw enabled Feb 14 00:20:31.655160 kernel: raid6: using ssse3x2 recovery algorithm Feb 14 00:20:31.681390 kernel: xor: automatically using best checksumming function avx Feb 14 00:20:31.880422 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 14 00:20:31.894962 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 14 00:20:31.904547 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 14 00:20:31.923319 systemd-udevd[419]: Using default interface naming scheme 'v255'. Feb 14 00:20:31.931013 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 00:20:31.938770 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 14 00:20:31.965267 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Feb 14 00:20:32.005500 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 14 00:20:32.012549 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 14 00:20:32.126443 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 14 00:20:32.135778 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 14 00:20:32.172119 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 14 00:20:32.174383 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 14 00:20:32.176674 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 14 00:20:32.179301 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 14 00:20:32.188566 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 14 00:20:32.220279 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 14 00:20:32.257075 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Feb 14 00:20:32.363213 kernel: cryptd: max_cpu_qlen set to 1000 Feb 14 00:20:32.363251 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 14 00:20:32.363495 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 14 00:20:32.363529 kernel: GPT:17805311 != 125829119 Feb 14 00:20:32.363548 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 14 00:20:32.363565 kernel: GPT:17805311 != 125829119 Feb 14 00:20:32.363583 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 14 00:20:32.363600 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 14 00:20:32.363618 kernel: AVX version of gcm_enc/dec engaged. Feb 14 00:20:32.363635 kernel: AES CTR mode by8 optimization enabled Feb 14 00:20:32.363653 kernel: libata version 3.00 loaded. 
Feb 14 00:20:32.363686 kernel: ahci 0000:00:1f.2: version 3.0 Feb 14 00:20:32.379633 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 14 00:20:32.379681 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 14 00:20:32.379895 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 14 00:20:32.380090 kernel: scsi host0: ahci Feb 14 00:20:32.380291 kernel: scsi host1: ahci Feb 14 00:20:32.380512 kernel: ACPI: bus type USB registered Feb 14 00:20:32.380533 kernel: scsi host2: ahci Feb 14 00:20:32.380747 kernel: usbcore: registered new interface driver usbfs Feb 14 00:20:32.380769 kernel: usbcore: registered new interface driver hub Feb 14 00:20:32.380787 kernel: usbcore: registered new device driver usb Feb 14 00:20:32.380805 kernel: scsi host3: ahci Feb 14 00:20:32.380996 kernel: scsi host4: ahci Feb 14 00:20:32.381196 kernel: scsi host5: ahci Feb 14 00:20:32.381556 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Feb 14 00:20:32.381598 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Feb 14 00:20:32.381617 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Feb 14 00:20:32.381644 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Feb 14 00:20:32.381673 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Feb 14 00:20:32.381693 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Feb 14 00:20:32.296714 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 14 00:20:32.483932 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (466) Feb 14 00:20:32.484012 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (461) Feb 14 00:20:32.296900 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 14 00:20:32.297911 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 00:20:32.298686 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 14 00:20:32.298848 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 00:20:32.299630 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 00:20:32.323493 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 00:20:32.447664 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 14 00:20:32.485184 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 00:20:32.492865 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 14 00:20:32.500866 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 14 00:20:32.507111 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 14 00:20:32.508077 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 14 00:20:32.521617 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 14 00:20:32.525532 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 00:20:32.539387 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 14 00:20:32.542366 disk-uuid[554]: Primary Header is updated. Feb 14 00:20:32.542366 disk-uuid[554]: Secondary Entries is updated. Feb 14 00:20:32.542366 disk-uuid[554]: Secondary Header is updated. Feb 14 00:20:32.575376 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 14 00:20:32.688382 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 14 00:20:32.688451 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 14 00:20:32.690386 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 14 00:20:32.697363 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 14 00:20:32.697399 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 14 00:20:32.700287 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 14 00:20:32.717370 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 14 00:20:32.749323 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Feb 14 00:20:32.749593 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 14 00:20:32.749816 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 14 00:20:32.750030 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Feb 14 00:20:32.750229 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Feb 14 00:20:32.750444 kernel: hub 1-0:1.0: USB hub found Feb 14 00:20:32.750671 kernel: hub 1-0:1.0: 4 ports detected Feb 14 00:20:32.750870 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Feb 14 00:20:32.751110 kernel: hub 2-0:1.0: USB hub found Feb 14 00:20:32.751314 kernel: hub 2-0:1.0: 4 ports detected Feb 14 00:20:32.985379 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 14 00:20:33.126377 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 14 00:20:33.133069 kernel: usbcore: registered new interface driver usbhid Feb 14 00:20:33.133127 kernel: usbhid: USB HID core driver Feb 14 00:20:33.140817 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Feb 14 00:20:33.140873 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Feb 14 00:20:33.561379 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 14 00:20:33.563188 disk-uuid[555]: The operation has completed successfully. Feb 14 00:20:33.613799 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 14 00:20:33.613976 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 14 00:20:33.636535 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 14 00:20:33.652794 sh[582]: Success Feb 14 00:20:33.671503 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Feb 14 00:20:33.742772 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 14 00:20:33.746504 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 14 00:20:33.747534 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 14 00:20:33.783438 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 14 00:20:33.783570 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 14 00:20:33.783603 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 14 00:20:33.785040 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 14 00:20:33.787453 kernel: BTRFS info (device dm-0): using free space tree Feb 14 00:20:33.798573 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 14 00:20:33.800242 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 14 00:20:33.805652 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 14 00:20:33.810618 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 14 00:20:33.826830 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 14 00:20:33.826912 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 14 00:20:33.826934 kernel: BTRFS info (device vda6): using free space tree Feb 14 00:20:33.838375 kernel: BTRFS info (device vda6): auto enabling async discard Feb 14 00:20:33.854869 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 14 00:20:33.854488 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 14 00:20:33.864393 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 14 00:20:33.872550 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 14 00:20:33.972815 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 14 00:20:33.985007 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 14 00:20:34.034077 ignition[681]: Ignition 2.19.0 Feb 14 00:20:34.034104 ignition[681]: Stage: fetch-offline Feb 14 00:20:34.034182 ignition[681]: no configs at "/usr/lib/ignition/base.d" Feb 14 00:20:34.034201 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 14 00:20:34.039159 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 14 00:20:34.035435 ignition[681]: parsed url from cmdline: "" Feb 14 00:20:34.042068 systemd-networkd[766]: lo: Link UP Feb 14 00:20:34.035442 ignition[681]: no config URL provided Feb 14 00:20:34.042074 systemd-networkd[766]: lo: Gained carrier Feb 14 00:20:34.035453 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Feb 14 00:20:34.044756 systemd-networkd[766]: Enumeration completed Feb 14 00:20:34.035470 ignition[681]: no config at "/usr/lib/ignition/user.ign" Feb 14 00:20:34.045009 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 14 00:20:34.035479 ignition[681]: failed to fetch config: resource requires networking Feb 14 00:20:34.045274 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 00:20:34.035778 ignition[681]: Ignition finished successfully Feb 14 00:20:34.045280 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 14 00:20:34.046869 systemd[1]: Reached target network.target - Network. Feb 14 00:20:34.047259 systemd-networkd[766]: eth0: Link UP Feb 14 00:20:34.047265 systemd-networkd[766]: eth0: Gained carrier Feb 14 00:20:34.047276 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 00:20:34.055860 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 14 00:20:34.077578 ignition[774]: Ignition 2.19.0
Feb 14 00:20:34.077635 ignition[774]: Stage: fetch
Feb 14 00:20:34.077943 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Feb 14 00:20:34.078008 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 14 00:20:34.078198 ignition[774]: parsed url from cmdline: ""
Feb 14 00:20:34.078205 ignition[774]: no config URL provided
Feb 14 00:20:34.078215 ignition[774]: reading system config file "/usr/lib/ignition/user.ign"
Feb 14 00:20:34.078278 ignition[774]: no config at "/usr/lib/ignition/user.ign"
Feb 14 00:20:34.078433 ignition[774]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Feb 14 00:20:34.078485 ignition[774]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Feb 14 00:20:34.078677 ignition[774]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Feb 14 00:20:34.079097 ignition[774]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Feb 14 00:20:34.105436 systemd-networkd[766]: eth0: DHCPv4 address 10.230.16.158/30, gateway 10.230.16.157 acquired from 10.230.16.157
Feb 14 00:20:34.279214 ignition[774]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Feb 14 00:20:34.292807 ignition[774]: GET result: OK
Feb 14 00:20:34.292944 ignition[774]: parsing config with SHA512: b70bec5a947945ea7f68e898fb9c9f46e3e8b35acdfeacaf1c7f1f3fea9ca7bcb3e772c043f8960488ecf154dc4974af552c68b16964f4765cd179f234a1a2c3
Feb 14 00:20:34.298327 unknown[774]: fetched base config from "system"
Feb 14 00:20:34.299113 ignition[774]: fetch: fetch complete
Feb 14 00:20:34.298362 unknown[774]: fetched base config from "system"
Feb 14 00:20:34.299123 ignition[774]: fetch: fetch passed
Feb 14 00:20:34.298374 unknown[774]: fetched user config from "openstack"
Feb 14 00:20:34.299203 ignition[774]: Ignition finished successfully
Feb 14 00:20:34.302580 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 14 00:20:34.315554 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 14 00:20:34.337926 ignition[781]: Ignition 2.19.0
Feb 14 00:20:34.337947 ignition[781]: Stage: kargs
Feb 14 00:20:34.338180 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Feb 14 00:20:34.341087 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 14 00:20:34.338201 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 14 00:20:34.339773 ignition[781]: kargs: kargs passed
Feb 14 00:20:34.339844 ignition[781]: Ignition finished successfully
Feb 14 00:20:34.348557 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 14 00:20:34.368758 ignition[787]: Ignition 2.19.0
Feb 14 00:20:34.368781 ignition[787]: Stage: disks
Feb 14 00:20:34.369003 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Feb 14 00:20:34.371305 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 14 00:20:34.369023 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 14 00:20:34.374010 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 14 00:20:34.370161 ignition[787]: disks: disks passed
Feb 14 00:20:34.375140 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 14 00:20:34.370230 ignition[787]: Ignition finished successfully
Feb 14 00:20:34.376873 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 14 00:20:34.378486 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 14 00:20:34.379795 systemd[1]: Reached target basic.target - Basic System.
Feb 14 00:20:34.388549 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 14 00:20:34.407078 systemd-fsck[795]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 14 00:20:34.410731 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 14 00:20:34.418463 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 14 00:20:34.542372 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 14 00:20:34.543708 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 14 00:20:34.545063 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 14 00:20:34.551449 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 14 00:20:34.567708 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 14 00:20:34.571061 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 14 00:20:34.572854 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Feb 14 00:20:34.575821 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 14 00:20:34.575874 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 14 00:20:34.589825 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803)
Feb 14 00:20:34.589860 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 14 00:20:34.589909 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 14 00:20:34.589932 kernel: BTRFS info (device vda6): using free space tree
Feb 14 00:20:34.593386 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 14 00:20:34.595904 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 14 00:20:34.599464 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 14 00:20:34.617964 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 14 00:20:34.698825 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory
Feb 14 00:20:34.708650 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory
Feb 14 00:20:34.718865 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory
Feb 14 00:20:34.726391 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 14 00:20:34.839872 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 14 00:20:34.847450 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 14 00:20:34.850574 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 14 00:20:34.861116 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 14 00:20:34.863438 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 14 00:20:34.896453 ignition[922]: INFO : Ignition 2.19.0
Feb 14 00:20:34.896453 ignition[922]: INFO : Stage: mount
Feb 14 00:20:34.898243 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 00:20:34.898243 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 14 00:20:34.898832 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 14 00:20:34.902020 ignition[922]: INFO : mount: mount passed
Feb 14 00:20:34.902020 ignition[922]: INFO : Ignition finished successfully
Feb 14 00:20:34.903230 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 14 00:20:35.389681 systemd-networkd[766]: eth0: Gained IPv6LL
Feb 14 00:20:36.209184 systemd-networkd[766]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8427:24:19ff:fee6:109e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8427:24:19ff:fee6:109e/64 assigned by NDisc.
Feb 14 00:20:36.209207 systemd-networkd[766]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Feb 14 00:20:41.781712 coreos-metadata[805]: Feb 14 00:20:41.781 WARN failed to locate config-drive, using the metadata service API instead
Feb 14 00:20:41.806824 coreos-metadata[805]: Feb 14 00:20:41.806 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Feb 14 00:20:41.822964 coreos-metadata[805]: Feb 14 00:20:41.822 INFO Fetch successful
Feb 14 00:20:41.823917 coreos-metadata[805]: Feb 14 00:20:41.823 INFO wrote hostname srv-skbpq.gb1.brightbox.com to /sysroot/etc/hostname
Feb 14 00:20:41.828069 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Feb 14 00:20:41.828251 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Feb 14 00:20:41.836544 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 14 00:20:41.852585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 14 00:20:41.867402 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938)
Feb 14 00:20:41.873623 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 14 00:20:41.873659 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 14 00:20:41.873678 kernel: BTRFS info (device vda6): using free space tree
Feb 14 00:20:41.878378 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 14 00:20:41.881688 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 14 00:20:41.915123 ignition[956]: INFO : Ignition 2.19.0
Feb 14 00:20:41.915123 ignition[956]: INFO : Stage: files
Feb 14 00:20:41.917008 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 00:20:41.917008 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 14 00:20:41.917008 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Feb 14 00:20:41.919876 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 14 00:20:41.919876 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 14 00:20:41.922067 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 14 00:20:41.923264 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 14 00:20:41.924801 unknown[956]: wrote ssh authorized keys file for user: core
Feb 14 00:20:41.925983 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 14 00:20:41.929337 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 14 00:20:41.929337 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 14 00:20:42.171394 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 14 00:20:47.587903 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 14 00:20:47.590380 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 14 00:20:47.590380 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 14 00:20:48.232759 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 14 00:20:48.642135 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 14 00:20:48.644131 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 14 00:20:48.644131 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 14 00:20:48.644131 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 14 00:20:48.648456 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 14 00:20:48.648456 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 14 00:20:48.648456 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 14 00:20:48.648456 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 14 00:20:48.648456 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 14 00:20:48.648456 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 14 00:20:48.648456 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 14 00:20:48.648456 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 14 00:20:48.648456 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 14 00:20:48.648456 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 14 00:20:48.648456 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 14 00:20:49.120899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 14 00:20:50.674538 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 14 00:20:50.674538 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 14 00:20:50.678059 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 14 00:20:50.678059 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 14 00:20:50.678059 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 14 00:20:50.678059 ignition[956]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 14 00:20:50.678059 ignition[956]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 14 00:20:50.678059 ignition[956]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 14 00:20:50.678059 ignition[956]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 14 00:20:50.678059 ignition[956]: INFO : files: files passed
Feb 14 00:20:50.689012 ignition[956]: INFO : Ignition finished successfully
Feb 14 00:20:50.680560 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 14 00:20:50.691681 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 14 00:20:50.700583 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 14 00:20:50.706379 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 14 00:20:50.706557 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 14 00:20:50.719220 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 00:20:50.719220 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 00:20:50.723680 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 00:20:50.726113 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 14 00:20:50.728854 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 14 00:20:50.736602 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 14 00:20:50.770103 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 14 00:20:50.770291 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 14 00:20:50.772465 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 14 00:20:50.773852 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 14 00:20:50.775551 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 14 00:20:50.781558 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 14 00:20:50.801009 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 14 00:20:50.807589 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 14 00:20:50.833038 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 14 00:20:50.834090 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 14 00:20:50.835939 systemd[1]: Stopped target timers.target - Timer Units.
Feb 14 00:20:50.837587 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 14 00:20:50.837781 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 14 00:20:50.839612 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 14 00:20:50.840680 systemd[1]: Stopped target basic.target - Basic System.
Feb 14 00:20:50.842196 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 14 00:20:50.843780 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 14 00:20:50.845157 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 14 00:20:50.846807 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 14 00:20:50.848521 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 14 00:20:50.850138 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 14 00:20:50.851683 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 14 00:20:50.853283 systemd[1]: Stopped target swap.target - Swaps.
Feb 14 00:20:50.854759 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 14 00:20:50.854958 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 14 00:20:50.856732 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 14 00:20:50.857761 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 14 00:20:50.859273 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 14 00:20:50.859479 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 14 00:20:50.861004 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 14 00:20:50.861259 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 14 00:20:50.863233 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 14 00:20:50.863440 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 14 00:20:50.865127 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 14 00:20:50.865282 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 14 00:20:50.872711 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 14 00:20:50.873514 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 14 00:20:50.873786 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 14 00:20:50.885227 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 14 00:20:50.889453 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 14 00:20:50.889667 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 14 00:20:50.892557 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 14 00:20:50.892752 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 14 00:20:50.902730 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 14 00:20:50.903820 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 14 00:20:50.919388 ignition[1009]: INFO : Ignition 2.19.0
Feb 14 00:20:50.919388 ignition[1009]: INFO : Stage: umount
Feb 14 00:20:50.923362 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 00:20:50.923362 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 14 00:20:50.923362 ignition[1009]: INFO : umount: umount passed
Feb 14 00:20:50.923142 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 14 00:20:50.928777 ignition[1009]: INFO : Ignition finished successfully
Feb 14 00:20:50.928110 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 14 00:20:50.928327 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 14 00:20:50.930889 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 14 00:20:50.931063 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 14 00:20:50.933126 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 14 00:20:50.933234 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 14 00:20:50.934414 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 14 00:20:50.934485 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 14 00:20:50.935798 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 14 00:20:50.935874 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 14 00:20:50.937216 systemd[1]: Stopped target network.target - Network.
Feb 14 00:20:50.938546 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 14 00:20:50.938631 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 14 00:20:50.940029 systemd[1]: Stopped target paths.target - Path Units.
Feb 14 00:20:50.941408 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 14 00:20:50.941494 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 14 00:20:50.942895 systemd[1]: Stopped target slices.target - Slice Units.
Feb 14 00:20:50.944280 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 14 00:20:50.945857 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 14 00:20:50.945962 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 14 00:20:50.947302 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 14 00:20:50.947423 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 14 00:20:50.948918 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 14 00:20:50.949056 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 14 00:20:50.950328 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 14 00:20:50.950424 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 14 00:20:50.951954 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 14 00:20:50.952063 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 14 00:20:50.954101 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 14 00:20:50.956163 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 14 00:20:50.959585 systemd-networkd[766]: eth0: DHCPv6 lease lost
Feb 14 00:20:50.965245 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 14 00:20:50.965805 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 14 00:20:50.967719 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 14 00:20:50.967905 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 14 00:20:50.972634 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 14 00:20:50.972703 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 14 00:20:50.979638 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 14 00:20:50.980438 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 14 00:20:50.980533 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 14 00:20:50.982830 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 14 00:20:50.982908 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 14 00:20:50.983654 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 14 00:20:50.983724 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 14 00:20:50.985894 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 14 00:20:50.985966 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 14 00:20:50.987475 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 14 00:20:50.997960 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 14 00:20:50.998221 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 14 00:20:51.000997 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 14 00:20:51.001130 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 14 00:20:51.004533 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 14 00:20:51.004589 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 14 00:20:51.006852 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 14 00:20:51.006936 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 14 00:20:51.009221 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 14 00:20:51.009304 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 14 00:20:51.010882 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 14 00:20:51.010974 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 14 00:20:51.018626 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 14 00:20:51.019470 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 14 00:20:51.019561 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 14 00:20:51.021253 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 14 00:20:51.021380 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 00:20:51.024534 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 14 00:20:51.024736 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 14 00:20:51.032734 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 14 00:20:51.032926 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 14 00:20:51.034832 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 14 00:20:51.045675 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 14 00:20:51.056867 systemd[1]: Switching root.
Feb 14 00:20:51.092395 systemd-journald[201]: Journal stopped
Feb 14 00:20:52.729931 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Feb 14 00:20:52.730123 kernel: SELinux: policy capability network_peer_controls=1
Feb 14 00:20:52.730166 kernel: SELinux: policy capability open_perms=1
Feb 14 00:20:52.730216 kernel: SELinux: policy capability extended_socket_class=1
Feb 14 00:20:52.730241 kernel: SELinux: policy capability always_check_network=0
Feb 14 00:20:52.730288 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 14 00:20:52.730329 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 14 00:20:52.732401 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 14 00:20:52.732428 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 14 00:20:52.732458 kernel: audit: type=1403 audit(1739492451.494:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 14 00:20:52.732492 systemd[1]: Successfully loaded SELinux policy in 64.840ms.
Feb 14 00:20:52.732536 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.654ms.
Feb 14 00:20:52.732566 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 14 00:20:52.732606 systemd[1]: Detected virtualization kvm.
Feb 14 00:20:52.732643 systemd[1]: Detected architecture x86-64.
Feb 14 00:20:52.732672 systemd[1]: Detected first boot.
Feb 14 00:20:52.732693 systemd[1]: Hostname set to .
Feb 14 00:20:52.732713 systemd[1]: Initializing machine ID from VM UUID.
Feb 14 00:20:52.732741 zram_generator::config[1051]: No configuration found.
Feb 14 00:20:52.732770 systemd[1]: Populated /etc with preset unit settings.
Feb 14 00:20:52.732792 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 14 00:20:52.732825 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 14 00:20:52.732848 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 14 00:20:52.732871 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 14 00:20:52.732892 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 14 00:20:52.732919 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 14 00:20:52.732941 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 14 00:20:52.732962 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 14 00:20:52.732985 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 14 00:20:52.733012 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 14 00:20:52.733046 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 14 00:20:52.733075 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 14 00:20:52.733104 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 14 00:20:52.733131 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 14 00:20:52.733171 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 14 00:20:52.733194 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 14 00:20:52.733221 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 14 00:20:52.733249 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 14 00:20:52.733270 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 14 00:20:52.733313 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 14 00:20:52.733336 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 14 00:20:52.733378 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 14 00:20:52.733401 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 14 00:20:52.733421 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 14 00:20:52.733450 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 14 00:20:52.733506 systemd[1]: Reached target slices.target - Slice Units.
Feb 14 00:20:52.733541 systemd[1]: Reached target swap.target - Swaps.
Feb 14 00:20:52.733576 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 14 00:20:52.733613 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 14 00:20:52.733648 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 14 00:20:52.733670 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 14 00:20:52.733711 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 14 00:20:52.733734 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 14 00:20:52.733754 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 14 00:20:52.733775 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 14 00:20:52.733802 systemd[1]: Mounting media.mount - External Media Directory...
Feb 14 00:20:52.733833 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:20:52.733854 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 14 00:20:52.733876 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 14 00:20:52.733902 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 14 00:20:52.733947 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 14 00:20:52.733969 systemd[1]: Reached target machines.target - Containers.
Feb 14 00:20:52.733990 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 14 00:20:52.734010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 14 00:20:52.734037 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 14 00:20:52.734059 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 14 00:20:52.734094 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 14 00:20:52.734116 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 14 00:20:52.734150 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 14 00:20:52.734180 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 14 00:20:52.734202 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 14 00:20:52.734223 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 14 00:20:52.734244 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 14 00:20:52.734265 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 14 00:20:52.734296 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 14 00:20:52.734318 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 14 00:20:52.736386 kernel: fuse: init (API version 7.39)
Feb 14 00:20:52.736428 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 14 00:20:52.736452 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 14 00:20:52.736473 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 14 00:20:52.736494 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 14 00:20:52.736521 kernel: loop: module loaded
Feb 14 00:20:52.736548 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 14 00:20:52.736575 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 14 00:20:52.736596 systemd[1]: Stopped verity-setup.service.
Feb 14 00:20:52.736618 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:20:52.736668 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 14 00:20:52.736691 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 14 00:20:52.736711 systemd[1]: Mounted media.mount - External Media Directory.
Feb 14 00:20:52.736732 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 14 00:20:52.736767 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 14 00:20:52.736790 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 14 00:20:52.736810 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 14 00:20:52.736882 systemd-journald[1140]: Collecting audit messages is disabled.
Feb 14 00:20:52.736937 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 14 00:20:52.736961 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 14 00:20:52.736983 systemd-journald[1140]: Journal started
Feb 14 00:20:52.737031 systemd-journald[1140]: Runtime Journal (/run/log/journal/eb0c818f0d734909ab8ac19dc1b5c3f6) is 4.7M, max 38.0M, 33.2M free.
Feb 14 00:20:52.307908 systemd[1]: Queued start job for default target multi-user.target.
Feb 14 00:20:52.334401 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 14 00:20:52.335085 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 14 00:20:52.750395 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 14 00:20:52.752636 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 14 00:20:52.752891 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 14 00:20:52.754073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 14 00:20:52.754446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 14 00:20:52.756147 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 14 00:20:52.757123 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 14 00:20:52.759680 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 14 00:20:52.759908 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 14 00:20:52.761038 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 14 00:20:52.762768 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 14 00:20:52.763905 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 14 00:20:52.771382 kernel: ACPI: bus type drm_connector registered
Feb 14 00:20:52.771664 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 14 00:20:52.775151 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 14 00:20:52.775587 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 14 00:20:52.787841 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 14 00:20:52.799446 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 14 00:20:52.811425 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 14 00:20:52.812266 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 14 00:20:52.812358 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 14 00:20:52.815246 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 14 00:20:52.822654 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 14 00:20:52.825548 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 14 00:20:52.826864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 14 00:20:52.834593 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 14 00:20:52.843516 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 14 00:20:52.844383 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 14 00:20:52.854602 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 14 00:20:52.855544 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 14 00:20:52.861561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 14 00:20:52.866990 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 14 00:20:52.878536 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 14 00:20:52.884853 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 14 00:20:52.885873 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 14 00:20:52.888061 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 14 00:20:52.899407 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 14 00:20:52.903315 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 14 00:20:52.908370 kernel: loop0: detected capacity change from 0 to 142488
Feb 14 00:20:52.909630 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 14 00:20:52.921222 systemd-journald[1140]: Time spent on flushing to /var/log/journal/eb0c818f0d734909ab8ac19dc1b5c3f6 is 105.769ms for 1147 entries.
Feb 14 00:20:52.921222 systemd-journald[1140]: System Journal (/var/log/journal/eb0c818f0d734909ab8ac19dc1b5c3f6) is 8.0M, max 584.8M, 576.8M free.
Feb 14 00:20:53.056619 systemd-journald[1140]: Received client request to flush runtime journal.
Feb 14 00:20:53.056682 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 14 00:20:53.056709 kernel: loop1: detected capacity change from 0 to 8
Feb 14 00:20:53.056734 kernel: loop2: detected capacity change from 0 to 210664
Feb 14 00:20:53.018764 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 14 00:20:53.021942 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 14 00:20:53.036467 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 14 00:20:53.060874 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 14 00:20:53.080064 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 14 00:20:53.093670 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 14 00:20:53.097544 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 14 00:20:53.107575 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 14 00:20:53.108442 kernel: loop3: detected capacity change from 0 to 140768
Feb 14 00:20:53.143423 udevadm[1205]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 14 00:20:53.157530 kernel: loop4: detected capacity change from 0 to 142488
Feb 14 00:20:53.196562 kernel: loop5: detected capacity change from 0 to 8
Feb 14 00:20:53.196667 kernel: loop6: detected capacity change from 0 to 210664
Feb 14 00:20:53.203855 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Feb 14 00:20:53.204491 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Feb 14 00:20:53.220394 kernel: loop7: detected capacity change from 0 to 140768
Feb 14 00:20:53.227447 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 14 00:20:53.255667 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Feb 14 00:20:53.257305 (sd-merge)[1208]: Merged extensions into '/usr'.
Feb 14 00:20:53.266819 systemd[1]: Reloading requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 14 00:20:53.266863 systemd[1]: Reloading...
Feb 14 00:20:53.469394 zram_generator::config[1235]: No configuration found.
Feb 14 00:20:53.687940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 14 00:20:53.710375 ldconfig[1179]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 14 00:20:53.758923 systemd[1]: Reloading finished in 491 ms.
Feb 14 00:20:53.792063 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 14 00:20:53.793755 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 14 00:20:53.807683 systemd[1]: Starting ensure-sysext.service...
Feb 14 00:20:53.816733 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 14 00:20:53.833527 systemd[1]: Reloading requested from client PID 1291 ('systemctl') (unit ensure-sysext.service)...
Feb 14 00:20:53.833546 systemd[1]: Reloading...
Feb 14 00:20:53.894991 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 14 00:20:53.895653 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 14 00:20:53.897793 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 14 00:20:53.898197 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Feb 14 00:20:53.898330 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Feb 14 00:20:53.910896 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot.
Feb 14 00:20:53.910918 systemd-tmpfiles[1292]: Skipping /boot
Feb 14 00:20:53.958823 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot.
Feb 14 00:20:53.958846 systemd-tmpfiles[1292]: Skipping /boot
Feb 14 00:20:53.992384 zram_generator::config[1318]: No configuration found.
Feb 14 00:20:54.179609 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 14 00:20:54.249415 systemd[1]: Reloading finished in 415 ms.
Feb 14 00:20:54.271519 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 14 00:20:54.285402 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 14 00:20:54.301571 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 14 00:20:54.306547 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 14 00:20:54.311564 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 14 00:20:54.318606 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 14 00:20:54.321562 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 14 00:20:54.332577 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 14 00:20:54.341556 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:20:54.341845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 14 00:20:54.350729 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 14 00:20:54.362130 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 14 00:20:54.372703 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 14 00:20:54.373753 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 14 00:20:54.373926 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:20:54.379530 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:20:54.379814 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 14 00:20:54.380036 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 14 00:20:54.390537 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 14 00:20:54.392220 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:20:54.393435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 14 00:20:54.393691 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 14 00:20:54.401892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 14 00:20:54.402495 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 14 00:20:54.414498 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:20:54.414901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 14 00:20:54.423730 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 14 00:20:54.427644 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 14 00:20:54.431659 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 14 00:20:54.433681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 14 00:20:54.433894 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 14 00:20:54.435712 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 14 00:20:54.441137 systemd[1]: Finished ensure-sysext.service.
Feb 14 00:20:54.442901 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 14 00:20:54.461580 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 14 00:20:54.471529 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 14 00:20:54.473422 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 14 00:20:54.477229 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 14 00:20:54.484617 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 14 00:20:54.485434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 14 00:20:54.490279 augenrules[1412]: No rules
Feb 14 00:20:54.495455 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 14 00:20:54.502822 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 14 00:20:54.503465 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 14 00:20:54.505944 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 14 00:20:54.506218 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 14 00:20:54.507591 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 14 00:20:54.507809 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 14 00:20:54.512864 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 14 00:20:54.512975 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 14 00:20:54.518850 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 14 00:20:54.534878 systemd-udevd[1387]: Using default interface naming scheme 'v255'.
Feb 14 00:20:54.541796 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 14 00:20:54.579494 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 14 00:20:54.588580 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 14 00:20:54.700729 systemd-networkd[1430]: lo: Link UP
Feb 14 00:20:54.700743 systemd-networkd[1430]: lo: Gained carrier
Feb 14 00:20:54.701804 systemd-networkd[1430]: Enumeration completed
Feb 14 00:20:54.701943 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 14 00:20:54.718627 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 14 00:20:54.728963 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 14 00:20:54.729954 systemd[1]: Reached target time-set.target - System Time Set.
Feb 14 00:20:54.739810 systemd-resolved[1381]: Positive Trust Anchors:
Feb 14 00:20:54.741729 systemd-resolved[1381]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 14 00:20:54.741782 systemd-resolved[1381]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 14 00:20:54.754973 systemd-resolved[1381]: Using system hostname 'srv-skbpq.gb1.brightbox.com'.
Feb 14 00:20:54.758170 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 14 00:20:54.759113 systemd[1]: Reached target network.target - Network.
Feb 14 00:20:54.760393 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 14 00:20:54.772794 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 14 00:20:54.829399 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1440)
Feb 14 00:20:54.874632 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 14 00:20:54.874647 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 14 00:20:54.876763 systemd-networkd[1430]: eth0: Link UP
Feb 14 00:20:54.876785 systemd-networkd[1430]: eth0: Gained carrier
Feb 14 00:20:54.876805 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 14 00:20:54.928483 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 14 00:20:54.919448 systemd-networkd[1430]: eth0: DHCPv4 address 10.230.16.158/30, gateway 10.230.16.157 acquired from 10.230.16.157
Feb 14 00:20:54.920500 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Feb 14 00:20:54.921689 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 14 00:20:54.932774 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 14 00:20:54.945390 kernel: mousedev: PS/2 mouse device common for all mice
Feb 14 00:20:54.960964 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 14 00:20:54.961970 kernel: ACPI: button: Power Button [PWRF]
Feb 14 00:20:55.004384 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 14 00:20:55.013184 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 14 00:20:55.018789 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 14 00:20:55.031398 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb 14 00:20:55.089496 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 14 00:20:55.277769 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 00:20:55.297953 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 14 00:20:55.305570 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 14 00:20:55.322929 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 14 00:20:55.361007 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 14 00:20:55.362290 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 14 00:20:55.363133 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 14 00:20:55.364043 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 14 00:20:55.364907 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 14 00:20:55.366102 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 14 00:20:55.367205 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 14 00:20:55.368031 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 14 00:20:55.368860 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 14 00:20:55.368912 systemd[1]: Reached target paths.target - Path Units.
Feb 14 00:20:55.369644 systemd[1]: Reached target timers.target - Timer Units.
Feb 14 00:20:55.372839 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 14 00:20:55.376559 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 14 00:20:55.382653 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 14 00:20:55.385278 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 14 00:20:55.386680 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 14 00:20:55.387569 systemd[1]: Reached target sockets.target - Socket Units.
Feb 14 00:20:55.388329 systemd[1]: Reached target basic.target - Basic System.
Feb 14 00:20:55.389191 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 14 00:20:55.389259 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 14 00:20:55.392532 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 14 00:20:55.398824 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 14 00:20:55.402585 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 14 00:20:55.405685 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 14 00:20:55.412436 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 14 00:20:55.421687 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 14 00:20:55.423437 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 14 00:20:55.426487 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 14 00:20:55.433480 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 14 00:20:55.442638 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 14 00:20:55.447579 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 14 00:20:55.461598 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 14 00:20:55.476448 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 14 00:20:55.489316 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 14 00:20:55.499703 systemd[1]: Starting update-engine.service - Update Engine...
Feb 14 00:20:55.516500 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 14 00:20:55.521415 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 14 00:20:55.530502 jq[1475]: false
Feb 14 00:20:55.531134 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 14 00:20:55.531480 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 14 00:20:55.533932 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 14 00:20:55.534818 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 14 00:20:55.552952 systemd[1]: motdgen.service: Deactivated successfully.
Feb 14 00:20:55.556017 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 14 00:20:55.560126 dbus-daemon[1474]: [system] SELinux support is enabled
Feb 14 00:20:55.569259 dbus-daemon[1474]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1430 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 14 00:20:55.574829 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 14 00:20:55.598378 jq[1492]: true
Feb 14 00:20:55.616088 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 14 00:20:55.619568 tar[1495]: linux-amd64/helm
Feb 14 00:20:55.617004 dbus-daemon[1474]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 14 00:20:55.617312 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 14 00:20:55.617394 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 14 00:20:55.622618 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 14 00:20:55.622660 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 14 00:20:55.638828 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 14 00:20:55.641984 jq[1508]: true Feb 14 00:20:55.653956 (ntainerd)[1503]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 14 00:20:55.666824 extend-filesystems[1476]: Found loop4 Feb 14 00:20:55.666824 extend-filesystems[1476]: Found loop5 Feb 14 00:20:55.666824 extend-filesystems[1476]: Found loop6 Feb 14 00:20:55.666824 extend-filesystems[1476]: Found loop7 Feb 14 00:20:55.666824 extend-filesystems[1476]: Found vda Feb 14 00:20:55.666824 extend-filesystems[1476]: Found vda1 Feb 14 00:20:55.666824 extend-filesystems[1476]: Found vda2 Feb 14 00:20:55.666824 extend-filesystems[1476]: Found vda3 Feb 14 00:20:55.666824 extend-filesystems[1476]: Found usr Feb 14 00:20:55.666824 extend-filesystems[1476]: Found vda4 Feb 14 00:20:55.666824 extend-filesystems[1476]: Found vda6 Feb 14 00:20:55.666824 extend-filesystems[1476]: Found vda7 Feb 14 00:20:55.666824 extend-filesystems[1476]: Found vda9 Feb 14 00:20:55.666824 extend-filesystems[1476]: Checking size of /dev/vda9 Feb 14 00:20:55.696843 update_engine[1486]: I20250214 00:20:55.675550 1486 main.cc:92] Flatcar Update Engine starting Feb 14 00:20:55.696843 update_engine[1486]: I20250214 00:20:55.677875 1486 update_check_scheduler.cc:74] Next update check in 11m37s Feb 14 00:20:55.678239 systemd[1]: Started update-engine.service - Update Engine. Feb 14 00:20:55.682597 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 14 00:20:55.715421 systemd-logind[1482]: Watching system buttons on /dev/input/event2 (Power Button) Feb 14 00:20:55.715462 systemd-logind[1482]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 14 00:20:55.717785 systemd-logind[1482]: New seat seat0. Feb 14 00:20:55.719355 systemd[1]: Started systemd-logind.service - User Login Management. Feb 14 00:20:55.746628 systemd-timesyncd[1407]: Contacted time server 131.111.8.60:123 (0.flatcar.pool.ntp.org). 
Feb 14 00:20:55.746763 systemd-timesyncd[1407]: Initial clock synchronization to Fri 2025-02-14 00:20:56.047842 UTC. Feb 14 00:20:55.750556 extend-filesystems[1476]: Resized partition /dev/vda9 Feb 14 00:20:55.764326 extend-filesystems[1522]: resize2fs 1.47.1 (20-May-2024) Feb 14 00:20:55.772373 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Feb 14 00:20:55.879161 bash[1532]: Updated "/home/core/.ssh/authorized_keys" Feb 14 00:20:55.880924 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 14 00:20:55.899797 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1438) Feb 14 00:20:55.895960 systemd[1]: Starting sshkeys.service... Feb 14 00:20:56.028015 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 14 00:20:56.038898 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 14 00:20:56.151408 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 14 00:20:56.183483 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 14 00:20:56.183483 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 14 00:20:56.183483 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 14 00:20:56.203135 extend-filesystems[1476]: Resized filesystem in /dev/vda9 Feb 14 00:20:56.190029 dbus-daemon[1474]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 14 00:20:56.185084 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Feb 14 00:20:56.196029 dbus-daemon[1474]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1510 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 14 00:20:56.186485 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 14 00:20:56.191176 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 14 00:20:56.206823 systemd[1]: Starting polkit.service - Authorization Manager... Feb 14 00:20:56.260027 polkitd[1548]: Started polkitd version 121 Feb 14 00:20:56.279230 polkitd[1548]: Loading rules from directory /etc/polkit-1/rules.d Feb 14 00:20:56.283185 polkitd[1548]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 14 00:20:56.288716 polkitd[1548]: Finished loading, compiling and executing 2 rules Feb 14 00:20:56.291562 dbus-daemon[1474]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 14 00:20:56.291901 systemd[1]: Started polkit.service - Authorization Manager. Feb 14 00:20:56.293435 polkitd[1548]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 14 00:20:56.308431 containerd[1503]: time="2025-02-14T00:20:56.308187274Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 14 00:20:56.308689 locksmithd[1514]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 14 00:20:56.334149 systemd-hostnamed[1510]: Hostname set to (static) Feb 14 00:20:56.404237 containerd[1503]: time="2025-02-14T00:20:56.404161593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 14 00:20:56.410663 containerd[1503]: time="2025-02-14T00:20:56.410621901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:20:56.410727 containerd[1503]: time="2025-02-14T00:20:56.410664714Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 14 00:20:56.410727 containerd[1503]: time="2025-02-14T00:20:56.410690576Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 14 00:20:56.412391 containerd[1503]: time="2025-02-14T00:20:56.410964603Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 14 00:20:56.412391 containerd[1503]: time="2025-02-14T00:20:56.411006360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 14 00:20:56.412391 containerd[1503]: time="2025-02-14T00:20:56.411148254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:20:56.412391 containerd[1503]: time="2025-02-14T00:20:56.411186423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 14 00:20:56.412391 containerd[1503]: time="2025-02-14T00:20:56.411461802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:20:56.412391 containerd[1503]: time="2025-02-14T00:20:56.411498403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 14 00:20:56.412391 containerd[1503]: time="2025-02-14T00:20:56.411523618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:20:56.412391 containerd[1503]: time="2025-02-14T00:20:56.411551914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 14 00:20:56.412391 containerd[1503]: time="2025-02-14T00:20:56.411688224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 14 00:20:56.412391 containerd[1503]: time="2025-02-14T00:20:56.412058343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 14 00:20:56.412391 containerd[1503]: time="2025-02-14T00:20:56.412211817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:20:56.412808 containerd[1503]: time="2025-02-14T00:20:56.412277432Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 14 00:20:56.415290 containerd[1503]: time="2025-02-14T00:20:56.415249583Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 14 00:20:56.415420 containerd[1503]: time="2025-02-14T00:20:56.415365168Z" level=info msg="metadata content store policy set" policy=shared Feb 14 00:20:56.420587 containerd[1503]: time="2025-02-14T00:20:56.420552130Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 14 00:20:56.420692 containerd[1503]: time="2025-02-14T00:20:56.420663892Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 14 00:20:56.421340 containerd[1503]: time="2025-02-14T00:20:56.421310341Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 14 00:20:56.421426 containerd[1503]: time="2025-02-14T00:20:56.421352205Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 14 00:20:56.421470 containerd[1503]: time="2025-02-14T00:20:56.421430065Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 14 00:20:56.422407 containerd[1503]: time="2025-02-14T00:20:56.421646116Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 14 00:20:56.423149 containerd[1503]: time="2025-02-14T00:20:56.423116754Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 14 00:20:56.423412 containerd[1503]: time="2025-02-14T00:20:56.423382703Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 14 00:20:56.423476 containerd[1503]: time="2025-02-14T00:20:56.423430374Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 14 00:20:56.423476 containerd[1503]: time="2025-02-14T00:20:56.423463170Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 14 00:20:56.423543 containerd[1503]: time="2025-02-14T00:20:56.423494251Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 14 00:20:56.423543 containerd[1503]: time="2025-02-14T00:20:56.423523087Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 14 00:20:56.423624 containerd[1503]: time="2025-02-14T00:20:56.423545904Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 14 00:20:56.423624 containerd[1503]: time="2025-02-14T00:20:56.423603188Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 14 00:20:56.423713 containerd[1503]: time="2025-02-14T00:20:56.423637356Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 14 00:20:56.423713 containerd[1503]: time="2025-02-14T00:20:56.423664720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 14 00:20:56.423713 containerd[1503]: time="2025-02-14T00:20:56.423691789Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 14 00:20:56.423853 containerd[1503]: time="2025-02-14T00:20:56.423718115Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 14 00:20:56.423853 containerd[1503]: time="2025-02-14T00:20:56.423767135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 14 00:20:56.423853 containerd[1503]: time="2025-02-14T00:20:56.423801379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 14 00:20:56.423853 containerd[1503]: time="2025-02-14T00:20:56.423825166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 14 00:20:56.424014 containerd[1503]: time="2025-02-14T00:20:56.423925144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 14 00:20:56.424014 containerd[1503]: time="2025-02-14T00:20:56.423970316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 14 00:20:56.424014 containerd[1503]: time="2025-02-14T00:20:56.424001476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 14 00:20:56.424130 containerd[1503]: time="2025-02-14T00:20:56.424026727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 14 00:20:56.424130 containerd[1503]: time="2025-02-14T00:20:56.424053232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 14 00:20:56.424130 containerd[1503]: time="2025-02-14T00:20:56.424077524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 14 00:20:56.424130 containerd[1503]: time="2025-02-14T00:20:56.424106927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 14 00:20:56.424265 containerd[1503]: time="2025-02-14T00:20:56.424133897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 14 00:20:56.424265 containerd[1503]: time="2025-02-14T00:20:56.424158463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 14 00:20:56.424265 containerd[1503]: time="2025-02-14T00:20:56.424184273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 14 00:20:56.424265 containerd[1503]: time="2025-02-14T00:20:56.424233013Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 14 00:20:56.426481 containerd[1503]: time="2025-02-14T00:20:56.424282543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 14 00:20:56.426481 containerd[1503]: time="2025-02-14T00:20:56.424312277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 14 00:20:56.426481 containerd[1503]: time="2025-02-14T00:20:56.424336597Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 14 00:20:56.431225 containerd[1503]: time="2025-02-14T00:20:56.431172909Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 14 00:20:56.431720 containerd[1503]: time="2025-02-14T00:20:56.431662246Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 14 00:20:56.431789 containerd[1503]: time="2025-02-14T00:20:56.431720890Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 14 00:20:56.431789 containerd[1503]: time="2025-02-14T00:20:56.431765553Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 14 00:20:56.431863 containerd[1503]: time="2025-02-14T00:20:56.431792989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 14 00:20:56.431863 containerd[1503]: time="2025-02-14T00:20:56.431822574Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 14 00:20:56.431863 containerd[1503]: time="2025-02-14T00:20:56.431849904Z" level=info msg="NRI interface is disabled by configuration." Feb 14 00:20:56.432046 containerd[1503]: time="2025-02-14T00:20:56.431875713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 14 00:20:56.433424 containerd[1503]: time="2025-02-14T00:20:56.432415895Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 14 00:20:56.433424 containerd[1503]: time="2025-02-14T00:20:56.432518136Z" level=info msg="Connect containerd service" Feb 14 00:20:56.433424 containerd[1503]: time="2025-02-14T00:20:56.432597952Z" level=info msg="using legacy CRI server" Feb 14 00:20:56.433424 containerd[1503]: time="2025-02-14T00:20:56.432622927Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 14 00:20:56.433424 containerd[1503]: time="2025-02-14T00:20:56.432781408Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 14 00:20:56.435049 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 14 00:20:56.437784 containerd[1503]: time="2025-02-14T00:20:56.437375077Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 14 00:20:56.437979 containerd[1503]: time="2025-02-14T00:20:56.437920534Z" level=info msg="Start subscribing containerd event" Feb 14 00:20:56.438786 containerd[1503]: time="2025-02-14T00:20:56.438755350Z" level=info msg="Start recovering state" Feb 14 00:20:56.439038 containerd[1503]: time="2025-02-14T00:20:56.439009510Z" level=info msg="Start event monitor"
Feb 14 00:20:56.439411 containerd[1503]: time="2025-02-14T00:20:56.439206569Z" level=info msg="Start snapshots syncer" Feb 14 00:20:56.439411 containerd[1503]: time="2025-02-14T00:20:56.439247064Z" level=info msg="Start cni network conf syncer for default" Feb 14 00:20:56.439411 containerd[1503]: time="2025-02-14T00:20:56.439265360Z" level=info msg="Start streaming server" Feb 14 00:20:56.443203 containerd[1503]: time="2025-02-14T00:20:56.438466267Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 14 00:20:56.443203 containerd[1503]: time="2025-02-14T00:20:56.441463352Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 14 00:20:56.441707 systemd[1]: Started containerd.service - containerd container runtime. Feb 14 00:20:56.443562 containerd[1503]: time="2025-02-14T00:20:56.443532301Z" level=info msg="containerd successfully booted in 0.140230s" Feb 14 00:20:56.445645 systemd-networkd[1430]: eth0: Gained IPv6LL Feb 14 00:20:56.453993 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 14 00:20:56.455880 systemd[1]: Reached target network-online.target - Network is Online. Feb 14 00:20:56.465734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:20:56.472613 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 14 00:20:56.510092 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 14 00:20:56.523607 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 14 00:20:56.534345 systemd[1]: Started sshd@0-10.230.16.158:22-147.75.109.163:43020.service - OpenSSH per-connection server daemon (147.75.109.163:43020). Feb 14 00:20:56.555617 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 14 00:20:56.569573 systemd[1]: issuegen.service: Deactivated successfully. Feb 14 00:20:56.569855 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 14 00:20:56.584816 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 14 00:20:56.621882 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 14 00:20:56.631119 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 14 00:20:56.641933 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 14 00:20:56.643532 systemd[1]: Reached target getty.target - Login Prompts. Feb 14 00:20:56.797572 tar[1495]: linux-amd64/LICENSE Feb 14 00:20:56.797572 tar[1495]: linux-amd64/README.md Feb 14 00:20:56.811983 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 14 00:20:57.477159 sshd[1580]: Accepted publickey for core from 147.75.109.163 port 43020 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:20:57.480631 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:20:57.489055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:20:57.496981 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 00:20:57.505180 systemd-logind[1482]: New session 1 of user core. Feb 14 00:20:57.508609 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 14 00:20:57.517839 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 14 00:20:57.550069 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 14 00:20:57.561665 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 14 00:20:57.580767 (systemd)[1605]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 14 00:20:57.743744 systemd[1605]: Queued start job for default target default.target. Feb 14 00:20:57.755818 systemd[1605]: Created slice app.slice - User Application Slice. 
Feb 14 00:20:57.755987 systemd[1605]: Reached target paths.target - Paths. Feb 14 00:20:57.756124 systemd[1605]: Reached target timers.target - Timers. Feb 14 00:20:57.760545 systemd[1605]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 14 00:20:57.783687 systemd[1605]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 14 00:20:57.784683 systemd[1605]: Reached target sockets.target - Sockets. Feb 14 00:20:57.784722 systemd[1605]: Reached target basic.target - Basic System. Feb 14 00:20:57.784846 systemd[1605]: Reached target default.target - Main User Target. Feb 14 00:20:57.784914 systemd[1605]: Startup finished in 191ms. Feb 14 00:20:57.785096 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 14 00:20:57.794715 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 14 00:20:57.956607 systemd-networkd[1430]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8427:24:19ff:fee6:109e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8427:24:19ff:fee6:109e/64 assigned by NDisc. Feb 14 00:20:57.957415 systemd-networkd[1430]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 14 00:20:58.210971 kubelet[1602]: E0214 00:20:58.210656 1602 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 00:20:58.213618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 00:20:58.213942 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 00:20:58.214716 systemd[1]: kubelet.service: Consumed 1.043s CPU time. 
Feb 14 00:20:58.458912 systemd[1]: Started sshd@1-10.230.16.158:22-147.75.109.163:43030.service - OpenSSH per-connection server daemon (147.75.109.163:43030). Feb 14 00:20:59.374494 sshd[1624]: Accepted publickey for core from 147.75.109.163 port 43030 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:20:59.377210 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:20:59.384992 systemd-logind[1482]: New session 2 of user core. Feb 14 00:20:59.393702 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 14 00:21:00.015309 sshd[1624]: pam_unix(sshd:session): session closed for user core Feb 14 00:21:00.020924 systemd[1]: sshd@1-10.230.16.158:22-147.75.109.163:43030.service: Deactivated successfully. Feb 14 00:21:00.023713 systemd[1]: session-2.scope: Deactivated successfully. Feb 14 00:21:00.025183 systemd-logind[1482]: Session 2 logged out. Waiting for processes to exit. Feb 14 00:21:00.027334 systemd-logind[1482]: Removed session 2. Feb 14 00:21:00.177918 systemd[1]: Started sshd@2-10.230.16.158:22-147.75.109.163:49494.service - OpenSSH per-connection server daemon (147.75.109.163:49494). Feb 14 00:21:01.077752 sshd[1633]: Accepted publickey for core from 147.75.109.163 port 49494 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:21:01.080790 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:21:01.088689 systemd-logind[1482]: New session 3 of user core. Feb 14 00:21:01.099691 systemd[1]: Started session-3.scope - Session 3 of User core. 
Feb 14 00:21:01.701157 login[1591]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 14 00:21:01.704200 login[1592]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 14 00:21:01.705863 sshd[1633]: pam_unix(sshd:session): session closed for user core Feb 14 00:21:01.710759 systemd-logind[1482]: New session 4 of user core. Feb 14 00:21:01.721053 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 14 00:21:01.722108 systemd[1]: sshd@2-10.230.16.158:22-147.75.109.163:49494.service: Deactivated successfully. Feb 14 00:21:01.724837 systemd[1]: session-3.scope: Deactivated successfully. Feb 14 00:21:01.730127 systemd-logind[1482]: New session 5 of user core. Feb 14 00:21:01.741794 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 14 00:21:01.743406 systemd-logind[1482]: Session 3 logged out. Waiting for processes to exit. Feb 14 00:21:01.748530 systemd-logind[1482]: Removed session 3. Feb 14 00:21:02.634863 coreos-metadata[1473]: Feb 14 00:21:02.634 WARN failed to locate config-drive, using the metadata service API instead Feb 14 00:21:02.660907 coreos-metadata[1473]: Feb 14 00:21:02.660 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Feb 14 00:21:02.669406 coreos-metadata[1473]: Feb 14 00:21:02.669 INFO Fetch failed with 404: resource not found Feb 14 00:21:02.669406 coreos-metadata[1473]: Feb 14 00:21:02.669 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 14 00:21:02.670202 coreos-metadata[1473]: Feb 14 00:21:02.670 INFO Fetch successful Feb 14 00:21:02.670387 coreos-metadata[1473]: Feb 14 00:21:02.670 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 14 00:21:02.682859 coreos-metadata[1473]: Feb 14 00:21:02.682 INFO Fetch successful Feb 14 00:21:02.683107 coreos-metadata[1473]: Feb 14 00:21:02.683 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Feb 14 00:21:02.696577 coreos-metadata[1473]: Feb 14 00:21:02.696 INFO Fetch successful Feb 14 00:21:02.696803 coreos-metadata[1473]: Feb 14 00:21:02.696 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 14 00:21:02.712799 coreos-metadata[1473]: Feb 14 00:21:02.712 INFO Fetch successful Feb 14 00:21:02.713083 coreos-metadata[1473]: Feb 14 00:21:02.713 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 14 00:21:02.734751 coreos-metadata[1473]: Feb 14 00:21:02.734 INFO Fetch successful Feb 14 00:21:02.774470 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 14 00:21:02.775541 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 14 00:21:03.167148 coreos-metadata[1540]: Feb 14 00:21:03.167 WARN failed to locate config-drive, using the metadata service API instead Feb 14 00:21:03.190823 coreos-metadata[1540]: Feb 14 00:21:03.190 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 14 00:21:03.217833 coreos-metadata[1540]: Feb 14 00:21:03.217 INFO Fetch successful Feb 14 00:21:03.218145 coreos-metadata[1540]: Feb 14 00:21:03.217 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 14 00:21:03.247649 coreos-metadata[1540]: Feb 14 00:21:03.247 INFO Fetch successful Feb 14 00:21:03.254566 unknown[1540]: wrote ssh authorized keys file for user: core Feb 14 00:21:03.281501 update-ssh-keys[1674]: Updated "/home/core/.ssh/authorized_keys" Feb 14 00:21:03.282005 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 14 00:21:03.284529 systemd[1]: Finished sshkeys.service. Feb 14 00:21:03.287276 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 14 00:21:03.290490 systemd[1]: Startup finished in 1.496s (kernel) + 20.732s (initrd) + 11.852s (userspace) = 34.082s.
Feb 14 00:21:08.464624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 14 00:21:08.481767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:21:08.709707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:21:08.724162 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 00:21:08.787778 kubelet[1686]: E0214 00:21:08.787681 1686 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 00:21:08.792003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 00:21:08.792270 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 00:21:11.967702 systemd[1]: Started sshd@3-10.230.16.158:22-147.75.109.163:60472.service - OpenSSH per-connection server daemon (147.75.109.163:60472). Feb 14 00:21:12.865880 sshd[1694]: Accepted publickey for core from 147.75.109.163 port 60472 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:21:12.868876 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:21:12.877752 systemd-logind[1482]: New session 6 of user core. Feb 14 00:21:12.893637 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 14 00:21:13.486735 sshd[1694]: pam_unix(sshd:session): session closed for user core Feb 14 00:21:13.491924 systemd[1]: sshd@3-10.230.16.158:22-147.75.109.163:60472.service: Deactivated successfully. Feb 14 00:21:13.494301 systemd[1]: session-6.scope: Deactivated successfully. Feb 14 00:21:13.495775 systemd-logind[1482]: Session 6 logged out. 
Waiting for processes to exit. Feb 14 00:21:13.497195 systemd-logind[1482]: Removed session 6. Feb 14 00:21:13.640218 systemd[1]: Started sshd@4-10.230.16.158:22-147.75.109.163:60482.service - OpenSSH per-connection server daemon (147.75.109.163:60482). Feb 14 00:21:14.546450 sshd[1701]: Accepted publickey for core from 147.75.109.163 port 60482 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:21:14.548666 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:21:14.555163 systemd-logind[1482]: New session 7 of user core. Feb 14 00:21:14.566555 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 14 00:21:15.160492 sshd[1701]: pam_unix(sshd:session): session closed for user core Feb 14 00:21:15.165175 systemd[1]: sshd@4-10.230.16.158:22-147.75.109.163:60482.service: Deactivated successfully. Feb 14 00:21:15.167235 systemd[1]: session-7.scope: Deactivated successfully. Feb 14 00:21:15.168211 systemd-logind[1482]: Session 7 logged out. Waiting for processes to exit. Feb 14 00:21:15.169527 systemd-logind[1482]: Removed session 7. Feb 14 00:21:15.323826 systemd[1]: Started sshd@5-10.230.16.158:22-147.75.109.163:60498.service - OpenSSH per-connection server daemon (147.75.109.163:60498). Feb 14 00:21:16.210340 sshd[1708]: Accepted publickey for core from 147.75.109.163 port 60498 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:21:16.212533 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:21:16.221211 systemd-logind[1482]: New session 8 of user core. Feb 14 00:21:16.231649 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 14 00:21:16.834128 sshd[1708]: pam_unix(sshd:session): session closed for user core Feb 14 00:21:16.839311 systemd[1]: sshd@5-10.230.16.158:22-147.75.109.163:60498.service: Deactivated successfully. Feb 14 00:21:16.841571 systemd[1]: session-8.scope: Deactivated successfully. 
Feb 14 00:21:16.842602 systemd-logind[1482]: Session 8 logged out. Waiting for processes to exit. Feb 14 00:21:16.843906 systemd-logind[1482]: Removed session 8. Feb 14 00:21:16.991731 systemd[1]: Started sshd@6-10.230.16.158:22-147.75.109.163:60512.service - OpenSSH per-connection server daemon (147.75.109.163:60512). Feb 14 00:21:17.875980 sshd[1715]: Accepted publickey for core from 147.75.109.163 port 60512 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:21:17.878141 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:21:17.886046 systemd-logind[1482]: New session 9 of user core. Feb 14 00:21:17.892577 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 14 00:21:18.364401 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 14 00:21:18.364886 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 00:21:18.382751 sudo[1718]: pam_unix(sudo:session): session closed for user root Feb 14 00:21:18.526555 sshd[1715]: pam_unix(sshd:session): session closed for user core Feb 14 00:21:18.531508 systemd[1]: sshd@6-10.230.16.158:22-147.75.109.163:60512.service: Deactivated successfully. Feb 14 00:21:18.533667 systemd[1]: session-9.scope: Deactivated successfully. Feb 14 00:21:18.534637 systemd-logind[1482]: Session 9 logged out. Waiting for processes to exit. Feb 14 00:21:18.536259 systemd-logind[1482]: Removed session 9. Feb 14 00:21:18.689922 systemd[1]: Started sshd@7-10.230.16.158:22-147.75.109.163:60514.service - OpenSSH per-connection server daemon (147.75.109.163:60514). Feb 14 00:21:19.042925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 14 00:21:19.058654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:21:19.268978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 14 00:21:19.278876 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 00:21:19.359674 kubelet[1733]: E0214 00:21:19.359468 1733 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 00:21:19.363168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 00:21:19.363444 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 00:21:19.584315 sshd[1723]: Accepted publickey for core from 147.75.109.163 port 60514 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:21:19.586365 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:21:19.594406 systemd-logind[1482]: New session 10 of user core. Feb 14 00:21:19.604599 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 14 00:21:20.066448 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 14 00:21:20.066990 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 00:21:20.072289 sudo[1743]: pam_unix(sudo:session): session closed for user root Feb 14 00:21:20.080178 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 14 00:21:20.080664 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 00:21:20.105955 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 14 00:21:20.107724 auditctl[1746]: No rules Feb 14 00:21:20.108263 systemd[1]: audit-rules.service: Deactivated successfully. 
Feb 14 00:21:20.108619 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 14 00:21:20.112064 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 14 00:21:20.159255 augenrules[1764]: No rules Feb 14 00:21:20.160155 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 14 00:21:20.161305 sudo[1742]: pam_unix(sudo:session): session closed for user root Feb 14 00:21:20.306818 sshd[1723]: pam_unix(sshd:session): session closed for user core Feb 14 00:21:20.310957 systemd-logind[1482]: Session 10 logged out. Waiting for processes to exit. Feb 14 00:21:20.312153 systemd[1]: sshd@7-10.230.16.158:22-147.75.109.163:60514.service: Deactivated successfully. Feb 14 00:21:20.314571 systemd[1]: session-10.scope: Deactivated successfully. Feb 14 00:21:20.316773 systemd-logind[1482]: Removed session 10. Feb 14 00:21:20.458897 systemd[1]: Started sshd@8-10.230.16.158:22-147.75.109.163:52108.service - OpenSSH per-connection server daemon (147.75.109.163:52108). Feb 14 00:21:21.361530 sshd[1772]: Accepted publickey for core from 147.75.109.163 port 52108 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:21:21.363845 sshd[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:21:21.373111 systemd-logind[1482]: New session 11 of user core. Feb 14 00:21:21.376634 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 14 00:21:21.841195 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 14 00:21:21.842394 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 00:21:22.321728 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Feb 14 00:21:22.335084 (dockerd)[1791]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 14 00:21:22.805956 dockerd[1791]: time="2025-02-14T00:21:22.805841181Z" level=info msg="Starting up" Feb 14 00:21:22.964563 dockerd[1791]: time="2025-02-14T00:21:22.964164156Z" level=info msg="Loading containers: start." Feb 14 00:21:23.111816 kernel: Initializing XFRM netlink socket Feb 14 00:21:23.211170 systemd-networkd[1430]: docker0: Link UP Feb 14 00:21:23.232452 dockerd[1791]: time="2025-02-14T00:21:23.232222266Z" level=info msg="Loading containers: done." Feb 14 00:21:23.255132 dockerd[1791]: time="2025-02-14T00:21:23.254220123Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 14 00:21:23.255083 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2437695303-merged.mount: Deactivated successfully. Feb 14 00:21:23.256256 dockerd[1791]: time="2025-02-14T00:21:23.255755655Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 14 00:21:23.256256 dockerd[1791]: time="2025-02-14T00:21:23.255929984Z" level=info msg="Daemon has completed initialization" Feb 14 00:21:23.291929 dockerd[1791]: time="2025-02-14T00:21:23.291819940Z" level=info msg="API listen on /run/docker.sock" Feb 14 00:21:23.292679 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 14 00:21:24.747011 containerd[1503]: time="2025-02-14T00:21:24.746895600Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 14 00:21:25.361541 systemd[1]: Started sshd@9-10.230.16.158:22-202.72.235.223:43776.service - OpenSSH per-connection server daemon (202.72.235.223:43776). 
Feb 14 00:21:25.753019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559258918.mount: Deactivated successfully. Feb 14 00:21:26.552784 sshd[1940]: Invalid user rich from 202.72.235.223 port 43776 Feb 14 00:21:26.772432 sshd[1940]: Received disconnect from 202.72.235.223 port 43776:11: Bye Bye [preauth] Feb 14 00:21:26.772432 sshd[1940]: Disconnected from invalid user rich 202.72.235.223 port 43776 [preauth] Feb 14 00:21:26.775394 systemd[1]: sshd@9-10.230.16.158:22-202.72.235.223:43776.service: Deactivated successfully. Feb 14 00:21:27.977173 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 14 00:21:28.665093 containerd[1503]: time="2025-02-14T00:21:28.665022432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:28.666604 containerd[1503]: time="2025-02-14T00:21:28.666545387Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678222" Feb 14 00:21:28.667438 containerd[1503]: time="2025-02-14T00:21:28.667363464Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:28.671656 containerd[1503]: time="2025-02-14T00:21:28.671620273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:28.673961 containerd[1503]: time="2025-02-14T00:21:28.673308604Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 3.926287759s" Feb 14 00:21:28.673961 containerd[1503]: time="2025-02-14T00:21:28.673403950Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 14 00:21:28.705989 containerd[1503]: time="2025-02-14T00:21:28.705936506Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 14 00:21:29.534074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 14 00:21:29.543689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:21:29.700407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:21:29.712948 (kubelet)[2013]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 00:21:29.773865 kubelet[2013]: E0214 00:21:29.773788 2013 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 00:21:29.776795 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 00:21:29.777073 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 14 00:21:31.736388 containerd[1503]: time="2025-02-14T00:21:31.736190490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:31.737675 containerd[1503]: time="2025-02-14T00:21:31.737591263Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611553" Feb 14 00:21:31.738504 containerd[1503]: time="2025-02-14T00:21:31.738431908Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:31.742580 containerd[1503]: time="2025-02-14T00:21:31.742520559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:31.747376 containerd[1503]: time="2025-02-14T00:21:31.746193813Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 3.039961126s" Feb 14 00:21:31.747376 containerd[1503]: time="2025-02-14T00:21:31.746272162Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 14 00:21:31.780071 containerd[1503]: time="2025-02-14T00:21:31.780020573Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 14 00:21:33.442198 containerd[1503]: time="2025-02-14T00:21:33.442043935Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:33.443877 containerd[1503]: time="2025-02-14T00:21:33.443831832Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782138" Feb 14 00:21:33.444836 containerd[1503]: time="2025-02-14T00:21:33.444421640Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:33.448542 containerd[1503]: time="2025-02-14T00:21:33.448463789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:33.450695 containerd[1503]: time="2025-02-14T00:21:33.450386578Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.670308065s" Feb 14 00:21:33.450695 containerd[1503]: time="2025-02-14T00:21:33.450455799Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 14 00:21:33.480655 containerd[1503]: time="2025-02-14T00:21:33.480537094Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 14 00:21:35.305026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount423381900.mount: Deactivated successfully. 
Feb 14 00:21:35.953622 containerd[1503]: time="2025-02-14T00:21:35.953503852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:35.954873 containerd[1503]: time="2025-02-14T00:21:35.954686585Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057866" Feb 14 00:21:35.955681 containerd[1503]: time="2025-02-14T00:21:35.955608514Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:35.958542 containerd[1503]: time="2025-02-14T00:21:35.958460884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:35.959782 containerd[1503]: time="2025-02-14T00:21:35.959582911Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.478976194s" Feb 14 00:21:35.959782 containerd[1503]: time="2025-02-14T00:21:35.959631733Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 14 00:21:35.992886 containerd[1503]: time="2025-02-14T00:21:35.992826398Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 14 00:21:36.633817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266236041.mount: Deactivated successfully. 
Feb 14 00:21:38.090809 containerd[1503]: time="2025-02-14T00:21:38.090525592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:38.092157 containerd[1503]: time="2025-02-14T00:21:38.092092743Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Feb 14 00:21:38.092972 containerd[1503]: time="2025-02-14T00:21:38.092880246Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:38.100082 containerd[1503]: time="2025-02-14T00:21:38.099988195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:38.101632 containerd[1503]: time="2025-02-14T00:21:38.100999401Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.107850781s" Feb 14 00:21:38.101632 containerd[1503]: time="2025-02-14T00:21:38.101047500Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 14 00:21:38.135456 containerd[1503]: time="2025-02-14T00:21:38.135326548Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 14 00:21:38.942124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488180026.mount: Deactivated successfully. 
Feb 14 00:21:38.948076 containerd[1503]: time="2025-02-14T00:21:38.948010341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:38.949196 containerd[1503]: time="2025-02-14T00:21:38.949150730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Feb 14 00:21:38.950021 containerd[1503]: time="2025-02-14T00:21:38.949947868Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:38.953106 containerd[1503]: time="2025-02-14T00:21:38.953034856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:38.954604 containerd[1503]: time="2025-02-14T00:21:38.954380343Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 818.961474ms" Feb 14 00:21:38.954604 containerd[1503]: time="2025-02-14T00:21:38.954423912Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 14 00:21:38.984860 containerd[1503]: time="2025-02-14T00:21:38.984809435Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 14 00:21:39.653725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3683278238.mount: Deactivated successfully. Feb 14 00:21:39.783888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Feb 14 00:21:39.794649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:21:40.245575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:21:40.255794 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 00:21:40.406685 kubelet[2134]: E0214 00:21:40.406218 2134 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 00:21:40.411383 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 00:21:40.411834 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 00:21:41.223469 update_engine[1486]: I20250214 00:21:41.222062 1486 update_attempter.cc:509] Updating boot flags... 
Feb 14 00:21:41.298418 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2171) Feb 14 00:21:41.382407 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2175) Feb 14 00:21:41.481594 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2175) Feb 14 00:21:44.943383 containerd[1503]: time="2025-02-14T00:21:44.942756345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:44.944940 containerd[1503]: time="2025-02-14T00:21:44.944850228Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Feb 14 00:21:44.946137 containerd[1503]: time="2025-02-14T00:21:44.946064249Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:44.950389 containerd[1503]: time="2025-02-14T00:21:44.950224273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:21:44.952432 containerd[1503]: time="2025-02-14T00:21:44.952018531Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 5.966950633s" Feb 14 00:21:44.952432 containerd[1503]: time="2025-02-14T00:21:44.952100728Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 14 00:21:49.925218 
systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:21:49.945102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:21:49.975724 systemd[1]: Reloading requested from client PID 2245 ('systemctl') (unit session-11.scope)... Feb 14 00:21:49.976046 systemd[1]: Reloading... Feb 14 00:21:50.167397 zram_generator::config[2284]: No configuration found. Feb 14 00:21:50.328392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 00:21:50.437775 systemd[1]: Reloading finished in 460 ms. Feb 14 00:21:50.505086 systemd[1]: kubelet.service: Deactivated successfully. Feb 14 00:21:50.505716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:21:50.512730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:21:50.678973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:21:50.691074 (kubelet)[2352]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 14 00:21:50.789623 kubelet[2352]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 00:21:50.789623 kubelet[2352]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 14 00:21:50.789623 kubelet[2352]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 14 00:21:50.790976 kubelet[2352]: I0214 00:21:50.790876 2352 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 00:21:51.389230 kubelet[2352]: I0214 00:21:51.388408 2352 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 14 00:21:51.389230 kubelet[2352]: I0214 00:21:51.388466 2352 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 14 00:21:51.389230 kubelet[2352]: I0214 00:21:51.389081 2352 server.go:927] "Client rotation is on, will bootstrap in background" Feb 14 00:21:51.418578 kubelet[2352]: I0214 00:21:51.418505 2352 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 14 00:21:51.419777 kubelet[2352]: E0214 00:21:51.419705 2352 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.16.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:51.439162 kubelet[2352]: I0214 00:21:51.439102 2352 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 14 00:21:51.439647 kubelet[2352]: I0214 00:21:51.439579 2352 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 14 00:21:51.441317 kubelet[2352]: I0214 00:21:51.439644 2352 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-skbpq.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 14 00:21:51.441317 kubelet[2352]: I0214 00:21:51.441303 2352 topology_manager.go:138] "Creating topology manager with none policy" 
Feb 14 00:21:51.441317 kubelet[2352]: I0214 00:21:51.441323 2352 container_manager_linux.go:301] "Creating device plugin manager" Feb 14 00:21:51.441747 kubelet[2352]: I0214 00:21:51.441606 2352 state_mem.go:36] "Initialized new in-memory state store" Feb 14 00:21:51.442657 kubelet[2352]: I0214 00:21:51.442632 2352 kubelet.go:400] "Attempting to sync node with API server" Feb 14 00:21:51.442760 kubelet[2352]: I0214 00:21:51.442662 2352 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 14 00:21:51.443693 kubelet[2352]: I0214 00:21:51.443405 2352 kubelet.go:312] "Adding apiserver pod source" Feb 14 00:21:51.443693 kubelet[2352]: I0214 00:21:51.443455 2352 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 14 00:21:51.448684 kubelet[2352]: W0214 00:21:51.448573 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.16.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-skbpq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:51.448793 kubelet[2352]: E0214 00:21:51.448684 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.16.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-skbpq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:51.448857 kubelet[2352]: W0214 00:21:51.448792 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.16.158:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:51.448928 kubelet[2352]: E0214 00:21:51.448865 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.230.16.158:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:51.449918 kubelet[2352]: I0214 00:21:51.449468 2352 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 14 00:21:51.451159 kubelet[2352]: I0214 00:21:51.451107 2352 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 14 00:21:51.451251 kubelet[2352]: W0214 00:21:51.451230 2352 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 14 00:21:51.453826 kubelet[2352]: I0214 00:21:51.453779 2352 server.go:1264] "Started kubelet" Feb 14 00:21:51.456454 kubelet[2352]: I0214 00:21:51.456414 2352 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 14 00:21:51.463767 kubelet[2352]: I0214 00:21:51.462269 2352 server.go:455] "Adding debug handlers to kubelet server" Feb 14 00:21:51.467793 kubelet[2352]: I0214 00:21:51.467154 2352 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 14 00:21:51.467793 kubelet[2352]: I0214 00:21:51.467663 2352 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 14 00:21:51.472418 kubelet[2352]: I0214 00:21:51.470977 2352 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 14 00:21:51.472418 kubelet[2352]: E0214 00:21:51.468840 2352 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.16.158:6443/api/v1/namespaces/default/events\": dial tcp 10.230.16.158:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-skbpq.gb1.brightbox.com.1823eb3912a6f9bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-skbpq.gb1.brightbox.com,UID:srv-skbpq.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-skbpq.gb1.brightbox.com,},FirstTimestamp:2025-02-14 00:21:51.453739452 +0000 UTC m=+0.754146415,LastTimestamp:2025-02-14 00:21:51.453739452 +0000 UTC m=+0.754146415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-skbpq.gb1.brightbox.com,}" Feb 14 00:21:51.479253 kubelet[2352]: I0214 00:21:51.479222 2352 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 14 00:21:51.482585 kubelet[2352]: I0214 00:21:51.482557 2352 factory.go:221] Registration of the systemd container factory successfully Feb 14 00:21:51.482836 kubelet[2352]: I0214 00:21:51.482804 2352 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 14 00:21:51.483418 kubelet[2352]: E0214 00:21:51.483382 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.16.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-skbpq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.16.158:6443: connect: connection refused" interval="200ms" Feb 14 00:21:51.486025 kubelet[2352]: I0214 00:21:51.485998 2352 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 14 00:21:51.486758 kubelet[2352]: W0214 00:21:51.486710 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.16.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:51.486928 kubelet[2352]: E0214 00:21:51.486902 2352 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.16.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:51.487318 kubelet[2352]: I0214 00:21:51.487292 2352 factory.go:221] Registration of the containerd container factory successfully Feb 14 00:21:51.488023 kubelet[2352]: I0214 00:21:51.487993 2352 reconciler.go:26] "Reconciler: start to sync state" Feb 14 00:21:51.497510 kubelet[2352]: E0214 00:21:51.497468 2352 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 14 00:21:51.503102 kubelet[2352]: I0214 00:21:51.503032 2352 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 00:21:51.505801 kubelet[2352]: I0214 00:21:51.505762 2352 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 14 00:21:51.505876 kubelet[2352]: I0214 00:21:51.505831 2352 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 14 00:21:51.505876 kubelet[2352]: I0214 00:21:51.505870 2352 kubelet.go:2337] "Starting kubelet main sync loop" Feb 14 00:21:51.505996 kubelet[2352]: E0214 00:21:51.505964 2352 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 00:21:51.516918 kubelet[2352]: W0214 00:21:51.516716 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.16.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:51.516918 kubelet[2352]: E0214 00:21:51.516841 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: 
Get "https://10.230.16.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:51.529743 kubelet[2352]: I0214 00:21:51.529699 2352 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 14 00:21:51.529743 kubelet[2352]: I0214 00:21:51.529733 2352 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 14 00:21:51.529991 kubelet[2352]: I0214 00:21:51.529775 2352 state_mem.go:36] "Initialized new in-memory state store" Feb 14 00:21:51.532104 kubelet[2352]: I0214 00:21:51.532079 2352 policy_none.go:49] "None policy: Start" Feb 14 00:21:51.533381 kubelet[2352]: I0214 00:21:51.533218 2352 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 14 00:21:51.533381 kubelet[2352]: I0214 00:21:51.533270 2352 state_mem.go:35] "Initializing new in-memory state store" Feb 14 00:21:51.542827 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 14 00:21:51.553672 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 14 00:21:51.558721 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 14 00:21:51.574193 kubelet[2352]: I0214 00:21:51.573492 2352 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 00:21:51.574193 kubelet[2352]: I0214 00:21:51.573846 2352 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 00:21:51.574193 kubelet[2352]: I0214 00:21:51.574091 2352 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 00:21:51.577441 kubelet[2352]: E0214 00:21:51.577391 2352 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-skbpq.gb1.brightbox.com\" not found" Feb 14 00:21:51.582945 kubelet[2352]: I0214 00:21:51.582869 2352 kubelet_node_status.go:73] "Attempting to register node" node="srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.583469 kubelet[2352]: E0214 00:21:51.583428 2352 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.16.158:6443/api/v1/nodes\": dial tcp 10.230.16.158:6443: connect: connection refused" node="srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.606185 kubelet[2352]: I0214 00:21:51.606099 2352 topology_manager.go:215] "Topology Admit Handler" podUID="6c883bab5539d68e06502f65b344e057" podNamespace="kube-system" podName="kube-apiserver-srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.609056 kubelet[2352]: I0214 00:21:51.608796 2352 topology_manager.go:215] "Topology Admit Handler" podUID="cd437561a914661ab7ee92bd5f49ea11" podNamespace="kube-system" podName="kube-controller-manager-srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.611149 kubelet[2352]: I0214 00:21:51.610892 2352 topology_manager.go:215] "Topology Admit Handler" podUID="c007d76a2c9cda8db0f544a5df0dbb8c" podNamespace="kube-system" podName="kube-scheduler-srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.624631 systemd[1]: Created slice kubepods-burstable-podcd437561a914661ab7ee92bd5f49ea11.slice - libcontainer container 
kubepods-burstable-podcd437561a914661ab7ee92bd5f49ea11.slice. Feb 14 00:21:51.642436 systemd[1]: Created slice kubepods-burstable-pod6c883bab5539d68e06502f65b344e057.slice - libcontainer container kubepods-burstable-pod6c883bab5539d68e06502f65b344e057.slice. Feb 14 00:21:51.655460 systemd[1]: Created slice kubepods-burstable-podc007d76a2c9cda8db0f544a5df0dbb8c.slice - libcontainer container kubepods-burstable-podc007d76a2c9cda8db0f544a5df0dbb8c.slice. Feb 14 00:21:51.684610 kubelet[2352]: E0214 00:21:51.684514 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.16.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-skbpq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.16.158:6443: connect: connection refused" interval="400ms" Feb 14 00:21:51.788707 kubelet[2352]: I0214 00:21:51.788328 2352 kubelet_node_status.go:73] "Attempting to register node" node="srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.789084 kubelet[2352]: E0214 00:21:51.788936 2352 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.16.158:6443/api/v1/nodes\": dial tcp 10.230.16.158:6443: connect: connection refused" node="srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.790661 kubelet[2352]: I0214 00:21:51.790216 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd437561a914661ab7ee92bd5f49ea11-ca-certs\") pod \"kube-controller-manager-srv-skbpq.gb1.brightbox.com\" (UID: \"cd437561a914661ab7ee92bd5f49ea11\") " pod="kube-system/kube-controller-manager-srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.790661 kubelet[2352]: I0214 00:21:51.790279 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd437561a914661ab7ee92bd5f49ea11-k8s-certs\") pod \"kube-controller-manager-srv-skbpq.gb1.brightbox.com\" (UID: 
\"cd437561a914661ab7ee92bd5f49ea11\") " pod="kube-system/kube-controller-manager-srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.790661 kubelet[2352]: I0214 00:21:51.790314 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd437561a914661ab7ee92bd5f49ea11-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-skbpq.gb1.brightbox.com\" (UID: \"cd437561a914661ab7ee92bd5f49ea11\") " pod="kube-system/kube-controller-manager-srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.790661 kubelet[2352]: I0214 00:21:51.790374 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c883bab5539d68e06502f65b344e057-ca-certs\") pod \"kube-apiserver-srv-skbpq.gb1.brightbox.com\" (UID: \"6c883bab5539d68e06502f65b344e057\") " pod="kube-system/kube-apiserver-srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.790661 kubelet[2352]: I0214 00:21:51.790412 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c883bab5539d68e06502f65b344e057-k8s-certs\") pod \"kube-apiserver-srv-skbpq.gb1.brightbox.com\" (UID: \"6c883bab5539d68e06502f65b344e057\") " pod="kube-system/kube-apiserver-srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.791439 kubelet[2352]: I0214 00:21:51.790440 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c883bab5539d68e06502f65b344e057-usr-share-ca-certificates\") pod \"kube-apiserver-srv-skbpq.gb1.brightbox.com\" (UID: \"6c883bab5539d68e06502f65b344e057\") " pod="kube-system/kube-apiserver-srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.791439 kubelet[2352]: I0214 00:21:51.790471 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cd437561a914661ab7ee92bd5f49ea11-flexvolume-dir\") pod \"kube-controller-manager-srv-skbpq.gb1.brightbox.com\" (UID: \"cd437561a914661ab7ee92bd5f49ea11\") " pod="kube-system/kube-controller-manager-srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.791439 kubelet[2352]: I0214 00:21:51.790500 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd437561a914661ab7ee92bd5f49ea11-kubeconfig\") pod \"kube-controller-manager-srv-skbpq.gb1.brightbox.com\" (UID: \"cd437561a914661ab7ee92bd5f49ea11\") " pod="kube-system/kube-controller-manager-srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.791439 kubelet[2352]: I0214 00:21:51.790528 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c007d76a2c9cda8db0f544a5df0dbb8c-kubeconfig\") pod \"kube-scheduler-srv-skbpq.gb1.brightbox.com\" (UID: \"c007d76a2c9cda8db0f544a5df0dbb8c\") " pod="kube-system/kube-scheduler-srv-skbpq.gb1.brightbox.com" Feb 14 00:21:51.940736 containerd[1503]: time="2025-02-14T00:21:51.940386776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-skbpq.gb1.brightbox.com,Uid:cd437561a914661ab7ee92bd5f49ea11,Namespace:kube-system,Attempt:0,}" Feb 14 00:21:51.955973 containerd[1503]: time="2025-02-14T00:21:51.955813692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-skbpq.gb1.brightbox.com,Uid:6c883bab5539d68e06502f65b344e057,Namespace:kube-system,Attempt:0,}" Feb 14 00:21:51.960879 containerd[1503]: time="2025-02-14T00:21:51.960550990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-skbpq.gb1.brightbox.com,Uid:c007d76a2c9cda8db0f544a5df0dbb8c,Namespace:kube-system,Attempt:0,}" Feb 14 00:21:52.085722 kubelet[2352]: E0214 00:21:52.085628 2352 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.16.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-skbpq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.16.158:6443: connect: connection refused" interval="800ms" Feb 14 00:21:52.193406 kubelet[2352]: I0214 00:21:52.193181 2352 kubelet_node_status.go:73] "Attempting to register node" node="srv-skbpq.gb1.brightbox.com" Feb 14 00:21:52.194114 kubelet[2352]: E0214 00:21:52.193733 2352 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.16.158:6443/api/v1/nodes\": dial tcp 10.230.16.158:6443: connect: connection refused" node="srv-skbpq.gb1.brightbox.com" Feb 14 00:21:52.535166 kubelet[2352]: W0214 00:21:52.535009 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.16.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:52.535166 kubelet[2352]: E0214 00:21:52.535103 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.16.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:52.540863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount739758008.mount: Deactivated successfully. 
Feb 14 00:21:52.550220 containerd[1503]: time="2025-02-14T00:21:52.550165376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:21:52.551780 containerd[1503]: time="2025-02-14T00:21:52.551732578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 14 00:21:52.552002 containerd[1503]: time="2025-02-14T00:21:52.551969956Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:21:52.553181 containerd[1503]: time="2025-02-14T00:21:52.553140453Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 14 00:21:52.557332 containerd[1503]: time="2025-02-14T00:21:52.557264730Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:21:52.558551 containerd[1503]: time="2025-02-14T00:21:52.558391675Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 14 00:21:52.559362 containerd[1503]: time="2025-02-14T00:21:52.559258379Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:21:52.564794 containerd[1503]: time="2025-02-14T00:21:52.564722556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:21:52.567377 
containerd[1503]: time="2025-02-14T00:21:52.567016341Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 626.485411ms" Feb 14 00:21:52.569456 containerd[1503]: time="2025-02-14T00:21:52.569330930Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 608.694716ms" Feb 14 00:21:52.572406 containerd[1503]: time="2025-02-14T00:21:52.572328780Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.370112ms" Feb 14 00:21:52.575669 kubelet[2352]: W0214 00:21:52.575554 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.16.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-skbpq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:52.575780 kubelet[2352]: E0214 00:21:52.575691 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.16.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-skbpq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:52.707321 kubelet[2352]: W0214 00:21:52.707171 2352 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.16.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:52.707321 kubelet[2352]: E0214 00:21:52.707253 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.16.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:52.845532 containerd[1503]: time="2025-02-14T00:21:52.845024084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:21:52.847598 containerd[1503]: time="2025-02-14T00:21:52.845930037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:21:52.849370 containerd[1503]: time="2025-02-14T00:21:52.848061982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:21:52.849370 containerd[1503]: time="2025-02-14T00:21:52.848697631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:21:52.858630 containerd[1503]: time="2025-02-14T00:21:52.856108378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:21:52.858630 containerd[1503]: time="2025-02-14T00:21:52.856206547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:21:52.858630 containerd[1503]: time="2025-02-14T00:21:52.856227112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:21:52.858630 containerd[1503]: time="2025-02-14T00:21:52.856472777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:21:52.859370 containerd[1503]: time="2025-02-14T00:21:52.859140227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:21:52.862076 containerd[1503]: time="2025-02-14T00:21:52.859241583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:21:52.862076 containerd[1503]: time="2025-02-14T00:21:52.859274385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:21:52.862076 containerd[1503]: time="2025-02-14T00:21:52.859732123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:21:52.892493 kubelet[2352]: E0214 00:21:52.892401 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.16.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-skbpq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.16.158:6443: connect: connection refused" interval="1.6s" Feb 14 00:21:52.895598 systemd[1]: Started cri-containerd-0cbc0de442db806ae0f18e349048cfe7f5ee64ed89848a6ba989bc80188a0cc3.scope - libcontainer container 0cbc0de442db806ae0f18e349048cfe7f5ee64ed89848a6ba989bc80188a0cc3. Feb 14 00:21:52.904758 systemd[1]: Started cri-containerd-c780081fe5ee3fbaa51c08815b44f1e64ab816a6ca119aaf1a32ca64539728e2.scope - libcontainer container c780081fe5ee3fbaa51c08815b44f1e64ab816a6ca119aaf1a32ca64539728e2. 
Feb 14 00:21:52.927611 systemd[1]: Started cri-containerd-1da3ddd116d1168dc788289f6221a8dccb85da5a7563b0a1bbfae59a54fec4db.scope - libcontainer container 1da3ddd116d1168dc788289f6221a8dccb85da5a7563b0a1bbfae59a54fec4db. Feb 14 00:21:53.005334 kubelet[2352]: I0214 00:21:53.004370 2352 kubelet_node_status.go:73] "Attempting to register node" node="srv-skbpq.gb1.brightbox.com" Feb 14 00:21:53.006923 kubelet[2352]: E0214 00:21:53.006278 2352 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.16.158:6443/api/v1/nodes\": dial tcp 10.230.16.158:6443: connect: connection refused" node="srv-skbpq.gb1.brightbox.com" Feb 14 00:21:53.006923 kubelet[2352]: W0214 00:21:53.006645 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.16.158:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:53.006923 kubelet[2352]: E0214 00:21:53.006729 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.16.158:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.16.158:6443: connect: connection refused Feb 14 00:21:53.041147 containerd[1503]: time="2025-02-14T00:21:53.040959259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-skbpq.gb1.brightbox.com,Uid:cd437561a914661ab7ee92bd5f49ea11,Namespace:kube-system,Attempt:0,} returns sandbox id \"c780081fe5ee3fbaa51c08815b44f1e64ab816a6ca119aaf1a32ca64539728e2\"" Feb 14 00:21:53.044889 containerd[1503]: time="2025-02-14T00:21:53.043891712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-skbpq.gb1.brightbox.com,Uid:6c883bab5539d68e06502f65b344e057,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cbc0de442db806ae0f18e349048cfe7f5ee64ed89848a6ba989bc80188a0cc3\"" Feb 14 00:21:53.057611 containerd[1503]: 
time="2025-02-14T00:21:53.057544590Z" level=info msg="CreateContainer within sandbox \"c780081fe5ee3fbaa51c08815b44f1e64ab816a6ca119aaf1a32ca64539728e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 14 00:21:53.058064 containerd[1503]: time="2025-02-14T00:21:53.057765122Z" level=info msg="CreateContainer within sandbox \"0cbc0de442db806ae0f18e349048cfe7f5ee64ed89848a6ba989bc80188a0cc3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 14 00:21:53.079049 containerd[1503]: time="2025-02-14T00:21:53.078983921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-skbpq.gb1.brightbox.com,Uid:c007d76a2c9cda8db0f544a5df0dbb8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1da3ddd116d1168dc788289f6221a8dccb85da5a7563b0a1bbfae59a54fec4db\"" Feb 14 00:21:53.083497 containerd[1503]: time="2025-02-14T00:21:53.083419489Z" level=info msg="CreateContainer within sandbox \"1da3ddd116d1168dc788289f6221a8dccb85da5a7563b0a1bbfae59a54fec4db\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 14 00:21:53.087390 containerd[1503]: time="2025-02-14T00:21:53.087315319Z" level=info msg="CreateContainer within sandbox \"0cbc0de442db806ae0f18e349048cfe7f5ee64ed89848a6ba989bc80188a0cc3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"caaea4f6638add0287c20f933374e7d8da6c1d46efcf17d547b9091201e2c595\"" Feb 14 00:21:53.089412 containerd[1503]: time="2025-02-14T00:21:53.088547039Z" level=info msg="StartContainer for \"caaea4f6638add0287c20f933374e7d8da6c1d46efcf17d547b9091201e2c595\"" Feb 14 00:21:53.107006 containerd[1503]: time="2025-02-14T00:21:53.105785471Z" level=info msg="CreateContainer within sandbox \"c780081fe5ee3fbaa51c08815b44f1e64ab816a6ca119aaf1a32ca64539728e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e1fc66f17ff3aca6968b50102f978da8b3df728297486d4256266e94cb3e188e\"" Feb 14 00:21:53.108507 
containerd[1503]: time="2025-02-14T00:21:53.108222817Z" level=info msg="StartContainer for \"e1fc66f17ff3aca6968b50102f978da8b3df728297486d4256266e94cb3e188e\"" Feb 14 00:21:53.112889 containerd[1503]: time="2025-02-14T00:21:53.112786466Z" level=info msg="CreateContainer within sandbox \"1da3ddd116d1168dc788289f6221a8dccb85da5a7563b0a1bbfae59a54fec4db\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a5cf396717f5b2fd18a2e46b001a72f08bd68b261f1f030748caa7d140b0f0e1\"" Feb 14 00:21:53.113807 containerd[1503]: time="2025-02-14T00:21:53.113408418Z" level=info msg="StartContainer for \"a5cf396717f5b2fd18a2e46b001a72f08bd68b261f1f030748caa7d140b0f0e1\"" Feb 14 00:21:53.152550 systemd[1]: Started cri-containerd-caaea4f6638add0287c20f933374e7d8da6c1d46efcf17d547b9091201e2c595.scope - libcontainer container caaea4f6638add0287c20f933374e7d8da6c1d46efcf17d547b9091201e2c595. Feb 14 00:21:53.186721 systemd[1]: Started cri-containerd-a5cf396717f5b2fd18a2e46b001a72f08bd68b261f1f030748caa7d140b0f0e1.scope - libcontainer container a5cf396717f5b2fd18a2e46b001a72f08bd68b261f1f030748caa7d140b0f0e1. Feb 14 00:21:53.201555 systemd[1]: Started cri-containerd-e1fc66f17ff3aca6968b50102f978da8b3df728297486d4256266e94cb3e188e.scope - libcontainer container e1fc66f17ff3aca6968b50102f978da8b3df728297486d4256266e94cb3e188e. 
Feb 14 00:21:53.281183 containerd[1503]: time="2025-02-14T00:21:53.281118483Z" level=info msg="StartContainer for \"caaea4f6638add0287c20f933374e7d8da6c1d46efcf17d547b9091201e2c595\" returns successfully"
Feb 14 00:21:53.296761 containerd[1503]: time="2025-02-14T00:21:53.296686066Z" level=info msg="StartContainer for \"a5cf396717f5b2fd18a2e46b001a72f08bd68b261f1f030748caa7d140b0f0e1\" returns successfully"
Feb 14 00:21:53.324414 containerd[1503]: time="2025-02-14T00:21:53.324300626Z" level=info msg="StartContainer for \"e1fc66f17ff3aca6968b50102f978da8b3df728297486d4256266e94cb3e188e\" returns successfully"
Feb 14 00:21:53.446245 kubelet[2352]: E0214 00:21:53.445308 2352 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.16.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.16.158:6443: connect: connection refused
Feb 14 00:21:54.613004 kubelet[2352]: I0214 00:21:54.612913 2352 kubelet_node_status.go:73] "Attempting to register node" node="srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:56.609165 kubelet[2352]: E0214 00:21:56.609090 2352 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-skbpq.gb1.brightbox.com\" not found" node="srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:56.739383 kubelet[2352]: I0214 00:21:56.739111 2352 kubelet_node_status.go:76] "Successfully registered node" node="srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:56.823617 kubelet[2352]: E0214 00:21:56.823549 2352 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-skbpq.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:57.453633 kubelet[2352]: I0214 00:21:57.453493 2352 apiserver.go:52] "Watching apiserver"
Feb 14 00:21:57.486835 kubelet[2352]: I0214 00:21:57.486776 2352 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 14 00:21:58.692443 systemd[1]: Reloading requested from client PID 2627 ('systemctl') (unit session-11.scope)...
Feb 14 00:21:58.692990 systemd[1]: Reloading...
Feb 14 00:21:58.822388 zram_generator::config[2667]: No configuration found.
Feb 14 00:21:59.003743 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 14 00:21:59.131645 systemd[1]: Reloading finished in 437 ms.
Feb 14 00:21:59.195944 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 14 00:21:59.197215 kubelet[2352]: E0214 00:21:59.195735 2352 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{srv-skbpq.gb1.brightbox.com.1823eb3912a6f9bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-skbpq.gb1.brightbox.com,UID:srv-skbpq.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-skbpq.gb1.brightbox.com,},FirstTimestamp:2025-02-14 00:21:51.453739452 +0000 UTC m=+0.754146415,LastTimestamp:2025-02-14 00:21:51.453739452 +0000 UTC m=+0.754146415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-skbpq.gb1.brightbox.com,}"
Feb 14 00:21:59.212860 systemd[1]: kubelet.service: Deactivated successfully.
Feb 14 00:21:59.213273 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 14 00:21:59.213394 systemd[1]: kubelet.service: Consumed 1.300s CPU time, 114.1M memory peak, 0B memory swap peak.
Feb 14 00:21:59.221681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 14 00:21:59.420589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 14 00:21:59.429835 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 14 00:21:59.528200 kubelet[2729]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 14 00:21:59.528200 kubelet[2729]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 14 00:21:59.528200 kubelet[2729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 14 00:21:59.528982 kubelet[2729]: I0214 00:21:59.528305 2729 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 14 00:21:59.535199 kubelet[2729]: I0214 00:21:59.535150 2729 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 14 00:21:59.535199 kubelet[2729]: I0214 00:21:59.535193 2729 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 14 00:21:59.535467 kubelet[2729]: I0214 00:21:59.535432 2729 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 14 00:21:59.537212 kubelet[2729]: I0214 00:21:59.537179 2729 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 14 00:21:59.538866 kubelet[2729]: I0214 00:21:59.538577 2729 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 14 00:21:59.552416 kubelet[2729]: I0214 00:21:59.552337 2729 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 14 00:21:59.552855 kubelet[2729]: I0214 00:21:59.552791 2729 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 14 00:21:59.553170 kubelet[2729]: I0214 00:21:59.552853 2729 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-skbpq.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 14 00:21:59.553406 kubelet[2729]: I0214 00:21:59.553182 2729 topology_manager.go:138] "Creating topology manager with none policy"
Feb 14 00:21:59.553406 kubelet[2729]: I0214 00:21:59.553200 2729 container_manager_linux.go:301] "Creating device plugin manager"
Feb 14 00:21:59.553406 kubelet[2729]: I0214 00:21:59.553283 2729 state_mem.go:36] "Initialized new in-memory state store"
Feb 14 00:21:59.553577 kubelet[2729]: I0214 00:21:59.553473 2729 kubelet.go:400] "Attempting to sync node with API server"
Feb 14 00:21:59.553577 kubelet[2729]: I0214 00:21:59.553496 2729 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 14 00:21:59.553577 kubelet[2729]: I0214 00:21:59.553537 2729 kubelet.go:312] "Adding apiserver pod source"
Feb 14 00:21:59.553577 kubelet[2729]: I0214 00:21:59.553566 2729 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 14 00:21:59.562328 kubelet[2729]: I0214 00:21:59.562227 2729 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 14 00:21:59.562600 kubelet[2729]: I0214 00:21:59.562576 2729 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 14 00:21:59.563263 kubelet[2729]: I0214 00:21:59.563210 2729 server.go:1264] "Started kubelet"
Feb 14 00:21:59.565672 kubelet[2729]: I0214 00:21:59.565628 2729 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 14 00:21:59.566662 kubelet[2729]: I0214 00:21:59.566580 2729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 14 00:21:59.566977 kubelet[2729]: I0214 00:21:59.566928 2729 server.go:455] "Adding debug handlers to kubelet server"
Feb 14 00:21:59.570372 kubelet[2729]: I0214 00:21:59.570294 2729 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 14 00:21:59.571543 kubelet[2729]: I0214 00:21:59.571508 2729 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 14 00:21:59.590412 kubelet[2729]: I0214 00:21:59.589521 2729 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 14 00:21:59.590412 kubelet[2729]: I0214 00:21:59.590033 2729 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 14 00:21:59.590412 kubelet[2729]: I0214 00:21:59.590285 2729 reconciler.go:26] "Reconciler: start to sync state"
Feb 14 00:21:59.594562 kubelet[2729]: I0214 00:21:59.594475 2729 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 14 00:21:59.599126 kubelet[2729]: I0214 00:21:59.599102 2729 factory.go:221] Registration of the containerd container factory successfully
Feb 14 00:21:59.599308 kubelet[2729]: I0214 00:21:59.599289 2729 factory.go:221] Registration of the systemd container factory successfully
Feb 14 00:21:59.601337 kubelet[2729]: I0214 00:21:59.601271 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 14 00:21:59.602866 kubelet[2729]: I0214 00:21:59.602835 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 14 00:21:59.602962 kubelet[2729]: I0214 00:21:59.602878 2729 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 14 00:21:59.602962 kubelet[2729]: I0214 00:21:59.602904 2729 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 14 00:21:59.603053 kubelet[2729]: E0214 00:21:59.602963 2729 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 14 00:21:59.603383 kubelet[2729]: E0214 00:21:59.603334 2729 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 14 00:21:59.668991 kubelet[2729]: I0214 00:21:59.668940 2729 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 14 00:21:59.668991 kubelet[2729]: I0214 00:21:59.668977 2729 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 14 00:21:59.668991 kubelet[2729]: I0214 00:21:59.669009 2729 state_mem.go:36] "Initialized new in-memory state store"
Feb 14 00:21:59.669312 kubelet[2729]: I0214 00:21:59.669274 2729 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 14 00:21:59.670282 kubelet[2729]: I0214 00:21:59.669296 2729 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 14 00:21:59.670282 kubelet[2729]: I0214 00:21:59.669330 2729 policy_none.go:49] "None policy: Start"
Feb 14 00:21:59.670569 kubelet[2729]: I0214 00:21:59.670555 2729 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 14 00:21:59.670781 kubelet[2729]: I0214 00:21:59.670590 2729 state_mem.go:35] "Initializing new in-memory state store"
Feb 14 00:21:59.671304 kubelet[2729]: I0214 00:21:59.670861 2729 state_mem.go:75] "Updated machine memory state"
Feb 14 00:21:59.682188 kubelet[2729]: I0214 00:21:59.681893 2729 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 14 00:21:59.687705 kubelet[2729]: I0214 00:21:59.684775 2729 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 14 00:21:59.687809 kubelet[2729]: I0214 00:21:59.687724 2729 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 14 00:21:59.704534 kubelet[2729]: I0214 00:21:59.703466 2729 topology_manager.go:215] "Topology Admit Handler" podUID="6c883bab5539d68e06502f65b344e057" podNamespace="kube-system" podName="kube-apiserver-srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:59.704534 kubelet[2729]: I0214 00:21:59.703649 2729 topology_manager.go:215] "Topology Admit Handler" podUID="cd437561a914661ab7ee92bd5f49ea11" podNamespace="kube-system" podName="kube-controller-manager-srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:59.704534 kubelet[2729]: I0214 00:21:59.703761 2729 topology_manager.go:215] "Topology Admit Handler" podUID="c007d76a2c9cda8db0f544a5df0dbb8c" podNamespace="kube-system" podName="kube-scheduler-srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:59.707869 sudo[2760]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 14 00:21:59.708464 sudo[2760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Feb 14 00:21:59.709970 kubelet[2729]: I0214 00:21:59.709135 2729 kubelet_node_status.go:73] "Attempting to register node" node="srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:59.723595 kubelet[2729]: W0214 00:21:59.723122 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 14 00:21:59.726652 kubelet[2729]: W0214 00:21:59.726624 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 14 00:21:59.728829 kubelet[2729]: W0214 00:21:59.728794 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 14 00:21:59.736335 kubelet[2729]: I0214 00:21:59.735391 2729 kubelet_node_status.go:112] "Node was previously registered" node="srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:59.736335 kubelet[2729]: I0214 00:21:59.735528 2729 kubelet_node_status.go:76] "Successfully registered node" node="srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:59.792075 kubelet[2729]: I0214 00:21:59.792024 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd437561a914661ab7ee92bd5f49ea11-ca-certs\") pod \"kube-controller-manager-srv-skbpq.gb1.brightbox.com\" (UID: \"cd437561a914661ab7ee92bd5f49ea11\") " pod="kube-system/kube-controller-manager-srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:59.792075 kubelet[2729]: I0214 00:21:59.792080 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cd437561a914661ab7ee92bd5f49ea11-flexvolume-dir\") pod \"kube-controller-manager-srv-skbpq.gb1.brightbox.com\" (UID: \"cd437561a914661ab7ee92bd5f49ea11\") " pod="kube-system/kube-controller-manager-srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:59.792381 kubelet[2729]: I0214 00:21:59.792120 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd437561a914661ab7ee92bd5f49ea11-k8s-certs\") pod \"kube-controller-manager-srv-skbpq.gb1.brightbox.com\" (UID: \"cd437561a914661ab7ee92bd5f49ea11\") " pod="kube-system/kube-controller-manager-srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:59.792381 kubelet[2729]: I0214 00:21:59.792150 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c883bab5539d68e06502f65b344e057-ca-certs\") pod \"kube-apiserver-srv-skbpq.gb1.brightbox.com\" (UID: \"6c883bab5539d68e06502f65b344e057\") " pod="kube-system/kube-apiserver-srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:59.792381 kubelet[2729]: I0214 00:21:59.792179 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c883bab5539d68e06502f65b344e057-usr-share-ca-certificates\") pod \"kube-apiserver-srv-skbpq.gb1.brightbox.com\" (UID: \"6c883bab5539d68e06502f65b344e057\") " pod="kube-system/kube-apiserver-srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:59.792381 kubelet[2729]: I0214 00:21:59.792222 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd437561a914661ab7ee92bd5f49ea11-kubeconfig\") pod \"kube-controller-manager-srv-skbpq.gb1.brightbox.com\" (UID: \"cd437561a914661ab7ee92bd5f49ea11\") " pod="kube-system/kube-controller-manager-srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:59.792381 kubelet[2729]: I0214 00:21:59.792253 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd437561a914661ab7ee92bd5f49ea11-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-skbpq.gb1.brightbox.com\" (UID: \"cd437561a914661ab7ee92bd5f49ea11\") " pod="kube-system/kube-controller-manager-srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:59.792627 kubelet[2729]: I0214 00:21:59.792295 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c007d76a2c9cda8db0f544a5df0dbb8c-kubeconfig\") pod \"kube-scheduler-srv-skbpq.gb1.brightbox.com\" (UID: \"c007d76a2c9cda8db0f544a5df0dbb8c\") " pod="kube-system/kube-scheduler-srv-skbpq.gb1.brightbox.com"
Feb 14 00:21:59.792627 kubelet[2729]: I0214 00:21:59.792354 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c883bab5539d68e06502f65b344e057-k8s-certs\") pod \"kube-apiserver-srv-skbpq.gb1.brightbox.com\" (UID: \"6c883bab5539d68e06502f65b344e057\") " pod="kube-system/kube-apiserver-srv-skbpq.gb1.brightbox.com"
Feb 14 00:22:00.432408 sudo[2760]: pam_unix(sudo:session): session closed for user root
Feb 14 00:22:00.556875 kubelet[2729]: I0214 00:22:00.556053 2729 apiserver.go:52] "Watching apiserver"
Feb 14 00:22:00.590325 kubelet[2729]: I0214 00:22:00.590256 2729 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 14 00:22:00.649353 kubelet[2729]: W0214 00:22:00.649288 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 14 00:22:00.649583 kubelet[2729]: E0214 00:22:00.649387 2729 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-skbpq.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-skbpq.gb1.brightbox.com"
Feb 14 00:22:00.674252 kubelet[2729]: I0214 00:22:00.674150 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-skbpq.gb1.brightbox.com" podStartSLOduration=1.674109499 podStartE2EDuration="1.674109499s" podCreationTimestamp="2025-02-14 00:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:22:00.673007 +0000 UTC m=+1.220850752" watchObservedRunningTime="2025-02-14 00:22:00.674109499 +0000 UTC m=+1.221953238"
Feb 14 00:22:00.695156 kubelet[2729]: I0214 00:22:00.694941 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-skbpq.gb1.brightbox.com" podStartSLOduration=1.694915164 podStartE2EDuration="1.694915164s" podCreationTimestamp="2025-02-14 00:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:22:00.688713658 +0000 UTC m=+1.236557422" watchObservedRunningTime="2025-02-14 00:22:00.694915164 +0000 UTC m=+1.242758902"
Feb 14 00:22:00.780917 kubelet[2729]: I0214 00:22:00.780822 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-skbpq.gb1.brightbox.com" podStartSLOduration=1.780791447 podStartE2EDuration="1.780791447s" podCreationTimestamp="2025-02-14 00:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:22:00.730458488 +0000 UTC m=+1.278302233" watchObservedRunningTime="2025-02-14 00:22:00.780791447 +0000 UTC m=+1.328635185"
Feb 14 00:22:02.323936 sudo[1775]: pam_unix(sudo:session): session closed for user root
Feb 14 00:22:02.470510 sshd[1772]: pam_unix(sshd:session): session closed for user core
Feb 14 00:22:02.478202 systemd-logind[1482]: Session 11 logged out. Waiting for processes to exit.
Feb 14 00:22:02.479658 systemd[1]: sshd@8-10.230.16.158:22-147.75.109.163:52108.service: Deactivated successfully.
Feb 14 00:22:02.482425 systemd[1]: session-11.scope: Deactivated successfully.
Feb 14 00:22:02.482781 systemd[1]: session-11.scope: Consumed 7.541s CPU time, 187.3M memory peak, 0B memory swap peak.
Feb 14 00:22:02.483834 systemd-logind[1482]: Removed session 11.
Feb 14 00:22:06.538714 systemd[1]: Started sshd@10-10.230.16.158:22-202.72.235.223:55780.service - OpenSSH per-connection server daemon (202.72.235.223:55780).
Feb 14 00:22:07.688009 sshd[2805]: Invalid user spectrum from 202.72.235.223 port 55780
Feb 14 00:22:07.906605 sshd[2805]: Received disconnect from 202.72.235.223 port 55780:11: Bye Bye [preauth]
Feb 14 00:22:07.906605 sshd[2805]: Disconnected from invalid user spectrum 202.72.235.223 port 55780 [preauth]
Feb 14 00:22:07.909804 systemd[1]: sshd@10-10.230.16.158:22-202.72.235.223:55780.service: Deactivated successfully.
Feb 14 00:22:13.688085 kubelet[2729]: I0214 00:22:13.687925 2729 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 14 00:22:13.689555 containerd[1503]: time="2025-02-14T00:22:13.689437024Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 14 00:22:13.690197 kubelet[2729]: I0214 00:22:13.689720 2729 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 14 00:22:14.465823 kubelet[2729]: I0214 00:22:14.464378 2729 topology_manager.go:215] "Topology Admit Handler" podUID="16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1" podNamespace="kube-system" podName="kube-proxy-zcd4q"
Feb 14 00:22:14.469283 kubelet[2729]: I0214 00:22:14.468128 2729 topology_manager.go:215] "Topology Admit Handler" podUID="d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" podNamespace="kube-system" podName="cilium-sz7hv"
Feb 14 00:22:14.479715 kubelet[2729]: W0214 00:22:14.479677 2729 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:srv-skbpq.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-skbpq.gb1.brightbox.com' and this object
Feb 14 00:22:14.479979 kubelet[2729]: E0214 00:22:14.479892 2729 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:srv-skbpq.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-skbpq.gb1.brightbox.com' and this object
Feb 14 00:22:14.488585 systemd[1]: Created slice kubepods-besteffort-pod16696e18_1ac7_4e8b_b6d8_f0a6312cd8d1.slice - libcontainer container kubepods-besteffort-pod16696e18_1ac7_4e8b_b6d8_f0a6312cd8d1.slice.
Feb 14 00:22:14.495519 kubelet[2729]: I0214 00:22:14.495472 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-lib-modules\") pod \"cilium-sz7hv\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " pod="kube-system/cilium-sz7hv"
Feb 14 00:22:14.495649 kubelet[2729]: I0214 00:22:14.495538 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hb48\" (UniqueName: \"kubernetes.io/projected/16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1-kube-api-access-7hb48\") pod \"kube-proxy-zcd4q\" (UID: \"16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1\") " pod="kube-system/kube-proxy-zcd4q"
Feb 14 00:22:14.495649 kubelet[2729]: I0214 00:22:14.495574 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cilium-run\") pod \"cilium-sz7hv\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " pod="kube-system/cilium-sz7hv"
Feb 14 00:22:14.495649 kubelet[2729]: I0214 00:22:14.495601 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cni-path\") pod \"cilium-sz7hv\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " pod="kube-system/cilium-sz7hv"
Feb 14 00:22:14.495649 kubelet[2729]: I0214 00:22:14.495628 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-hostproc\") pod \"cilium-sz7hv\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " pod="kube-system/cilium-sz7hv"
Feb 14 00:22:14.495924 kubelet[2729]: I0214 00:22:14.495655 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-hubble-tls\") pod \"cilium-sz7hv\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " pod="kube-system/cilium-sz7hv"
Feb 14 00:22:14.495924 kubelet[2729]: I0214 00:22:14.495681 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1-kube-proxy\") pod \"kube-proxy-zcd4q\" (UID: \"16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1\") " pod="kube-system/kube-proxy-zcd4q"
Feb 14 00:22:14.495924 kubelet[2729]: I0214 00:22:14.495711 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-etc-cni-netd\") pod \"cilium-sz7hv\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " pod="kube-system/cilium-sz7hv"
Feb 14 00:22:14.495924 kubelet[2729]: I0214 00:22:14.495771 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tvhw\" (UniqueName: \"kubernetes.io/projected/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-kube-api-access-4tvhw\") pod \"cilium-sz7hv\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " pod="kube-system/cilium-sz7hv"
Feb 14 00:22:14.495924 kubelet[2729]: I0214 00:22:14.495805 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-clustermesh-secrets\") pod \"cilium-sz7hv\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " pod="kube-system/cilium-sz7hv"
Feb 14 00:22:14.496309 kubelet[2729]: I0214 00:22:14.495861 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cilium-config-path\") pod \"cilium-sz7hv\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " pod="kube-system/cilium-sz7hv"
Feb 14 00:22:14.496309 kubelet[2729]: I0214 00:22:14.495892 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-bpf-maps\") pod \"cilium-sz7hv\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " pod="kube-system/cilium-sz7hv"
Feb 14 00:22:14.496309 kubelet[2729]: I0214 00:22:14.495920 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-host-proc-sys-kernel\") pod \"cilium-sz7hv\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " pod="kube-system/cilium-sz7hv"
Feb 14 00:22:14.496309 kubelet[2729]: I0214 00:22:14.495957 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cilium-cgroup\") pod \"cilium-sz7hv\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " pod="kube-system/cilium-sz7hv"
Feb 14 00:22:14.496309 kubelet[2729]: I0214 00:22:14.495987 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1-xtables-lock\") pod \"kube-proxy-zcd4q\" (UID: \"16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1\") " pod="kube-system/kube-proxy-zcd4q"
Feb 14 00:22:14.496309 kubelet[2729]: I0214 00:22:14.496062 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1-lib-modules\") pod \"kube-proxy-zcd4q\" (UID: \"16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1\") " pod="kube-system/kube-proxy-zcd4q"
Feb 14 00:22:14.497155 kubelet[2729]: I0214 00:22:14.496117 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-xtables-lock\") pod \"cilium-sz7hv\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " pod="kube-system/cilium-sz7hv"
Feb 14 00:22:14.497155 kubelet[2729]: I0214 00:22:14.496207 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-host-proc-sys-net\") pod \"cilium-sz7hv\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " pod="kube-system/cilium-sz7hv"
Feb 14 00:22:14.507625 systemd[1]: Created slice kubepods-burstable-podd5be48cb_2d0c_4ac8_8fbe_2270b551cd90.slice - libcontainer container kubepods-burstable-podd5be48cb_2d0c_4ac8_8fbe_2270b551cd90.slice.
Feb 14 00:22:14.753514 kubelet[2729]: I0214 00:22:14.751783 2729 topology_manager.go:215] "Topology Admit Handler" podUID="5539248e-3950-44cf-a782-76b5d5c13db3" podNamespace="kube-system" podName="cilium-operator-599987898-qh5c7"
Feb 14 00:22:14.769783 systemd[1]: Created slice kubepods-besteffort-pod5539248e_3950_44cf_a782_76b5d5c13db3.slice - libcontainer container kubepods-besteffort-pod5539248e_3950_44cf_a782_76b5d5c13db3.slice.
Feb 14 00:22:14.801179 kubelet[2729]: I0214 00:22:14.799074 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5539248e-3950-44cf-a782-76b5d5c13db3-cilium-config-path\") pod \"cilium-operator-599987898-qh5c7\" (UID: \"5539248e-3950-44cf-a782-76b5d5c13db3\") " pod="kube-system/cilium-operator-599987898-qh5c7"
Feb 14 00:22:14.801179 kubelet[2729]: I0214 00:22:14.801030 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvx96\" (UniqueName: \"kubernetes.io/projected/5539248e-3950-44cf-a782-76b5d5c13db3-kube-api-access-hvx96\") pod \"cilium-operator-599987898-qh5c7\" (UID: \"5539248e-3950-44cf-a782-76b5d5c13db3\") " pod="kube-system/cilium-operator-599987898-qh5c7"
Feb 14 00:22:14.815599 containerd[1503]: time="2025-02-14T00:22:14.815491257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sz7hv,Uid:d5be48cb-2d0c-4ac8-8fbe-2270b551cd90,Namespace:kube-system,Attempt:0,}"
Feb 14 00:22:14.882536 containerd[1503]: time="2025-02-14T00:22:14.881747864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 14 00:22:14.882536 containerd[1503]: time="2025-02-14T00:22:14.882373658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 14 00:22:14.882536 containerd[1503]: time="2025-02-14T00:22:14.882460743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 00:22:14.884046 containerd[1503]: time="2025-02-14T00:22:14.883963559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 00:22:14.918581 systemd[1]: Started cri-containerd-56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2.scope - libcontainer container 56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2.
Feb 14 00:22:14.967716 containerd[1503]: time="2025-02-14T00:22:14.967662732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sz7hv,Uid:d5be48cb-2d0c-4ac8-8fbe-2270b551cd90,Namespace:kube-system,Attempt:0,} returns sandbox id \"56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2\""
Feb 14 00:22:14.971683 containerd[1503]: time="2025-02-14T00:22:14.971565148Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 14 00:22:15.074484 containerd[1503]: time="2025-02-14T00:22:15.074277309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qh5c7,Uid:5539248e-3950-44cf-a782-76b5d5c13db3,Namespace:kube-system,Attempt:0,}"
Feb 14 00:22:15.105978 containerd[1503]: time="2025-02-14T00:22:15.105650858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 14 00:22:15.105978 containerd[1503]: time="2025-02-14T00:22:15.105728712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 14 00:22:15.105978 containerd[1503]: time="2025-02-14T00:22:15.105746033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 00:22:15.106478 containerd[1503]: time="2025-02-14T00:22:15.106250006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 00:22:15.129588 systemd[1]: Started cri-containerd-547e07fb96ca04c924d479e5bb16b7a373f291b80fc21fb16176e1abd50cb985.scope - libcontainer container 547e07fb96ca04c924d479e5bb16b7a373f291b80fc21fb16176e1abd50cb985.
Feb 14 00:22:15.191760 containerd[1503]: time="2025-02-14T00:22:15.191709471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qh5c7,Uid:5539248e-3950-44cf-a782-76b5d5c13db3,Namespace:kube-system,Attempt:0,} returns sandbox id \"547e07fb96ca04c924d479e5bb16b7a373f291b80fc21fb16176e1abd50cb985\""
Feb 14 00:22:15.598761 kubelet[2729]: E0214 00:22:15.598244 2729 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 14 00:22:15.598761 kubelet[2729]: E0214 00:22:15.598413 2729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1-kube-proxy podName:16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1 nodeName:}" failed. No retries permitted until 2025-02-14 00:22:16.098364795 +0000 UTC m=+16.646208528 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1-kube-proxy") pod "kube-proxy-zcd4q" (UID: "16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1") : failed to sync configmap cache: timed out waiting for the condition
Feb 14 00:22:16.303878 containerd[1503]: time="2025-02-14T00:22:16.303809178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zcd4q,Uid:16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1,Namespace:kube-system,Attempt:0,}"
Feb 14 00:22:16.339879 containerd[1503]: time="2025-02-14T00:22:16.339658640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:22:16.339879 containerd[1503]: time="2025-02-14T00:22:16.339746657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:22:16.339879 containerd[1503]: time="2025-02-14T00:22:16.339771582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:22:16.341720 containerd[1503]: time="2025-02-14T00:22:16.341491356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:22:16.381934 systemd[1]: Started cri-containerd-c8ce356172210094569559c21b39eb560ab847f3293fc01ff165018ba0c83a34.scope - libcontainer container c8ce356172210094569559c21b39eb560ab847f3293fc01ff165018ba0c83a34. Feb 14 00:22:16.422482 containerd[1503]: time="2025-02-14T00:22:16.422396167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zcd4q,Uid:16696e18-1ac7-4e8b-b6d8-f0a6312cd8d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8ce356172210094569559c21b39eb560ab847f3293fc01ff165018ba0c83a34\"" Feb 14 00:22:16.428431 containerd[1503]: time="2025-02-14T00:22:16.428368124Z" level=info msg="CreateContainer within sandbox \"c8ce356172210094569559c21b39eb560ab847f3293fc01ff165018ba0c83a34\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 14 00:22:16.453993 containerd[1503]: time="2025-02-14T00:22:16.453886239Z" level=info msg="CreateContainer within sandbox \"c8ce356172210094569559c21b39eb560ab847f3293fc01ff165018ba0c83a34\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"780ec3e508f1f52db5aa953cf635059312d0288739fb31e9dc816339a0e18f48\"" Feb 14 00:22:16.457142 containerd[1503]: time="2025-02-14T00:22:16.455593447Z" level=info msg="StartContainer for \"780ec3e508f1f52db5aa953cf635059312d0288739fb31e9dc816339a0e18f48\"" 
Feb 14 00:22:16.501792 systemd[1]: Started cri-containerd-780ec3e508f1f52db5aa953cf635059312d0288739fb31e9dc816339a0e18f48.scope - libcontainer container 780ec3e508f1f52db5aa953cf635059312d0288739fb31e9dc816339a0e18f48. Feb 14 00:22:16.552625 containerd[1503]: time="2025-02-14T00:22:16.552469460Z" level=info msg="StartContainer for \"780ec3e508f1f52db5aa953cf635059312d0288739fb31e9dc816339a0e18f48\" returns successfully" Feb 14 00:22:16.612757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838140865.mount: Deactivated successfully. Feb 14 00:22:16.699154 kubelet[2729]: I0214 00:22:16.696723 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zcd4q" podStartSLOduration=2.6966953609999997 podStartE2EDuration="2.696695361s" podCreationTimestamp="2025-02-14 00:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:22:16.696666702 +0000 UTC m=+17.244510468" watchObservedRunningTime="2025-02-14 00:22:16.696695361 +0000 UTC m=+17.244539088" Feb 14 00:22:22.461113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount635343644.mount: Deactivated successfully. 
Feb 14 00:22:25.596600 containerd[1503]: time="2025-02-14T00:22:25.596502780Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:22:25.598753 containerd[1503]: time="2025-02-14T00:22:25.598683026Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 14 00:22:25.599502 containerd[1503]: time="2025-02-14T00:22:25.599190931Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:22:25.615338 containerd[1503]: time="2025-02-14T00:22:25.614769692Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.643153483s" Feb 14 00:22:25.615338 containerd[1503]: time="2025-02-14T00:22:25.614831061Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 14 00:22:25.620835 containerd[1503]: time="2025-02-14T00:22:25.620535752Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 14 00:22:25.626446 containerd[1503]: time="2025-02-14T00:22:25.626063310Z" level=info msg="CreateContainer within sandbox \"56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 14 00:22:25.709984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1423561632.mount: Deactivated successfully. Feb 14 00:22:25.714046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2104107488.mount: Deactivated successfully. Feb 14 00:22:25.719931 containerd[1503]: time="2025-02-14T00:22:25.719842553Z" level=info msg="CreateContainer within sandbox \"56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48\"" Feb 14 00:22:25.720911 containerd[1503]: time="2025-02-14T00:22:25.720745890Z" level=info msg="StartContainer for \"5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48\"" Feb 14 00:22:25.957655 systemd[1]: Started cri-containerd-5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48.scope - libcontainer container 5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48. Feb 14 00:22:26.009951 containerd[1503]: time="2025-02-14T00:22:26.009631523Z" level=info msg="StartContainer for \"5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48\" returns successfully" Feb 14 00:22:26.032066 systemd[1]: cri-containerd-5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48.scope: Deactivated successfully. 
Feb 14 00:22:26.375714 containerd[1503]: time="2025-02-14T00:22:26.366482946Z" level=info msg="shim disconnected" id=5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48 namespace=k8s.io Feb 14 00:22:26.381931 containerd[1503]: time="2025-02-14T00:22:26.375972691Z" level=warning msg="cleaning up after shim disconnected" id=5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48 namespace=k8s.io Feb 14 00:22:26.381931 containerd[1503]: time="2025-02-14T00:22:26.376009321Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 00:22:26.706458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48-rootfs.mount: Deactivated successfully. Feb 14 00:22:26.731050 containerd[1503]: time="2025-02-14T00:22:26.730800579Z" level=info msg="CreateContainer within sandbox \"56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 14 00:22:26.788152 containerd[1503]: time="2025-02-14T00:22:26.788068688Z" level=info msg="CreateContainer within sandbox \"56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf\"" Feb 14 00:22:26.790420 containerd[1503]: time="2025-02-14T00:22:26.789404218Z" level=info msg="StartContainer for \"898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf\"" Feb 14 00:22:26.835574 systemd[1]: Started cri-containerd-898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf.scope - libcontainer container 898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf. 
Feb 14 00:22:26.880489 containerd[1503]: time="2025-02-14T00:22:26.879802799Z" level=info msg="StartContainer for \"898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf\" returns successfully" Feb 14 00:22:26.900739 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 14 00:22:26.901375 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 14 00:22:26.901525 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 14 00:22:26.910783 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 14 00:22:26.911146 systemd[1]: cri-containerd-898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf.scope: Deactivated successfully. Feb 14 00:22:26.963910 containerd[1503]: time="2025-02-14T00:22:26.961582650Z" level=info msg="shim disconnected" id=898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf namespace=k8s.io Feb 14 00:22:26.963910 containerd[1503]: time="2025-02-14T00:22:26.961668547Z" level=warning msg="cleaning up after shim disconnected" id=898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf namespace=k8s.io Feb 14 00:22:26.963910 containerd[1503]: time="2025-02-14T00:22:26.961741301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 00:22:26.976438 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 14 00:22:27.709156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf-rootfs.mount: Deactivated successfully. Feb 14 00:22:27.737390 containerd[1503]: time="2025-02-14T00:22:27.737307992Z" level=info msg="CreateContainer within sandbox \"56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 14 00:22:27.795727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1943351200.mount: Deactivated successfully. 
Feb 14 00:22:27.805823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977649414.mount: Deactivated successfully. Feb 14 00:22:27.816456 containerd[1503]: time="2025-02-14T00:22:27.816400919Z" level=info msg="CreateContainer within sandbox \"56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2\"" Feb 14 00:22:27.831466 containerd[1503]: time="2025-02-14T00:22:27.819401422Z" level=info msg="StartContainer for \"5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2\"" Feb 14 00:22:27.909762 systemd[1]: Started cri-containerd-5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2.scope - libcontainer container 5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2. Feb 14 00:22:28.003103 containerd[1503]: time="2025-02-14T00:22:28.002618353Z" level=info msg="StartContainer for \"5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2\" returns successfully" Feb 14 00:22:28.004215 systemd[1]: cri-containerd-5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2.scope: Deactivated successfully. 
Feb 14 00:22:28.071905 containerd[1503]: time="2025-02-14T00:22:28.071790737Z" level=info msg="shim disconnected" id=5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2 namespace=k8s.io Feb 14 00:22:28.071905 containerd[1503]: time="2025-02-14T00:22:28.071892131Z" level=warning msg="cleaning up after shim disconnected" id=5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2 namespace=k8s.io Feb 14 00:22:28.071905 containerd[1503]: time="2025-02-14T00:22:28.071909783Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 00:22:28.538384 containerd[1503]: time="2025-02-14T00:22:28.538258168Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:22:28.539620 containerd[1503]: time="2025-02-14T00:22:28.539571800Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 14 00:22:28.540258 containerd[1503]: time="2025-02-14T00:22:28.539977227Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:22:28.542417 containerd[1503]: time="2025-02-14T00:22:28.542194079Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.921607525s" Feb 14 00:22:28.542417 containerd[1503]: time="2025-02-14T00:22:28.542246881Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 14 00:22:28.585225 containerd[1503]: time="2025-02-14T00:22:28.585157279Z" level=info msg="CreateContainer within sandbox \"547e07fb96ca04c924d479e5bb16b7a373f291b80fc21fb16176e1abd50cb985\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 14 00:22:28.604978 containerd[1503]: time="2025-02-14T00:22:28.604931450Z" level=info msg="CreateContainer within sandbox \"547e07fb96ca04c924d479e5bb16b7a373f291b80fc21fb16176e1abd50cb985\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164\"" Feb 14 00:22:28.607368 containerd[1503]: time="2025-02-14T00:22:28.606537182Z" level=info msg="StartContainer for \"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164\"" Feb 14 00:22:28.644564 systemd[1]: Started cri-containerd-f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164.scope - libcontainer container f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164. Feb 14 00:22:28.694272 containerd[1503]: time="2025-02-14T00:22:28.694108723Z" level=info msg="StartContainer for \"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164\" returns successfully" Feb 14 00:22:28.759835 containerd[1503]: time="2025-02-14T00:22:28.759758730Z" level=info msg="CreateContainer within sandbox \"56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 14 00:22:28.807151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3902503233.mount: Deactivated successfully. 
Feb 14 00:22:28.823404 containerd[1503]: time="2025-02-14T00:22:28.823159711Z" level=info msg="CreateContainer within sandbox \"56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50\"" Feb 14 00:22:28.828252 containerd[1503]: time="2025-02-14T00:22:28.825927183Z" level=info msg="StartContainer for \"30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50\"" Feb 14 00:22:28.897552 systemd[1]: Started cri-containerd-30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50.scope - libcontainer container 30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50. Feb 14 00:22:28.956852 containerd[1503]: time="2025-02-14T00:22:28.956778835Z" level=info msg="StartContainer for \"30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50\" returns successfully" Feb 14 00:22:28.962488 systemd[1]: cri-containerd-30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50.scope: Deactivated successfully. Feb 14 00:22:29.118870 containerd[1503]: time="2025-02-14T00:22:29.118563221Z" level=info msg="shim disconnected" id=30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50 namespace=k8s.io Feb 14 00:22:29.118870 containerd[1503]: time="2025-02-14T00:22:29.118700713Z" level=warning msg="cleaning up after shim disconnected" id=30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50 namespace=k8s.io Feb 14 00:22:29.118870 containerd[1503]: time="2025-02-14T00:22:29.118720550Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 00:22:29.708510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50-rootfs.mount: Deactivated successfully. 
Feb 14 00:22:29.778963 containerd[1503]: time="2025-02-14T00:22:29.778893157Z" level=info msg="CreateContainer within sandbox \"56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 14 00:22:29.813229 containerd[1503]: time="2025-02-14T00:22:29.813159536Z" level=info msg="CreateContainer within sandbox \"56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752\"" Feb 14 00:22:29.814228 containerd[1503]: time="2025-02-14T00:22:29.814057178Z" level=info msg="StartContainer for \"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752\"" Feb 14 00:22:29.919654 systemd[1]: run-containerd-runc-k8s.io-8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752-runc.SEMG3K.mount: Deactivated successfully. Feb 14 00:22:29.936594 systemd[1]: Started cri-containerd-8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752.scope - libcontainer container 8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752. 
Feb 14 00:22:30.065770 containerd[1503]: time="2025-02-14T00:22:30.065667925Z" level=info msg="StartContainer for \"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752\" returns successfully" Feb 14 00:22:30.156379 kubelet[2729]: I0214 00:22:30.154925 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-qh5c7" podStartSLOduration=2.804853166 podStartE2EDuration="16.154872147s" podCreationTimestamp="2025-02-14 00:22:14 +0000 UTC" firstStartedPulling="2025-02-14 00:22:15.193796774 +0000 UTC m=+15.741640500" lastFinishedPulling="2025-02-14 00:22:28.543815749 +0000 UTC m=+29.091659481" observedRunningTime="2025-02-14 00:22:28.839801385 +0000 UTC m=+29.387645137" watchObservedRunningTime="2025-02-14 00:22:30.154872147 +0000 UTC m=+30.702715886" Feb 14 00:22:30.442428 kubelet[2729]: I0214 00:22:30.439998 2729 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 14 00:22:30.485987 kubelet[2729]: I0214 00:22:30.485887 2729 topology_manager.go:215] "Topology Admit Handler" podUID="383d8cb5-0353-4ad0-bb74-ecdde43d41d7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9xnfr" Feb 14 00:22:30.488508 kubelet[2729]: I0214 00:22:30.488474 2729 topology_manager.go:215] "Topology Admit Handler" podUID="b3a02426-0446-43d3-81ca-545e6dc8436f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-46vxq" Feb 14 00:22:30.506771 systemd[1]: Created slice kubepods-burstable-pod383d8cb5_0353_4ad0_bb74_ecdde43d41d7.slice - libcontainer container kubepods-burstable-pod383d8cb5_0353_4ad0_bb74_ecdde43d41d7.slice. Feb 14 00:22:30.522747 systemd[1]: Created slice kubepods-burstable-podb3a02426_0446_43d3_81ca_545e6dc8436f.slice - libcontainer container kubepods-burstable-podb3a02426_0446_43d3_81ca_545e6dc8436f.slice. 
Feb 14 00:22:30.565477 kubelet[2729]: I0214 00:22:30.565049 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3a02426-0446-43d3-81ca-545e6dc8436f-config-volume\") pod \"coredns-7db6d8ff4d-46vxq\" (UID: \"b3a02426-0446-43d3-81ca-545e6dc8436f\") " pod="kube-system/coredns-7db6d8ff4d-46vxq" Feb 14 00:22:30.565477 kubelet[2729]: I0214 00:22:30.565117 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/383d8cb5-0353-4ad0-bb74-ecdde43d41d7-config-volume\") pod \"coredns-7db6d8ff4d-9xnfr\" (UID: \"383d8cb5-0353-4ad0-bb74-ecdde43d41d7\") " pod="kube-system/coredns-7db6d8ff4d-9xnfr" Feb 14 00:22:30.565477 kubelet[2729]: I0214 00:22:30.565154 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g297k\" (UniqueName: \"kubernetes.io/projected/383d8cb5-0353-4ad0-bb74-ecdde43d41d7-kube-api-access-g297k\") pod \"coredns-7db6d8ff4d-9xnfr\" (UID: \"383d8cb5-0353-4ad0-bb74-ecdde43d41d7\") " pod="kube-system/coredns-7db6d8ff4d-9xnfr" Feb 14 00:22:30.565477 kubelet[2729]: I0214 00:22:30.565187 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-457p6\" (UniqueName: \"kubernetes.io/projected/b3a02426-0446-43d3-81ca-545e6dc8436f-kube-api-access-457p6\") pod \"coredns-7db6d8ff4d-46vxq\" (UID: \"b3a02426-0446-43d3-81ca-545e6dc8436f\") " pod="kube-system/coredns-7db6d8ff4d-46vxq" Feb 14 00:22:30.818377 containerd[1503]: time="2025-02-14T00:22:30.817734809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9xnfr,Uid:383d8cb5-0353-4ad0-bb74-ecdde43d41d7,Namespace:kube-system,Attempt:0,}" Feb 14 00:22:30.829642 containerd[1503]: time="2025-02-14T00:22:30.829593793Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-46vxq,Uid:b3a02426-0446-43d3-81ca-545e6dc8436f,Namespace:kube-system,Attempt:0,}" Feb 14 00:22:33.086083 systemd-networkd[1430]: cilium_host: Link UP Feb 14 00:22:33.087101 systemd-networkd[1430]: cilium_net: Link UP Feb 14 00:22:33.092826 systemd-networkd[1430]: cilium_net: Gained carrier Feb 14 00:22:33.093244 systemd-networkd[1430]: cilium_host: Gained carrier Feb 14 00:22:33.270312 systemd-networkd[1430]: cilium_vxlan: Link UP Feb 14 00:22:33.270328 systemd-networkd[1430]: cilium_vxlan: Gained carrier Feb 14 00:22:33.341561 systemd-networkd[1430]: cilium_host: Gained IPv6LL Feb 14 00:22:33.853438 kernel: NET: Registered PF_ALG protocol family Feb 14 00:22:33.917650 systemd-networkd[1430]: cilium_net: Gained IPv6LL Feb 14 00:22:34.997116 systemd-networkd[1430]: lxc_health: Link UP Feb 14 00:22:35.011601 systemd-networkd[1430]: lxc_health: Gained carrier Feb 14 00:22:35.197780 systemd-networkd[1430]: cilium_vxlan: Gained IPv6LL Feb 14 00:22:35.485273 systemd-networkd[1430]: lxc7bca13d3b360: Link UP Feb 14 00:22:35.489310 systemd-networkd[1430]: lxcbaddcaacdd85: Link UP Feb 14 00:22:35.503980 kernel: eth0: renamed from tmp00dd2 Feb 14 00:22:35.517495 kernel: eth0: renamed from tmp376ec Feb 14 00:22:35.526728 systemd-networkd[1430]: lxc7bca13d3b360: Gained carrier Feb 14 00:22:35.528637 systemd-networkd[1430]: lxcbaddcaacdd85: Gained carrier Feb 14 00:22:36.477623 systemd-networkd[1430]: lxc_health: Gained IPv6LL Feb 14 00:22:36.733636 systemd-networkd[1430]: lxcbaddcaacdd85: Gained IPv6LL Feb 14 00:22:36.849357 kubelet[2729]: I0214 00:22:36.848141 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sz7hv" podStartSLOduration=12.198797221 podStartE2EDuration="22.84803271s" podCreationTimestamp="2025-02-14 00:22:14 +0000 UTC" firstStartedPulling="2025-02-14 00:22:14.971051819 +0000 UTC m=+15.518895551" lastFinishedPulling="2025-02-14 00:22:25.620287302 +0000 UTC m=+26.168131040" 
observedRunningTime="2025-02-14 00:22:30.807285964 +0000 UTC m=+31.355129703" watchObservedRunningTime="2025-02-14 00:22:36.84803271 +0000 UTC m=+37.395876449" Feb 14 00:22:37.245630 systemd-networkd[1430]: lxc7bca13d3b360: Gained IPv6LL Feb 14 00:22:41.438388 containerd[1503]: time="2025-02-14T00:22:41.437991757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:22:41.443822 containerd[1503]: time="2025-02-14T00:22:41.441843813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:22:41.443822 containerd[1503]: time="2025-02-14T00:22:41.441890165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:22:41.443822 containerd[1503]: time="2025-02-14T00:22:41.442074406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:22:41.466184 containerd[1503]: time="2025-02-14T00:22:41.466015534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:22:41.467286 containerd[1503]: time="2025-02-14T00:22:41.466712185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:22:41.467286 containerd[1503]: time="2025-02-14T00:22:41.466741635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:22:41.467286 containerd[1503]: time="2025-02-14T00:22:41.466945556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:22:41.493609 systemd[1]: run-containerd-runc-k8s.io-376ecd9beaa7ceb36efa231198fb3882252a3458e10c46d675eb09615188d7ff-runc.2v2GCi.mount: Deactivated successfully. Feb 14 00:22:41.515562 systemd[1]: Started cri-containerd-376ecd9beaa7ceb36efa231198fb3882252a3458e10c46d675eb09615188d7ff.scope - libcontainer container 376ecd9beaa7ceb36efa231198fb3882252a3458e10c46d675eb09615188d7ff. Feb 14 00:22:41.551556 systemd[1]: Started cri-containerd-00dd29b28d6af20162619c6c7d1668221f1052bf00ff5f421fe212bab004c203.scope - libcontainer container 00dd29b28d6af20162619c6c7d1668221f1052bf00ff5f421fe212bab004c203. Feb 14 00:22:41.653957 containerd[1503]: time="2025-02-14T00:22:41.652934658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9xnfr,Uid:383d8cb5-0353-4ad0-bb74-ecdde43d41d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"376ecd9beaa7ceb36efa231198fb3882252a3458e10c46d675eb09615188d7ff\"" Feb 14 00:22:41.676938 containerd[1503]: time="2025-02-14T00:22:41.676737259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-46vxq,Uid:b3a02426-0446-43d3-81ca-545e6dc8436f,Namespace:kube-system,Attempt:0,} returns sandbox id \"00dd29b28d6af20162619c6c7d1668221f1052bf00ff5f421fe212bab004c203\"" Feb 14 00:22:41.688222 containerd[1503]: time="2025-02-14T00:22:41.687970768Z" level=info msg="CreateContainer within sandbox \"00dd29b28d6af20162619c6c7d1668221f1052bf00ff5f421fe212bab004c203\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 14 00:22:41.693858 containerd[1503]: time="2025-02-14T00:22:41.693540605Z" level=info msg="CreateContainer within sandbox \"376ecd9beaa7ceb36efa231198fb3882252a3458e10c46d675eb09615188d7ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 14 00:22:41.730820 containerd[1503]: time="2025-02-14T00:22:41.730340211Z" level=info msg="CreateContainer within sandbox 
\"376ecd9beaa7ceb36efa231198fb3882252a3458e10c46d675eb09615188d7ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"acdae1a84d18c001b0e05832f069264bc832e3c7c2a2cac5269d2640c520a17c\"" Feb 14 00:22:41.733386 containerd[1503]: time="2025-02-14T00:22:41.732166935Z" level=info msg="StartContainer for \"acdae1a84d18c001b0e05832f069264bc832e3c7c2a2cac5269d2640c520a17c\"" Feb 14 00:22:41.739624 containerd[1503]: time="2025-02-14T00:22:41.739324078Z" level=info msg="CreateContainer within sandbox \"00dd29b28d6af20162619c6c7d1668221f1052bf00ff5f421fe212bab004c203\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"475294c67158f99b7f89a8cbb1a87235d5d17326f34b4d840884d225a9ba683a\"" Feb 14 00:22:41.742773 containerd[1503]: time="2025-02-14T00:22:41.742730946Z" level=info msg="StartContainer for \"475294c67158f99b7f89a8cbb1a87235d5d17326f34b4d840884d225a9ba683a\"" Feb 14 00:22:41.783085 systemd[1]: Started cri-containerd-acdae1a84d18c001b0e05832f069264bc832e3c7c2a2cac5269d2640c520a17c.scope - libcontainer container acdae1a84d18c001b0e05832f069264bc832e3c7c2a2cac5269d2640c520a17c. Feb 14 00:22:41.821647 systemd[1]: Started cri-containerd-475294c67158f99b7f89a8cbb1a87235d5d17326f34b4d840884d225a9ba683a.scope - libcontainer container 475294c67158f99b7f89a8cbb1a87235d5d17326f34b4d840884d225a9ba683a. Feb 14 00:22:41.883767 containerd[1503]: time="2025-02-14T00:22:41.883707843Z" level=info msg="StartContainer for \"acdae1a84d18c001b0e05832f069264bc832e3c7c2a2cac5269d2640c520a17c\" returns successfully" Feb 14 00:22:41.898258 containerd[1503]: time="2025-02-14T00:22:41.898084971Z" level=info msg="StartContainer for \"475294c67158f99b7f89a8cbb1a87235d5d17326f34b4d840884d225a9ba683a\" returns successfully" Feb 14 00:22:42.449878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount813973874.mount: Deactivated successfully. 
Feb 14 00:22:42.866130 kubelet[2729]: I0214 00:22:42.865025 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-46vxq" podStartSLOduration=28.864998742 podStartE2EDuration="28.864998742s" podCreationTimestamp="2025-02-14 00:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:22:42.864583005 +0000 UTC m=+43.412426752" watchObservedRunningTime="2025-02-14 00:22:42.864998742 +0000 UTC m=+43.412842482" Feb 14 00:22:42.881451 kubelet[2729]: I0214 00:22:42.881375 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9xnfr" podStartSLOduration=28.881329335 podStartE2EDuration="28.881329335s" podCreationTimestamp="2025-02-14 00:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:22:42.881045075 +0000 UTC m=+43.428888824" watchObservedRunningTime="2025-02-14 00:22:42.881329335 +0000 UTC m=+43.429173075" Feb 14 00:22:48.329903 systemd[1]: Started sshd@11-10.230.16.158:22-202.72.235.223:39552.service - OpenSSH per-connection server daemon (202.72.235.223:39552). Feb 14 00:22:49.503428 sshd[4101]: Invalid user toto from 202.72.235.223 port 39552 Feb 14 00:22:49.727392 sshd[4101]: Received disconnect from 202.72.235.223 port 39552:11: Bye Bye [preauth] Feb 14 00:22:49.727392 sshd[4101]: Disconnected from invalid user toto 202.72.235.223 port 39552 [preauth] Feb 14 00:22:49.729571 systemd[1]: sshd@11-10.230.16.158:22-202.72.235.223:39552.service: Deactivated successfully. Feb 14 00:23:04.612381 systemd[1]: Started sshd@12-10.230.16.158:22-147.75.109.163:43256.service - OpenSSH per-connection server daemon (147.75.109.163:43256). 
Feb 14 00:23:05.512383 sshd[4110]: Accepted publickey for core from 147.75.109.163 port 43256 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:23:05.517842 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:23:05.526271 systemd-logind[1482]: New session 12 of user core. Feb 14 00:23:05.531597 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 14 00:23:06.667842 sshd[4110]: pam_unix(sshd:session): session closed for user core Feb 14 00:23:06.673803 systemd[1]: sshd@12-10.230.16.158:22-147.75.109.163:43256.service: Deactivated successfully. Feb 14 00:23:06.676614 systemd[1]: session-12.scope: Deactivated successfully. Feb 14 00:23:06.678150 systemd-logind[1482]: Session 12 logged out. Waiting for processes to exit. Feb 14 00:23:06.680013 systemd-logind[1482]: Removed session 12. Feb 14 00:23:11.836740 systemd[1]: Started sshd@13-10.230.16.158:22-147.75.109.163:59136.service - OpenSSH per-connection server daemon (147.75.109.163:59136). Feb 14 00:23:12.746628 sshd[4124]: Accepted publickey for core from 147.75.109.163 port 59136 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:23:12.751094 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:23:12.758800 systemd-logind[1482]: New session 13 of user core. Feb 14 00:23:12.766574 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 14 00:23:13.488161 sshd[4124]: pam_unix(sshd:session): session closed for user core Feb 14 00:23:13.493506 systemd-logind[1482]: Session 13 logged out. Waiting for processes to exit. Feb 14 00:23:13.493950 systemd[1]: sshd@13-10.230.16.158:22-147.75.109.163:59136.service: Deactivated successfully. Feb 14 00:23:13.496926 systemd[1]: session-13.scope: Deactivated successfully. Feb 14 00:23:13.499758 systemd-logind[1482]: Removed session 13. 
Feb 14 00:23:18.646846 systemd[1]: Started sshd@14-10.230.16.158:22-147.75.109.163:59148.service - OpenSSH per-connection server daemon (147.75.109.163:59148). Feb 14 00:23:19.563546 sshd[4139]: Accepted publickey for core from 147.75.109.163 port 59148 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:23:19.565763 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:23:19.572940 systemd-logind[1482]: New session 14 of user core. Feb 14 00:23:19.579570 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 14 00:23:20.290493 sshd[4139]: pam_unix(sshd:session): session closed for user core Feb 14 00:23:20.294744 systemd-logind[1482]: Session 14 logged out. Waiting for processes to exit. Feb 14 00:23:20.295056 systemd[1]: sshd@14-10.230.16.158:22-147.75.109.163:59148.service: Deactivated successfully. Feb 14 00:23:20.298967 systemd[1]: session-14.scope: Deactivated successfully. Feb 14 00:23:20.301441 systemd-logind[1482]: Removed session 14. Feb 14 00:23:25.453729 systemd[1]: Started sshd@15-10.230.16.158:22-147.75.109.163:56424.service - OpenSSH per-connection server daemon (147.75.109.163:56424). Feb 14 00:23:26.371458 sshd[4153]: Accepted publickey for core from 147.75.109.163 port 56424 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:23:26.376668 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:23:26.384374 systemd-logind[1482]: New session 15 of user core. Feb 14 00:23:26.389577 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 14 00:23:27.093914 sshd[4153]: pam_unix(sshd:session): session closed for user core Feb 14 00:23:27.099551 systemd[1]: sshd@15-10.230.16.158:22-147.75.109.163:56424.service: Deactivated successfully. Feb 14 00:23:27.102016 systemd[1]: session-15.scope: Deactivated successfully. Feb 14 00:23:27.103156 systemd-logind[1482]: Session 15 logged out. 
Waiting for processes to exit. Feb 14 00:23:27.106106 systemd-logind[1482]: Removed session 15. Feb 14 00:23:27.260986 systemd[1]: Started sshd@16-10.230.16.158:22-147.75.109.163:56438.service - OpenSSH per-connection server daemon (147.75.109.163:56438). Feb 14 00:23:28.149759 sshd[4167]: Accepted publickey for core from 147.75.109.163 port 56438 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:23:28.152336 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:23:28.159632 systemd-logind[1482]: New session 16 of user core. Feb 14 00:23:28.171593 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 14 00:23:28.949726 sshd[4167]: pam_unix(sshd:session): session closed for user core Feb 14 00:23:28.955434 systemd[1]: sshd@16-10.230.16.158:22-147.75.109.163:56438.service: Deactivated successfully. Feb 14 00:23:28.958599 systemd[1]: session-16.scope: Deactivated successfully. Feb 14 00:23:28.960434 systemd-logind[1482]: Session 16 logged out. Waiting for processes to exit. Feb 14 00:23:28.962192 systemd-logind[1482]: Removed session 16. Feb 14 00:23:29.115430 systemd[1]: Started sshd@17-10.230.16.158:22-147.75.109.163:56448.service - OpenSSH per-connection server daemon (147.75.109.163:56448). Feb 14 00:23:30.014197 sshd[4178]: Accepted publickey for core from 147.75.109.163 port 56448 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:23:30.016362 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:23:30.024684 systemd-logind[1482]: New session 17 of user core. Feb 14 00:23:30.030074 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 14 00:23:30.730619 systemd[1]: Started sshd@18-10.230.16.158:22-202.72.235.223:51548.service - OpenSSH per-connection server daemon (202.72.235.223:51548). 
Feb 14 00:23:30.740170 sshd[4178]: pam_unix(sshd:session): session closed for user core Feb 14 00:23:30.745656 systemd[1]: sshd@17-10.230.16.158:22-147.75.109.163:56448.service: Deactivated successfully. Feb 14 00:23:30.749935 systemd[1]: session-17.scope: Deactivated successfully. Feb 14 00:23:30.751566 systemd-logind[1482]: Session 17 logged out. Waiting for processes to exit. Feb 14 00:23:30.753985 systemd-logind[1482]: Removed session 17. Feb 14 00:23:31.871840 sshd[4188]: Invalid user raju from 202.72.235.223 port 51548 Feb 14 00:23:32.086918 sshd[4188]: Received disconnect from 202.72.235.223 port 51548:11: Bye Bye [preauth] Feb 14 00:23:32.086918 sshd[4188]: Disconnected from invalid user raju 202.72.235.223 port 51548 [preauth] Feb 14 00:23:32.088695 systemd[1]: sshd@18-10.230.16.158:22-202.72.235.223:51548.service: Deactivated successfully. Feb 14 00:23:35.896857 systemd[1]: Started sshd@19-10.230.16.158:22-147.75.109.163:52914.service - OpenSSH per-connection server daemon (147.75.109.163:52914). Feb 14 00:23:36.790995 sshd[4195]: Accepted publickey for core from 147.75.109.163 port 52914 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:23:36.793147 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:23:36.800596 systemd-logind[1482]: New session 18 of user core. Feb 14 00:23:36.805571 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 14 00:23:37.512293 sshd[4195]: pam_unix(sshd:session): session closed for user core Feb 14 00:23:37.517942 systemd[1]: sshd@19-10.230.16.158:22-147.75.109.163:52914.service: Deactivated successfully. Feb 14 00:23:37.520422 systemd[1]: session-18.scope: Deactivated successfully. Feb 14 00:23:37.521727 systemd-logind[1482]: Session 18 logged out. Waiting for processes to exit. Feb 14 00:23:37.523668 systemd-logind[1482]: Removed session 18. 
Feb 14 00:23:42.675756 systemd[1]: Started sshd@20-10.230.16.158:22-147.75.109.163:40182.service - OpenSSH per-connection server daemon (147.75.109.163:40182). Feb 14 00:23:43.584256 sshd[4209]: Accepted publickey for core from 147.75.109.163 port 40182 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:23:43.586661 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:23:43.593484 systemd-logind[1482]: New session 19 of user core. Feb 14 00:23:43.599687 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 14 00:23:44.297756 sshd[4209]: pam_unix(sshd:session): session closed for user core Feb 14 00:23:44.303216 systemd[1]: sshd@20-10.230.16.158:22-147.75.109.163:40182.service: Deactivated successfully. Feb 14 00:23:44.305816 systemd[1]: session-19.scope: Deactivated successfully. Feb 14 00:23:44.307400 systemd-logind[1482]: Session 19 logged out. Waiting for processes to exit. Feb 14 00:23:44.309268 systemd-logind[1482]: Removed session 19. Feb 14 00:23:44.454725 systemd[1]: Started sshd@21-10.230.16.158:22-147.75.109.163:40196.service - OpenSSH per-connection server daemon (147.75.109.163:40196). Feb 14 00:23:45.362998 sshd[4222]: Accepted publickey for core from 147.75.109.163 port 40196 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:23:45.365245 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:23:45.372996 systemd-logind[1482]: New session 20 of user core. Feb 14 00:23:45.379563 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 14 00:23:46.461447 sshd[4222]: pam_unix(sshd:session): session closed for user core Feb 14 00:23:46.471613 systemd[1]: sshd@21-10.230.16.158:22-147.75.109.163:40196.service: Deactivated successfully. Feb 14 00:23:46.475096 systemd[1]: session-20.scope: Deactivated successfully. Feb 14 00:23:46.477988 systemd-logind[1482]: Session 20 logged out. 
Waiting for processes to exit. Feb 14 00:23:46.480106 systemd-logind[1482]: Removed session 20. Feb 14 00:23:46.618725 systemd[1]: Started sshd@22-10.230.16.158:22-147.75.109.163:40200.service - OpenSSH per-connection server daemon (147.75.109.163:40200). Feb 14 00:23:47.521564 sshd[4233]: Accepted publickey for core from 147.75.109.163 port 40200 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:23:47.523903 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:23:47.531072 systemd-logind[1482]: New session 21 of user core. Feb 14 00:23:47.538629 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 14 00:23:50.508805 sshd[4233]: pam_unix(sshd:session): session closed for user core Feb 14 00:23:50.514699 systemd-logind[1482]: Session 21 logged out. Waiting for processes to exit. Feb 14 00:23:50.515900 systemd[1]: sshd@22-10.230.16.158:22-147.75.109.163:40200.service: Deactivated successfully. Feb 14 00:23:50.518162 systemd[1]: session-21.scope: Deactivated successfully. Feb 14 00:23:50.519967 systemd-logind[1482]: Removed session 21. Feb 14 00:23:50.667733 systemd[1]: Started sshd@23-10.230.16.158:22-147.75.109.163:33774.service - OpenSSH per-connection server daemon (147.75.109.163:33774). Feb 14 00:23:51.568702 sshd[4254]: Accepted publickey for core from 147.75.109.163 port 33774 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:23:51.571081 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:23:51.577621 systemd-logind[1482]: New session 22 of user core. Feb 14 00:23:51.587641 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 14 00:23:52.574642 sshd[4254]: pam_unix(sshd:session): session closed for user core Feb 14 00:23:52.579451 systemd-logind[1482]: Session 22 logged out. Waiting for processes to exit. 
Feb 14 00:23:52.580410 systemd[1]: sshd@23-10.230.16.158:22-147.75.109.163:33774.service: Deactivated successfully. Feb 14 00:23:52.583116 systemd[1]: session-22.scope: Deactivated successfully. Feb 14 00:23:52.586059 systemd-logind[1482]: Removed session 22. Feb 14 00:23:52.733716 systemd[1]: Started sshd@24-10.230.16.158:22-147.75.109.163:33790.service - OpenSSH per-connection server daemon (147.75.109.163:33790). Feb 14 00:23:53.627173 sshd[4267]: Accepted publickey for core from 147.75.109.163 port 33790 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:23:53.629490 sshd[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:23:53.636867 systemd-logind[1482]: New session 23 of user core. Feb 14 00:23:53.646762 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 14 00:23:54.334506 sshd[4267]: pam_unix(sshd:session): session closed for user core Feb 14 00:23:54.338603 systemd[1]: sshd@24-10.230.16.158:22-147.75.109.163:33790.service: Deactivated successfully. Feb 14 00:23:54.341408 systemd[1]: session-23.scope: Deactivated successfully. Feb 14 00:23:54.343222 systemd-logind[1482]: Session 23 logged out. Waiting for processes to exit. Feb 14 00:23:54.345198 systemd-logind[1482]: Removed session 23. Feb 14 00:23:59.503825 systemd[1]: Started sshd@25-10.230.16.158:22-147.75.109.163:56980.service - OpenSSH per-connection server daemon (147.75.109.163:56980). Feb 14 00:24:00.408126 sshd[4283]: Accepted publickey for core from 147.75.109.163 port 56980 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:24:00.410383 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:24:00.417580 systemd-logind[1482]: New session 24 of user core. Feb 14 00:24:00.425596 systemd[1]: Started session-24.scope - Session 24 of User core. 
Feb 14 00:24:01.113410 sshd[4283]: pam_unix(sshd:session): session closed for user core Feb 14 00:24:01.118166 systemd-logind[1482]: Session 24 logged out. Waiting for processes to exit. Feb 14 00:24:01.118776 systemd[1]: sshd@25-10.230.16.158:22-147.75.109.163:56980.service: Deactivated successfully. Feb 14 00:24:01.121640 systemd[1]: session-24.scope: Deactivated successfully. Feb 14 00:24:01.125132 systemd-logind[1482]: Removed session 24. Feb 14 00:24:06.277876 systemd[1]: Started sshd@26-10.230.16.158:22-147.75.109.163:56986.service - OpenSSH per-connection server daemon (147.75.109.163:56986). Feb 14 00:24:07.163813 sshd[4297]: Accepted publickey for core from 147.75.109.163 port 56986 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:24:07.166071 sshd[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:24:07.173487 systemd-logind[1482]: New session 25 of user core. Feb 14 00:24:07.180584 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 14 00:24:07.865235 sshd[4297]: pam_unix(sshd:session): session closed for user core Feb 14 00:24:07.869292 systemd[1]: sshd@26-10.230.16.158:22-147.75.109.163:56986.service: Deactivated successfully. Feb 14 00:24:07.872291 systemd[1]: session-25.scope: Deactivated successfully. Feb 14 00:24:07.875180 systemd-logind[1482]: Session 25 logged out. Waiting for processes to exit. Feb 14 00:24:07.877096 systemd-logind[1482]: Removed session 25. Feb 14 00:24:12.481736 systemd[1]: Started sshd@27-10.230.16.158:22-202.72.235.223:35316.service - OpenSSH per-connection server daemon (202.72.235.223:35316). Feb 14 00:24:13.026975 systemd[1]: Started sshd@28-10.230.16.158:22-147.75.109.163:35418.service - OpenSSH per-connection server daemon (147.75.109.163:35418). 
Feb 14 00:24:13.641072 sshd[4310]: Invalid user casino from 202.72.235.223 port 35316 Feb 14 00:24:13.864949 sshd[4310]: Received disconnect from 202.72.235.223 port 35316:11: Bye Bye [preauth] Feb 14 00:24:13.864949 sshd[4310]: Disconnected from invalid user casino 202.72.235.223 port 35316 [preauth] Feb 14 00:24:13.868280 systemd[1]: sshd@27-10.230.16.158:22-202.72.235.223:35316.service: Deactivated successfully. Feb 14 00:24:13.932513 sshd[4313]: Accepted publickey for core from 147.75.109.163 port 35418 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:24:13.935066 sshd[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:24:13.942436 systemd-logind[1482]: New session 26 of user core. Feb 14 00:24:13.949580 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 14 00:24:14.636131 sshd[4313]: pam_unix(sshd:session): session closed for user core Feb 14 00:24:14.641641 systemd[1]: sshd@28-10.230.16.158:22-147.75.109.163:35418.service: Deactivated successfully. Feb 14 00:24:14.643989 systemd[1]: session-26.scope: Deactivated successfully. Feb 14 00:24:14.645142 systemd-logind[1482]: Session 26 logged out. Waiting for processes to exit. Feb 14 00:24:14.646872 systemd-logind[1482]: Removed session 26. Feb 14 00:24:14.802015 systemd[1]: Started sshd@29-10.230.16.158:22-147.75.109.163:35434.service - OpenSSH per-connection server daemon (147.75.109.163:35434). Feb 14 00:24:15.697459 sshd[4328]: Accepted publickey for core from 147.75.109.163 port 35434 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:24:15.699984 sshd[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:24:15.707669 systemd-logind[1482]: New session 27 of user core. Feb 14 00:24:15.713601 systemd[1]: Started session-27.scope - Session 27 of User core. 
Feb 14 00:24:18.303536 containerd[1503]: time="2025-02-14T00:24:18.303399339Z" level=info msg="StopContainer for \"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164\" with timeout 30 (s)" Feb 14 00:24:18.309447 containerd[1503]: time="2025-02-14T00:24:18.309406152Z" level=info msg="Stop container \"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164\" with signal terminated" Feb 14 00:24:18.372556 systemd[1]: cri-containerd-f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164.scope: Deactivated successfully. Feb 14 00:24:18.423522 containerd[1503]: time="2025-02-14T00:24:18.421896412Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 14 00:24:18.427592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164-rootfs.mount: Deactivated successfully. 
Feb 14 00:24:18.438915 containerd[1503]: time="2025-02-14T00:24:18.437858575Z" level=info msg="shim disconnected" id=f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164 namespace=k8s.io Feb 14 00:24:18.438915 containerd[1503]: time="2025-02-14T00:24:18.438032655Z" level=warning msg="cleaning up after shim disconnected" id=f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164 namespace=k8s.io Feb 14 00:24:18.438915 containerd[1503]: time="2025-02-14T00:24:18.438084483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 00:24:18.461280 containerd[1503]: time="2025-02-14T00:24:18.461209508Z" level=info msg="StopContainer for \"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752\" with timeout 2 (s)" Feb 14 00:24:18.462066 containerd[1503]: time="2025-02-14T00:24:18.461969964Z" level=info msg="Stop container \"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752\" with signal terminated" Feb 14 00:24:18.472676 containerd[1503]: time="2025-02-14T00:24:18.472592041Z" level=info msg="StopContainer for \"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164\" returns successfully" Feb 14 00:24:18.474782 containerd[1503]: time="2025-02-14T00:24:18.474647937Z" level=info msg="StopPodSandbox for \"547e07fb96ca04c924d479e5bb16b7a373f291b80fc21fb16176e1abd50cb985\"" Feb 14 00:24:18.475049 containerd[1503]: time="2025-02-14T00:24:18.474920455Z" level=info msg="Container to stop \"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 14 00:24:18.478242 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-547e07fb96ca04c924d479e5bb16b7a373f291b80fc21fb16176e1abd50cb985-shm.mount: Deactivated successfully. 
Feb 14 00:24:18.482532 systemd-networkd[1430]: lxc_health: Link DOWN Feb 14 00:24:18.482544 systemd-networkd[1430]: lxc_health: Lost carrier Feb 14 00:24:18.499970 systemd[1]: cri-containerd-547e07fb96ca04c924d479e5bb16b7a373f291b80fc21fb16176e1abd50cb985.scope: Deactivated successfully. Feb 14 00:24:18.512110 systemd[1]: cri-containerd-8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752.scope: Deactivated successfully. Feb 14 00:24:18.512833 systemd[1]: cri-containerd-8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752.scope: Consumed 10.644s CPU time. Feb 14 00:24:18.564006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-547e07fb96ca04c924d479e5bb16b7a373f291b80fc21fb16176e1abd50cb985-rootfs.mount: Deactivated successfully. Feb 14 00:24:18.568988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752-rootfs.mount: Deactivated successfully. Feb 14 00:24:18.574391 containerd[1503]: time="2025-02-14T00:24:18.573908097Z" level=info msg="shim disconnected" id=547e07fb96ca04c924d479e5bb16b7a373f291b80fc21fb16176e1abd50cb985 namespace=k8s.io Feb 14 00:24:18.574391 containerd[1503]: time="2025-02-14T00:24:18.574011075Z" level=warning msg="cleaning up after shim disconnected" id=547e07fb96ca04c924d479e5bb16b7a373f291b80fc21fb16176e1abd50cb985 namespace=k8s.io Feb 14 00:24:18.574391 containerd[1503]: time="2025-02-14T00:24:18.574027792Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 00:24:18.575812 containerd[1503]: time="2025-02-14T00:24:18.575518401Z" level=info msg="shim disconnected" id=8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752 namespace=k8s.io Feb 14 00:24:18.575812 containerd[1503]: time="2025-02-14T00:24:18.575566018Z" level=warning msg="cleaning up after shim disconnected" id=8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752 namespace=k8s.io Feb 14 00:24:18.575812 containerd[1503]: 
time="2025-02-14T00:24:18.575583853Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 00:24:18.604325 containerd[1503]: time="2025-02-14T00:24:18.604251060Z" level=info msg="StopContainer for \"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752\" returns successfully" Feb 14 00:24:18.604325 containerd[1503]: time="2025-02-14T00:24:18.604965138Z" level=info msg="StopPodSandbox for \"56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2\"" Feb 14 00:24:18.604325 containerd[1503]: time="2025-02-14T00:24:18.605005040Z" level=info msg="Container to stop \"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 14 00:24:18.604325 containerd[1503]: time="2025-02-14T00:24:18.605024663Z" level=info msg="Container to stop \"5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 14 00:24:18.604325 containerd[1503]: time="2025-02-14T00:24:18.605040245Z" level=info msg="Container to stop \"898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 14 00:24:18.604325 containerd[1503]: time="2025-02-14T00:24:18.605056009Z" level=info msg="Container to stop \"5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 14 00:24:18.604325 containerd[1503]: time="2025-02-14T00:24:18.605070603Z" level=info msg="Container to stop \"30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 14 00:24:18.617711 containerd[1503]: time="2025-02-14T00:24:18.617624162Z" level=info msg="TearDown network for sandbox \"547e07fb96ca04c924d479e5bb16b7a373f291b80fc21fb16176e1abd50cb985\" successfully" Feb 14 00:24:18.617930 
containerd[1503]: time="2025-02-14T00:24:18.617900912Z" level=info msg="StopPodSandbox for \"547e07fb96ca04c924d479e5bb16b7a373f291b80fc21fb16176e1abd50cb985\" returns successfully" Feb 14 00:24:18.618970 systemd[1]: cri-containerd-56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2.scope: Deactivated successfully. Feb 14 00:24:18.659872 containerd[1503]: time="2025-02-14T00:24:18.659645247Z" level=info msg="shim disconnected" id=56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2 namespace=k8s.io Feb 14 00:24:18.660190 containerd[1503]: time="2025-02-14T00:24:18.659875984Z" level=warning msg="cleaning up after shim disconnected" id=56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2 namespace=k8s.io Feb 14 00:24:18.660190 containerd[1503]: time="2025-02-14T00:24:18.659930726Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 00:24:18.681045 containerd[1503]: time="2025-02-14T00:24:18.680925766Z" level=warning msg="cleanup warnings time=\"2025-02-14T00:24:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 14 00:24:18.682878 containerd[1503]: time="2025-02-14T00:24:18.682817141Z" level=info msg="TearDown network for sandbox \"56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2\" successfully" Feb 14 00:24:18.683046 containerd[1503]: time="2025-02-14T00:24:18.682858437Z" level=info msg="StopPodSandbox for \"56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2\" returns successfully" Feb 14 00:24:18.821612 kubelet[2729]: I0214 00:24:18.819892 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" (UID: "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 00:24:18.821612 kubelet[2729]: I0214 00:24:18.821646 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-etc-cni-netd\") pod \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " Feb 14 00:24:18.824016 kubelet[2729]: I0214 00:24:18.821728 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-clustermesh-secrets\") pod \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " Feb 14 00:24:18.824016 kubelet[2729]: I0214 00:24:18.821765 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5539248e-3950-44cf-a782-76b5d5c13db3-cilium-config-path\") pod \"5539248e-3950-44cf-a782-76b5d5c13db3\" (UID: \"5539248e-3950-44cf-a782-76b5d5c13db3\") " Feb 14 00:24:18.824016 kubelet[2729]: I0214 00:24:18.821797 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvx96\" (UniqueName: \"kubernetes.io/projected/5539248e-3950-44cf-a782-76b5d5c13db3-kube-api-access-hvx96\") pod \"5539248e-3950-44cf-a782-76b5d5c13db3\" (UID: \"5539248e-3950-44cf-a782-76b5d5c13db3\") " Feb 14 00:24:18.824016 kubelet[2729]: I0214 00:24:18.821822 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-hostproc\") pod \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " Feb 14 00:24:18.824016 kubelet[2729]: I0214 00:24:18.821849 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cilium-run\") pod \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " Feb 14 00:24:18.824016 kubelet[2729]: I0214 00:24:18.821876 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-hubble-tls\") pod \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " Feb 14 00:24:18.824287 kubelet[2729]: I0214 00:24:18.821900 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-bpf-maps\") pod \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " Feb 14 00:24:18.824287 kubelet[2729]: I0214 00:24:18.821924 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-xtables-lock\") pod \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " Feb 14 00:24:18.824287 kubelet[2729]: I0214 00:24:18.821961 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cilium-cgroup\") pod \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " Feb 14 00:24:18.824287 kubelet[2729]: I0214 00:24:18.821987 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cni-path\") pod \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " Feb 14 00:24:18.824287 kubelet[2729]: I0214 00:24:18.822021 2729 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-4tvhw\" (UniqueName: \"kubernetes.io/projected/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-kube-api-access-4tvhw\") pod \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " Feb 14 00:24:18.824287 kubelet[2729]: I0214 00:24:18.822047 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-host-proc-sys-net\") pod \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " Feb 14 00:24:18.825048 kubelet[2729]: I0214 00:24:18.822072 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-lib-modules\") pod \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " Feb 14 00:24:18.825048 kubelet[2729]: I0214 00:24:18.822112 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cilium-config-path\") pod \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " Feb 14 00:24:18.825048 kubelet[2729]: I0214 00:24:18.822153 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-host-proc-sys-kernel\") pod \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\" (UID: \"d5be48cb-2d0c-4ac8-8fbe-2270b551cd90\") " Feb 14 00:24:18.825048 kubelet[2729]: I0214 00:24:18.823904 2729 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-etc-cni-netd\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 
00:24:18.825048 kubelet[2729]: I0214 00:24:18.824710 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" (UID: "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 00:24:18.839040 kubelet[2729]: I0214 00:24:18.838523 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-hostproc" (OuterVolumeSpecName: "hostproc") pod "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" (UID: "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 00:24:18.839040 kubelet[2729]: I0214 00:24:18.838649 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" (UID: "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 00:24:18.839604 kubelet[2729]: I0214 00:24:18.839572 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" (UID: "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 00:24:18.839793 kubelet[2729]: I0214 00:24:18.839762 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" (UID: "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 00:24:18.839857 kubelet[2729]: I0214 00:24:18.839825 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cni-path" (OuterVolumeSpecName: "cni-path") pod "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" (UID: "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 00:24:18.840102 kubelet[2729]: I0214 00:24:18.840063 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" (UID: "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 00:24:18.840242 kubelet[2729]: I0214 00:24:18.840216 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" (UID: "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 00:24:18.847381 kubelet[2729]: I0214 00:24:18.845984 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5539248e-3950-44cf-a782-76b5d5c13db3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5539248e-3950-44cf-a782-76b5d5c13db3" (UID: "5539248e-3950-44cf-a782-76b5d5c13db3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 00:24:18.848362 kubelet[2729]: I0214 00:24:18.848198 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" (UID: "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 00:24:18.850545 kubelet[2729]: I0214 00:24:18.850511 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" (UID: "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 00:24:18.851684 kubelet[2729]: I0214 00:24:18.850736 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5539248e-3950-44cf-a782-76b5d5c13db3-kube-api-access-hvx96" (OuterVolumeSpecName: "kube-api-access-hvx96") pod "5539248e-3950-44cf-a782-76b5d5c13db3" (UID: "5539248e-3950-44cf-a782-76b5d5c13db3"). InnerVolumeSpecName "kube-api-access-hvx96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 00:24:18.858048 kubelet[2729]: I0214 00:24:18.857991 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-kube-api-access-4tvhw" (OuterVolumeSpecName: "kube-api-access-4tvhw") pod "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" (UID: "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90"). InnerVolumeSpecName "kube-api-access-4tvhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 00:24:18.858453 kubelet[2729]: I0214 00:24:18.858410 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" (UID: "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 00:24:18.861301 kubelet[2729]: I0214 00:24:18.861254 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" (UID: "d5be48cb-2d0c-4ac8-8fbe-2270b551cd90"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 00:24:18.924390 kubelet[2729]: I0214 00:24:18.924303 2729 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cilium-cgroup\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:18.924390 kubelet[2729]: I0214 00:24:18.924393 2729 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cni-path\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:18.924663 kubelet[2729]: I0214 00:24:18.924416 2729 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4tvhw\" (UniqueName: \"kubernetes.io/projected/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-kube-api-access-4tvhw\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:18.924663 kubelet[2729]: I0214 00:24:18.924438 2729 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-host-proc-sys-net\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:18.924663 kubelet[2729]: I0214 00:24:18.924458 2729 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cilium-config-path\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:18.924663 kubelet[2729]: I0214 00:24:18.924474 2729 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-host-proc-sys-kernel\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:18.924663 kubelet[2729]: I0214 00:24:18.924490 2729 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-lib-modules\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:18.924663 kubelet[2729]: I0214 00:24:18.924504 2729 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-clustermesh-secrets\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:18.924663 kubelet[2729]: I0214 00:24:18.924521 2729 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5539248e-3950-44cf-a782-76b5d5c13db3-cilium-config-path\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:18.925562 kubelet[2729]: I0214 00:24:18.924536 2729 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hvx96\" (UniqueName: \"kubernetes.io/projected/5539248e-3950-44cf-a782-76b5d5c13db3-kube-api-access-hvx96\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:18.925562 kubelet[2729]: I0214 00:24:18.924551 2729 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-hostproc\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:18.925562 kubelet[2729]: I0214 00:24:18.924566 2729 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-cilium-run\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:18.925562 kubelet[2729]: I0214 00:24:18.924580 2729 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-hubble-tls\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:18.925562 kubelet[2729]: I0214 00:24:18.924595 2729 reconciler_common.go:289] "Volume detached for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-bpf-maps\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:18.925562 kubelet[2729]: I0214 00:24:18.924609 2729 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90-xtables-lock\") on node \"srv-skbpq.gb1.brightbox.com\" DevicePath \"\"" Feb 14 00:24:19.111193 kubelet[2729]: I0214 00:24:19.111057 2729 scope.go:117] "RemoveContainer" containerID="f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164" Feb 14 00:24:19.119106 containerd[1503]: time="2025-02-14T00:24:19.118544527Z" level=info msg="RemoveContainer for \"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164\"" Feb 14 00:24:19.128528 containerd[1503]: time="2025-02-14T00:24:19.127993396Z" level=info msg="RemoveContainer for \"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164\" returns successfully" Feb 14 00:24:19.128779 systemd[1]: Removed slice kubepods-besteffort-pod5539248e_3950_44cf_a782_76b5d5c13db3.slice - libcontainer container kubepods-besteffort-pod5539248e_3950_44cf_a782_76b5d5c13db3.slice. Feb 14 00:24:19.129957 kubelet[2729]: I0214 00:24:19.128448 2729 scope.go:117] "RemoveContainer" containerID="f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164" Feb 14 00:24:19.131440 systemd[1]: Removed slice kubepods-burstable-podd5be48cb_2d0c_4ac8_8fbe_2270b551cd90.slice - libcontainer container kubepods-burstable-podd5be48cb_2d0c_4ac8_8fbe_2270b551cd90.slice. Feb 14 00:24:19.131841 systemd[1]: kubepods-burstable-podd5be48cb_2d0c_4ac8_8fbe_2270b551cd90.slice: Consumed 10.770s CPU time. 
Feb 14 00:24:19.142800 containerd[1503]: time="2025-02-14T00:24:19.133949059Z" level=error msg="ContainerStatus for \"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164\": not found" Feb 14 00:24:19.153366 kubelet[2729]: E0214 00:24:19.152997 2729 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164\": not found" containerID="f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164" Feb 14 00:24:19.153366 kubelet[2729]: I0214 00:24:19.153102 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164"} err="failed to get container status \"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3adf0e4d672217bd7ad79e3314ed3a4381f97c52f87259cf9e0e1121aa32164\": not found" Feb 14 00:24:19.153366 kubelet[2729]: I0214 00:24:19.153223 2729 scope.go:117] "RemoveContainer" containerID="8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752" Feb 14 00:24:19.161924 containerd[1503]: time="2025-02-14T00:24:19.161874081Z" level=info msg="RemoveContainer for \"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752\"" Feb 14 00:24:19.165963 containerd[1503]: time="2025-02-14T00:24:19.165833750Z" level=info msg="RemoveContainer for \"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752\" returns successfully" Feb 14 00:24:19.166059 kubelet[2729]: I0214 00:24:19.166027 2729 scope.go:117] "RemoveContainer" containerID="30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50" Feb 14 
00:24:19.167727 containerd[1503]: time="2025-02-14T00:24:19.167675815Z" level=info msg="RemoveContainer for \"30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50\"" Feb 14 00:24:19.170483 containerd[1503]: time="2025-02-14T00:24:19.170439089Z" level=info msg="RemoveContainer for \"30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50\" returns successfully" Feb 14 00:24:19.170714 kubelet[2729]: I0214 00:24:19.170631 2729 scope.go:117] "RemoveContainer" containerID="5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2" Feb 14 00:24:19.174164 containerd[1503]: time="2025-02-14T00:24:19.174132484Z" level=info msg="RemoveContainer for \"5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2\"" Feb 14 00:24:19.179836 containerd[1503]: time="2025-02-14T00:24:19.179776277Z" level=info msg="RemoveContainer for \"5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2\" returns successfully" Feb 14 00:24:19.180173 kubelet[2729]: I0214 00:24:19.180136 2729 scope.go:117] "RemoveContainer" containerID="898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf" Feb 14 00:24:19.181926 containerd[1503]: time="2025-02-14T00:24:19.181537550Z" level=info msg="RemoveContainer for \"898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf\"" Feb 14 00:24:19.184551 containerd[1503]: time="2025-02-14T00:24:19.184430331Z" level=info msg="RemoveContainer for \"898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf\" returns successfully" Feb 14 00:24:19.184746 kubelet[2729]: I0214 00:24:19.184719 2729 scope.go:117] "RemoveContainer" containerID="5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48" Feb 14 00:24:19.187087 containerd[1503]: time="2025-02-14T00:24:19.187054221Z" level=info msg="RemoveContainer for \"5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48\"" Feb 14 00:24:19.190356 containerd[1503]: time="2025-02-14T00:24:19.190289310Z" level=info msg="RemoveContainer for 
\"5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48\" returns successfully" Feb 14 00:24:19.190630 kubelet[2729]: I0214 00:24:19.190592 2729 scope.go:117] "RemoveContainer" containerID="8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752" Feb 14 00:24:19.190989 containerd[1503]: time="2025-02-14T00:24:19.190919260Z" level=error msg="ContainerStatus for \"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752\": not found" Feb 14 00:24:19.191357 kubelet[2729]: E0214 00:24:19.191292 2729 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752\": not found" containerID="8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752" Feb 14 00:24:19.191436 kubelet[2729]: I0214 00:24:19.191371 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752"} err="failed to get container status \"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ea755f8e9c831832f4f4942c6b1aa9df7f6625f41829747014c32aea0398752\": not found" Feb 14 00:24:19.191436 kubelet[2729]: I0214 00:24:19.191406 2729 scope.go:117] "RemoveContainer" containerID="30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50" Feb 14 00:24:19.191646 containerd[1503]: time="2025-02-14T00:24:19.191585010Z" level=error msg="ContainerStatus for \"30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50\": not found" Feb 14 00:24:19.191780 kubelet[2729]: E0214 00:24:19.191749 2729 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50\": not found" containerID="30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50" Feb 14 00:24:19.191850 kubelet[2729]: I0214 00:24:19.191789 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50"} err="failed to get container status \"30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50\": rpc error: code = NotFound desc = an error occurred when try to find container \"30816ebdf63ad308a9b66b9f199d7c3633b5020e8314e2c04919cea828f8fd50\": not found" Feb 14 00:24:19.191850 kubelet[2729]: I0214 00:24:19.191813 2729 scope.go:117] "RemoveContainer" containerID="5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2" Feb 14 00:24:19.192356 containerd[1503]: time="2025-02-14T00:24:19.192109505Z" level=error msg="ContainerStatus for \"5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2\": not found" Feb 14 00:24:19.192558 kubelet[2729]: E0214 00:24:19.192529 2729 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2\": not found" containerID="5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2" Feb 14 00:24:19.192636 kubelet[2729]: I0214 00:24:19.192566 2729 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2"} err="failed to get container status \"5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b821745f46c6632b332df679896fb654c121c6763f1519a634fa2da5ef008f2\": not found" Feb 14 00:24:19.192636 kubelet[2729]: I0214 00:24:19.192589 2729 scope.go:117] "RemoveContainer" containerID="898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf" Feb 14 00:24:19.192817 containerd[1503]: time="2025-02-14T00:24:19.192778493Z" level=error msg="ContainerStatus for \"898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf\": not found" Feb 14 00:24:19.192992 kubelet[2729]: E0214 00:24:19.192962 2729 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf\": not found" containerID="898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf" Feb 14 00:24:19.193085 kubelet[2729]: I0214 00:24:19.192992 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf"} err="failed to get container status \"898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"898d52876bed5c873d03baac3fba804fd03e3400149bb3b9b88bf424db5476cf\": not found" Feb 14 00:24:19.193185 kubelet[2729]: I0214 00:24:19.193128 2729 scope.go:117] "RemoveContainer" containerID="5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48" Feb 14 00:24:19.193636 
containerd[1503]: time="2025-02-14T00:24:19.193581850Z" level=error msg="ContainerStatus for \"5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48\": not found" Feb 14 00:24:19.193812 kubelet[2729]: E0214 00:24:19.193776 2729 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48\": not found" containerID="5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48" Feb 14 00:24:19.193891 kubelet[2729]: I0214 00:24:19.193810 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48"} err="failed to get container status \"5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48\": rpc error: code = NotFound desc = an error occurred when try to find container \"5fc709643cc71b98ba327bd3961b6e5d3cc778ec4c602f6de3fce198b1a7ba48\": not found" Feb 14 00:24:19.383713 systemd[1]: var-lib-kubelet-pods-5539248e\x2d3950\x2d44cf\x2da782\x2d76b5d5c13db3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhvx96.mount: Deactivated successfully. Feb 14 00:24:19.383905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2-rootfs.mount: Deactivated successfully. Feb 14 00:24:19.384039 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56b9bcde22e6435c0b2184f2d891ac624bbadd0e4bdd99c7d90258714f6c94d2-shm.mount: Deactivated successfully. 
Feb 14 00:24:19.384164 systemd[1]: var-lib-kubelet-pods-d5be48cb\x2d2d0c\x2d4ac8\x2d8fbe\x2d2270b551cd90-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4tvhw.mount: Deactivated successfully. Feb 14 00:24:19.384319 systemd[1]: var-lib-kubelet-pods-d5be48cb\x2d2d0c\x2d4ac8\x2d8fbe\x2d2270b551cd90-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 14 00:24:19.385768 systemd[1]: var-lib-kubelet-pods-d5be48cb\x2d2d0c\x2d4ac8\x2d8fbe\x2d2270b551cd90-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 14 00:24:19.608183 kubelet[2729]: I0214 00:24:19.608117 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5539248e-3950-44cf-a782-76b5d5c13db3" path="/var/lib/kubelet/pods/5539248e-3950-44cf-a782-76b5d5c13db3/volumes" Feb 14 00:24:19.609492 kubelet[2729]: I0214 00:24:19.609450 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" path="/var/lib/kubelet/pods/d5be48cb-2d0c-4ac8-8fbe-2270b551cd90/volumes" Feb 14 00:24:19.748028 kubelet[2729]: E0214 00:24:19.747814 2729 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 14 00:24:20.367782 sshd[4328]: pam_unix(sshd:session): session closed for user core Feb 14 00:24:20.372397 systemd[1]: sshd@29-10.230.16.158:22-147.75.109.163:35434.service: Deactivated successfully. Feb 14 00:24:20.374955 systemd[1]: session-27.scope: Deactivated successfully. Feb 14 00:24:20.375252 systemd[1]: session-27.scope: Consumed 1.469s CPU time. Feb 14 00:24:20.377213 systemd-logind[1482]: Session 27 logged out. Waiting for processes to exit. Feb 14 00:24:20.379031 systemd-logind[1482]: Removed session 27. 
Feb 14 00:24:20.530963 systemd[1]: Started sshd@30-10.230.16.158:22-147.75.109.163:38748.service - OpenSSH per-connection server daemon (147.75.109.163:38748). Feb 14 00:24:21.428761 sshd[4495]: Accepted publickey for core from 147.75.109.163 port 38748 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 14 00:24:21.431703 sshd[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:24:21.440112 systemd-logind[1482]: New session 28 of user core. Feb 14 00:24:21.452719 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 14 00:24:22.645052 kubelet[2729]: I0214 00:24:22.644985 2729 setters.go:580] "Node became not ready" node="srv-skbpq.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-14T00:24:22Z","lastTransitionTime":"2025-02-14T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 14 00:24:23.364414 kubelet[2729]: I0214 00:24:23.363168 2729 topology_manager.go:215] "Topology Admit Handler" podUID="8a5ef875-95bb-4261-93df-fb93a9db05e2" podNamespace="kube-system" podName="cilium-hwrm9" Feb 14 00:24:23.364414 kubelet[2729]: E0214 00:24:23.363338 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5539248e-3950-44cf-a782-76b5d5c13db3" containerName="cilium-operator" Feb 14 00:24:23.364414 kubelet[2729]: E0214 00:24:23.363383 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" containerName="clean-cilium-state" Feb 14 00:24:23.364414 kubelet[2729]: E0214 00:24:23.363397 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" containerName="mount-cgroup" Feb 14 00:24:23.364414 kubelet[2729]: E0214 00:24:23.363409 2729 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" containerName="apply-sysctl-overwrites" Feb 14 00:24:23.364414 kubelet[2729]: E0214 00:24:23.363419 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" containerName="mount-bpf-fs" Feb 14 00:24:23.364414 kubelet[2729]: E0214 00:24:23.363431 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" containerName="cilium-agent" Feb 14 00:24:23.369539 kubelet[2729]: I0214 00:24:23.368754 2729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5be48cb-2d0c-4ac8-8fbe-2270b551cd90" containerName="cilium-agent" Feb 14 00:24:23.369539 kubelet[2729]: I0214 00:24:23.368804 2729 memory_manager.go:354] "RemoveStaleState removing state" podUID="5539248e-3950-44cf-a782-76b5d5c13db3" containerName="cilium-operator" Feb 14 00:24:23.386289 systemd[1]: Created slice kubepods-burstable-pod8a5ef875_95bb_4261_93df_fb93a9db05e2.slice - libcontainer container kubepods-burstable-pod8a5ef875_95bb_4261_93df_fb93a9db05e2.slice. 
Feb 14 00:24:23.455374 kubelet[2729]: I0214 00:24:23.453384 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a5ef875-95bb-4261-93df-fb93a9db05e2-cni-path\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.455374 kubelet[2729]: I0214 00:24:23.453441 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8a5ef875-95bb-4261-93df-fb93a9db05e2-cilium-ipsec-secrets\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.455374 kubelet[2729]: I0214 00:24:23.453478 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a5ef875-95bb-4261-93df-fb93a9db05e2-host-proc-sys-net\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.455374 kubelet[2729]: I0214 00:24:23.453505 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a5ef875-95bb-4261-93df-fb93a9db05e2-hubble-tls\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.455374 kubelet[2729]: I0214 00:24:23.453532 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a5ef875-95bb-4261-93df-fb93a9db05e2-xtables-lock\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.455374 kubelet[2729]: I0214 00:24:23.453573 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a5ef875-95bb-4261-93df-fb93a9db05e2-bpf-maps\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.455794 kubelet[2729]: I0214 00:24:23.453599 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a5ef875-95bb-4261-93df-fb93a9db05e2-hostproc\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.455794 kubelet[2729]: I0214 00:24:23.453624 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a5ef875-95bb-4261-93df-fb93a9db05e2-etc-cni-netd\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.455794 kubelet[2729]: I0214 00:24:23.453661 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a5ef875-95bb-4261-93df-fb93a9db05e2-clustermesh-secrets\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.455794 kubelet[2729]: I0214 00:24:23.453687 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a5ef875-95bb-4261-93df-fb93a9db05e2-cilium-config-path\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.455794 kubelet[2729]: I0214 00:24:23.453725 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a5ef875-95bb-4261-93df-fb93a9db05e2-cilium-cgroup\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.455794 kubelet[2729]: I0214 00:24:23.453753 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a5ef875-95bb-4261-93df-fb93a9db05e2-lib-modules\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.456048 kubelet[2729]: I0214 00:24:23.453787 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a5ef875-95bb-4261-93df-fb93a9db05e2-host-proc-sys-kernel\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.456048 kubelet[2729]: I0214 00:24:23.453829 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a5ef875-95bb-4261-93df-fb93a9db05e2-cilium-run\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.456048 kubelet[2729]: I0214 00:24:23.453881 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clrn6\" (UniqueName: \"kubernetes.io/projected/8a5ef875-95bb-4261-93df-fb93a9db05e2-kube-api-access-clrn6\") pod \"cilium-hwrm9\" (UID: \"8a5ef875-95bb-4261-93df-fb93a9db05e2\") " pod="kube-system/cilium-hwrm9"
Feb 14 00:24:23.520731 sshd[4495]: pam_unix(sshd:session): session closed for user core
Feb 14 00:24:23.525988 systemd-logind[1482]: Session 28 logged out. Waiting for processes to exit.
Feb 14 00:24:23.526423 systemd[1]: sshd@30-10.230.16.158:22-147.75.109.163:38748.service: Deactivated successfully.
Feb 14 00:24:23.528732 systemd[1]: session-28.scope: Deactivated successfully.
Feb 14 00:24:23.529020 systemd[1]: session-28.scope: Consumed 1.359s CPU time.
Feb 14 00:24:23.530161 systemd-logind[1482]: Removed session 28.
Feb 14 00:24:23.677761 systemd[1]: Started sshd@31-10.230.16.158:22-147.75.109.163:38760.service - OpenSSH per-connection server daemon (147.75.109.163:38760).
Feb 14 00:24:23.692196 containerd[1503]: time="2025-02-14T00:24:23.692107020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hwrm9,Uid:8a5ef875-95bb-4261-93df-fb93a9db05e2,Namespace:kube-system,Attempt:0,}"
Feb 14 00:24:23.735028 containerd[1503]: time="2025-02-14T00:24:23.733416123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 14 00:24:23.735028 containerd[1503]: time="2025-02-14T00:24:23.733554101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 14 00:24:23.735028 containerd[1503]: time="2025-02-14T00:24:23.733575576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 00:24:23.735028 containerd[1503]: time="2025-02-14T00:24:23.733757160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 00:24:23.758111 systemd[1]: Started cri-containerd-ad32eb41621660457e756872afe3f165a0b669cd679701b93a60c6fdf6eb7717.scope - libcontainer container ad32eb41621660457e756872afe3f165a0b669cd679701b93a60c6fdf6eb7717.
Feb 14 00:24:23.794925 containerd[1503]: time="2025-02-14T00:24:23.794854244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hwrm9,Uid:8a5ef875-95bb-4261-93df-fb93a9db05e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad32eb41621660457e756872afe3f165a0b669cd679701b93a60c6fdf6eb7717\""
Feb 14 00:24:23.801998 containerd[1503]: time="2025-02-14T00:24:23.801759285Z" level=info msg="CreateContainer within sandbox \"ad32eb41621660457e756872afe3f165a0b669cd679701b93a60c6fdf6eb7717\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 14 00:24:23.814931 containerd[1503]: time="2025-02-14T00:24:23.814820388Z" level=info msg="CreateContainer within sandbox \"ad32eb41621660457e756872afe3f165a0b669cd679701b93a60c6fdf6eb7717\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1d8792a578015794865164a19d49314f43d21954541f31de3f99f829ccc897bb\""
Feb 14 00:24:23.816551 containerd[1503]: time="2025-02-14T00:24:23.816520672Z" level=info msg="StartContainer for \"1d8792a578015794865164a19d49314f43d21954541f31de3f99f829ccc897bb\""
Feb 14 00:24:23.857594 systemd[1]: Started cri-containerd-1d8792a578015794865164a19d49314f43d21954541f31de3f99f829ccc897bb.scope - libcontainer container 1d8792a578015794865164a19d49314f43d21954541f31de3f99f829ccc897bb.
Feb 14 00:24:23.896732 containerd[1503]: time="2025-02-14T00:24:23.896683770Z" level=info msg="StartContainer for \"1d8792a578015794865164a19d49314f43d21954541f31de3f99f829ccc897bb\" returns successfully"
Feb 14 00:24:23.918133 systemd[1]: cri-containerd-1d8792a578015794865164a19d49314f43d21954541f31de3f99f829ccc897bb.scope: Deactivated successfully.
Feb 14 00:24:23.961561 containerd[1503]: time="2025-02-14T00:24:23.961147032Z" level=info msg="shim disconnected" id=1d8792a578015794865164a19d49314f43d21954541f31de3f99f829ccc897bb namespace=k8s.io
Feb 14 00:24:23.961561 containerd[1503]: time="2025-02-14T00:24:23.961218604Z" level=warning msg="cleaning up after shim disconnected" id=1d8792a578015794865164a19d49314f43d21954541f31de3f99f829ccc897bb namespace=k8s.io
Feb 14 00:24:23.961561 containerd[1503]: time="2025-02-14T00:24:23.961233408Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 14 00:24:24.140796 containerd[1503]: time="2025-02-14T00:24:24.140743405Z" level=info msg="CreateContainer within sandbox \"ad32eb41621660457e756872afe3f165a0b669cd679701b93a60c6fdf6eb7717\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 14 00:24:24.155208 containerd[1503]: time="2025-02-14T00:24:24.155088282Z" level=info msg="CreateContainer within sandbox \"ad32eb41621660457e756872afe3f165a0b669cd679701b93a60c6fdf6eb7717\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ec0fa34a8efb481ed1024a3fff6f91ce9f74c55b1fed7aef14532d3c7c15efd1\""
Feb 14 00:24:24.159061 containerd[1503]: time="2025-02-14T00:24:24.156224846Z" level=info msg="StartContainer for \"ec0fa34a8efb481ed1024a3fff6f91ce9f74c55b1fed7aef14532d3c7c15efd1\""
Feb 14 00:24:24.199579 systemd[1]: Started cri-containerd-ec0fa34a8efb481ed1024a3fff6f91ce9f74c55b1fed7aef14532d3c7c15efd1.scope - libcontainer container ec0fa34a8efb481ed1024a3fff6f91ce9f74c55b1fed7aef14532d3c7c15efd1.
Feb 14 00:24:24.242012 containerd[1503]: time="2025-02-14T00:24:24.241786321Z" level=info msg="StartContainer for \"ec0fa34a8efb481ed1024a3fff6f91ce9f74c55b1fed7aef14532d3c7c15efd1\" returns successfully"
Feb 14 00:24:24.254240 systemd[1]: cri-containerd-ec0fa34a8efb481ed1024a3fff6f91ce9f74c55b1fed7aef14532d3c7c15efd1.scope: Deactivated successfully.
Feb 14 00:24:24.283115 containerd[1503]: time="2025-02-14T00:24:24.282964069Z" level=info msg="shim disconnected" id=ec0fa34a8efb481ed1024a3fff6f91ce9f74c55b1fed7aef14532d3c7c15efd1 namespace=k8s.io
Feb 14 00:24:24.283115 containerd[1503]: time="2025-02-14T00:24:24.283053637Z" level=warning msg="cleaning up after shim disconnected" id=ec0fa34a8efb481ed1024a3fff6f91ce9f74c55b1fed7aef14532d3c7c15efd1 namespace=k8s.io
Feb 14 00:24:24.283115 containerd[1503]: time="2025-02-14T00:24:24.283069674Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 14 00:24:24.305994 containerd[1503]: time="2025-02-14T00:24:24.305514305Z" level=warning msg="cleanup warnings time=\"2025-02-14T00:24:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 14 00:24:24.585817 sshd[4511]: Accepted publickey for core from 147.75.109.163 port 38760 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 00:24:24.588305 sshd[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 00:24:24.595340 systemd-logind[1482]: New session 29 of user core.
Feb 14 00:24:24.600593 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 14 00:24:24.749391 kubelet[2729]: E0214 00:24:24.749257 2729 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 14 00:24:25.140778 containerd[1503]: time="2025-02-14T00:24:25.140682072Z" level=info msg="CreateContainer within sandbox \"ad32eb41621660457e756872afe3f165a0b669cd679701b93a60c6fdf6eb7717\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 14 00:24:25.165299 containerd[1503]: time="2025-02-14T00:24:25.165242237Z" level=info msg="CreateContainer within sandbox \"ad32eb41621660457e756872afe3f165a0b669cd679701b93a60c6fdf6eb7717\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"40bfd3da678b069bdba5e4545bc535554b7913afbe74d780738dc312feb2927a\""
Feb 14 00:24:25.166627 containerd[1503]: time="2025-02-14T00:24:25.166569873Z" level=info msg="StartContainer for \"40bfd3da678b069bdba5e4545bc535554b7913afbe74d780738dc312feb2927a\""
Feb 14 00:24:25.202572 sshd[4511]: pam_unix(sshd:session): session closed for user core
Feb 14 00:24:25.211873 systemd[1]: sshd@31-10.230.16.158:22-147.75.109.163:38760.service: Deactivated successfully.
Feb 14 00:24:25.217207 systemd[1]: session-29.scope: Deactivated successfully.
Feb 14 00:24:25.222255 systemd-logind[1482]: Session 29 logged out. Waiting for processes to exit.
Feb 14 00:24:25.228580 systemd[1]: Started cri-containerd-40bfd3da678b069bdba5e4545bc535554b7913afbe74d780738dc312feb2927a.scope - libcontainer container 40bfd3da678b069bdba5e4545bc535554b7913afbe74d780738dc312feb2927a.
Feb 14 00:24:25.230923 systemd-logind[1482]: Removed session 29.
Feb 14 00:24:25.272173 containerd[1503]: time="2025-02-14T00:24:25.270922820Z" level=info msg="StartContainer for \"40bfd3da678b069bdba5e4545bc535554b7913afbe74d780738dc312feb2927a\" returns successfully"
Feb 14 00:24:25.277151 systemd[1]: cri-containerd-40bfd3da678b069bdba5e4545bc535554b7913afbe74d780738dc312feb2927a.scope: Deactivated successfully.
Feb 14 00:24:25.311281 containerd[1503]: time="2025-02-14T00:24:25.311189883Z" level=info msg="shim disconnected" id=40bfd3da678b069bdba5e4545bc535554b7913afbe74d780738dc312feb2927a namespace=k8s.io
Feb 14 00:24:25.311944 containerd[1503]: time="2025-02-14T00:24:25.311332068Z" level=warning msg="cleaning up after shim disconnected" id=40bfd3da678b069bdba5e4545bc535554b7913afbe74d780738dc312feb2927a namespace=k8s.io
Feb 14 00:24:25.311944 containerd[1503]: time="2025-02-14T00:24:25.311665960Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 14 00:24:25.370752 systemd[1]: Started sshd@32-10.230.16.158:22-147.75.109.163:38772.service - OpenSSH per-connection server daemon (147.75.109.163:38772).
Feb 14 00:24:25.566753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40bfd3da678b069bdba5e4545bc535554b7913afbe74d780738dc312feb2927a-rootfs.mount: Deactivated successfully.
Feb 14 00:24:26.144733 containerd[1503]: time="2025-02-14T00:24:26.144530433Z" level=info msg="CreateContainer within sandbox \"ad32eb41621660457e756872afe3f165a0b669cd679701b93a60c6fdf6eb7717\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 14 00:24:26.167529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount340003435.mount: Deactivated successfully.
Feb 14 00:24:26.174712 containerd[1503]: time="2025-02-14T00:24:26.174371336Z" level=info msg="CreateContainer within sandbox \"ad32eb41621660457e756872afe3f165a0b669cd679701b93a60c6fdf6eb7717\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5d68b09239b5f12715a72faa110b7942c1df09ebc9fc9880653ad9d8048ac041\""
Feb 14 00:24:26.178399 containerd[1503]: time="2025-02-14T00:24:26.177610443Z" level=info msg="StartContainer for \"5d68b09239b5f12715a72faa110b7942c1df09ebc9fc9880653ad9d8048ac041\""
Feb 14 00:24:26.233575 systemd[1]: Started cri-containerd-5d68b09239b5f12715a72faa110b7942c1df09ebc9fc9880653ad9d8048ac041.scope - libcontainer container 5d68b09239b5f12715a72faa110b7942c1df09ebc9fc9880653ad9d8048ac041.
Feb 14 00:24:26.253829 sshd[4744]: Accepted publickey for core from 147.75.109.163 port 38772 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM
Feb 14 00:24:26.260050 sshd[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 00:24:26.273383 systemd-logind[1482]: New session 30 of user core.
Feb 14 00:24:26.276555 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 14 00:24:26.276902 systemd[1]: cri-containerd-5d68b09239b5f12715a72faa110b7942c1df09ebc9fc9880653ad9d8048ac041.scope: Deactivated successfully.
Feb 14 00:24:26.279701 containerd[1503]: time="2025-02-14T00:24:26.279644518Z" level=info msg="StartContainer for \"5d68b09239b5f12715a72faa110b7942c1df09ebc9fc9880653ad9d8048ac041\" returns successfully"
Feb 14 00:24:26.311188 containerd[1503]: time="2025-02-14T00:24:26.311111354Z" level=info msg="shim disconnected" id=5d68b09239b5f12715a72faa110b7942c1df09ebc9fc9880653ad9d8048ac041 namespace=k8s.io
Feb 14 00:24:26.311745 containerd[1503]: time="2025-02-14T00:24:26.311503296Z" level=warning msg="cleaning up after shim disconnected" id=5d68b09239b5f12715a72faa110b7942c1df09ebc9fc9880653ad9d8048ac041 namespace=k8s.io
Feb 14 00:24:26.311745 containerd[1503]: time="2025-02-14T00:24:26.311530985Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 14 00:24:26.568896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d68b09239b5f12715a72faa110b7942c1df09ebc9fc9880653ad9d8048ac041-rootfs.mount: Deactivated successfully.
Feb 14 00:24:27.153722 containerd[1503]: time="2025-02-14T00:24:27.153534914Z" level=info msg="CreateContainer within sandbox \"ad32eb41621660457e756872afe3f165a0b669cd679701b93a60c6fdf6eb7717\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 14 00:24:27.184636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount646719496.mount: Deactivated successfully.
Feb 14 00:24:27.196444 containerd[1503]: time="2025-02-14T00:24:27.196312332Z" level=info msg="CreateContainer within sandbox \"ad32eb41621660457e756872afe3f165a0b669cd679701b93a60c6fdf6eb7717\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"93c1959af8e37bf88b3ac556101f5d468608edd9a999eecfd9c1dc565c76d374\""
Feb 14 00:24:27.197282 containerd[1503]: time="2025-02-14T00:24:27.197245850Z" level=info msg="StartContainer for \"93c1959af8e37bf88b3ac556101f5d468608edd9a999eecfd9c1dc565c76d374\""
Feb 14 00:24:27.242784 systemd[1]: Started cri-containerd-93c1959af8e37bf88b3ac556101f5d468608edd9a999eecfd9c1dc565c76d374.scope - libcontainer container 93c1959af8e37bf88b3ac556101f5d468608edd9a999eecfd9c1dc565c76d374.
Feb 14 00:24:27.287384 containerd[1503]: time="2025-02-14T00:24:27.287215324Z" level=info msg="StartContainer for \"93c1959af8e37bf88b3ac556101f5d468608edd9a999eecfd9c1dc565c76d374\" returns successfully"
Feb 14 00:24:28.020532 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 14 00:24:31.316719 systemd[1]: run-containerd-runc-k8s.io-93c1959af8e37bf88b3ac556101f5d468608edd9a999eecfd9c1dc565c76d374-runc.aGFrSa.mount: Deactivated successfully.
Feb 14 00:24:31.803801 systemd-networkd[1430]: lxc_health: Link UP
Feb 14 00:24:31.821706 systemd-networkd[1430]: lxc_health: Gained carrier
Feb 14 00:24:33.652258 systemd[1]: run-containerd-runc-k8s.io-93c1959af8e37bf88b3ac556101f5d468608edd9a999eecfd9c1dc565c76d374-runc.oUsK3h.mount: Deactivated successfully.
Feb 14 00:24:33.725623 systemd-networkd[1430]: lxc_health: Gained IPv6LL
Feb 14 00:24:33.730059 kubelet[2729]: I0214 00:24:33.729480 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hwrm9" podStartSLOduration=10.729452499 podStartE2EDuration="10.729452499s" podCreationTimestamp="2025-02-14 00:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:24:28.234139411 +0000 UTC m=+148.781983156" watchObservedRunningTime="2025-02-14 00:24:33.729452499 +0000 UTC m=+154.277296243"
Feb 14 00:24:33.843887 kubelet[2729]: E0214 00:24:33.843806 2729 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53386->127.0.0.1:46625: write tcp 127.0.0.1:53386->127.0.0.1:46625: write: broken pipe
Feb 14 00:24:40.467303 systemd[1]: run-containerd-runc-k8s.io-93c1959af8e37bf88b3ac556101f5d468608edd9a999eecfd9c1dc565c76d374-runc.R3CX7d.mount: Deactivated successfully.
Feb 14 00:24:40.728211 sshd[4744]: pam_unix(sshd:session): session closed for user core
Feb 14 00:24:40.735237 systemd[1]: sshd@32-10.230.16.158:22-147.75.109.163:38772.service: Deactivated successfully.
Feb 14 00:24:40.738518 systemd[1]: session-30.scope: Deactivated successfully.
Feb 14 00:24:40.740576 systemd-logind[1482]: Session 30 logged out. Waiting for processes to exit.
Feb 14 00:24:40.742581 systemd-logind[1482]: Removed session 30.