Feb 13 19:20:06.030723 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:40:15 -00 2025
Feb 13 19:20:06.030782 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:20:06.030797 kernel: BIOS-provided physical RAM map:
Feb 13 19:20:06.030823 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:20:06.030834 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:20:06.030844 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:20:06.030855 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Feb 13 19:20:06.030866 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Feb 13 19:20:06.030876 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 19:20:06.030886 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 19:20:06.030896 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:20:06.030907 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:20:06.030935 kernel: NX (Execute Disable) protection: active
Feb 13 19:20:06.030947 kernel: APIC: Static calls initialized
Feb 13 19:20:06.030959 kernel: SMBIOS 2.8 present.
Feb 13 19:20:06.030975 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Feb 13 19:20:06.030988 kernel: Hypervisor detected: KVM
Feb 13 19:20:06.031011 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:20:06.031023 kernel: kvm-clock: using sched offset of 5459184560 cycles
Feb 13 19:20:06.031035 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:20:06.031047 kernel: tsc: Detected 2799.998 MHz processor
Feb 13 19:20:06.031058 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:20:06.031070 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:20:06.031081 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Feb 13 19:20:06.031092 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:20:06.031114 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:20:06.031141 kernel: Using GB pages for direct mapping
Feb 13 19:20:06.031153 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:20:06.031165 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 13 19:20:06.031176 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:20:06.031187 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:20:06.031199 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:20:06.031210 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Feb 13 19:20:06.031221 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:20:06.031232 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:20:06.031259 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:20:06.031271 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:20:06.031283 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Feb 13 19:20:06.031294 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Feb 13 19:20:06.031305 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Feb 13 19:20:06.031331 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Feb 13 19:20:06.031343 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Feb 13 19:20:06.031367 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Feb 13 19:20:06.031380 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Feb 13 19:20:06.031391 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 19:20:06.031408 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 19:20:06.031420 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 13 19:20:06.031432 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 13 19:20:06.031444 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 13 19:20:06.031455 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Feb 13 19:20:06.031481 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 13 19:20:06.031493 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Feb 13 19:20:06.031504 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 13 19:20:06.031516 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Feb 13 19:20:06.031527 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 13 19:20:06.031539 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Feb 13 19:20:06.031551 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 13 19:20:06.031562 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Feb 13 19:20:06.031578 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 13 19:20:06.031591 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Feb 13 19:20:06.031615 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 19:20:06.031628 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 19:20:06.031639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Feb 13 19:20:06.031652 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Feb 13 19:20:06.031664 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Feb 13 19:20:06.031676 kernel: Zone ranges:
Feb 13 19:20:06.031687 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:20:06.031699 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Feb 13 19:20:06.031711 kernel: Normal empty
Feb 13 19:20:06.031768 kernel: Movable zone start for each node
Feb 13 19:20:06.031780 kernel: Early memory node ranges
Feb 13 19:20:06.031792 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:20:06.031804 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Feb 13 19:20:06.031815 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Feb 13 19:20:06.031827 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:20:06.031839 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:20:06.031856 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Feb 13 19:20:06.031869 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:20:06.031896 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:20:06.031908 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:20:06.031920 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:20:06.031933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:20:06.031945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:20:06.031957 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:20:06.031968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:20:06.031980 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:20:06.031992 kernel: TSC deadline timer available
Feb 13 19:20:06.032017 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Feb 13 19:20:06.032030 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:20:06.032041 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 19:20:06.032053 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:20:06.032065 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:20:06.032077 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 13 19:20:06.032089 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 13 19:20:06.032110 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 19:20:06.032123 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 19:20:06.032149 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:20:06.032162 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:20:06.032175 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:20:06.032187 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:20:06.032199 kernel: random: crng init done
Feb 13 19:20:06.032210 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:20:06.032222 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 19:20:06.032234 kernel: Fallback order for Node 0: 0
Feb 13 19:20:06.032259 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Feb 13 19:20:06.032276 kernel: Policy zone: DMA32
Feb 13 19:20:06.032288 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:20:06.032300 kernel: software IO TLB: area num 16.
Feb 13 19:20:06.032312 kernel: Memory: 1899480K/2096616K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 196876K reserved, 0K cma-reserved)
Feb 13 19:20:06.032324 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 19:20:06.032336 kernel: Kernel/User page tables isolation: enabled
Feb 13 19:20:06.032348 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:20:06.032360 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:20:06.032386 kernel: Dynamic Preempt: voluntary
Feb 13 19:20:06.032398 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:20:06.032411 kernel: rcu: RCU event tracing is enabled.
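The two regions marked "usable" in the BIOS-e820 map above are what becomes the roughly 2 GiB of guest RAM reported as "Memory: 1899480K/2096616K available". A minimal sketch (not part of the log; assumes dmesg-style input) that totals the usable ranges:

    import re

    E820_RE = re.compile(r"BIOS-e820: \[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

    def usable_kib(dmesg_text: str) -> int:
        """Sum the sizes of all e820 regions marked 'usable', in KiB."""
        total = 0
        for start, end, kind in E820_RE.findall(dmesg_text):
            if kind == "usable":
                total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
        return total // 1024

    log = """
    BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
    BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
    """
    print(usable_kib(log))  # ~2096623 KiB, close to the 2096616K total the kernel reports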
Feb 13 19:20:06.032423 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 19:20:06.032435 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:20:06.032476 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:20:06.032501 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:20:06.032514 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:20:06.032526 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 19:20:06.032539 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Feb 13 19:20:06.032551 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:20:06.032563 kernel: Console: colour VGA+ 80x25
Feb 13 19:20:06.032591 kernel: printk: console [tty0] enabled
Feb 13 19:20:06.032604 kernel: printk: console [ttyS0] enabled
Feb 13 19:20:06.032616 kernel: ACPI: Core revision 20230628
Feb 13 19:20:06.032628 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:20:06.032640 kernel: x2apic enabled
Feb 13 19:20:06.032666 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:20:06.032683 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Feb 13 19:20:06.032696 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Feb 13 19:20:06.032709 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:20:06.032721 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 19:20:06.032750 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 19:20:06.032763 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:20:06.032775 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:20:06.032787 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:20:06.032800 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:20:06.032828 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 19:20:06.032841 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:20:06.032853 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:20:06.032865 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 19:20:06.032877 kernel: MMIO Stale Data: Unknown: No mitigations
Feb 13 19:20:06.032889 kernel: SRBDS: Unknown: Dependent on hypervisor status
Feb 13 19:20:06.032901 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:20:06.032914 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:20:06.032926 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:20:06.032938 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:20:06.032963 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 19:20:06.032977 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:20:06.032993 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:20:06.033007 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:20:06.033019 kernel: landlock: Up and running.
Feb 13 19:20:06.033031 kernel: SELinux: Initializing.
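The mitigation states logged above (Spectre V1/V2, MDS, MMIO Stale Data, SRBDS) are also exposed at runtime through sysfs. A quick way to list them on a booted Linux guest (a sketch; assumes the standard /sys/devices/system/cpu/vulnerabilities directory is present):

    from pathlib import Path

    # Each file holds a one-line status, e.g. "Mitigation: Retpolines".
    for f in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
        print(f"{f.name}: {f.read_text().strip()}")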
Feb 13 19:20:06.033043 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:20:06.033055 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:20:06.033068 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Feb 13 19:20:06.033080 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 19:20:06.033092 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 19:20:06.033127 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 19:20:06.033141 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Feb 13 19:20:06.033153 kernel: signal: max sigframe size: 1776
Feb 13 19:20:06.033165 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:20:06.033178 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:20:06.033190 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 19:20:06.033203 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:20:06.033215 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:20:06.033227 kernel: .... node #0, CPUs: #1
Feb 13 19:20:06.033253 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Feb 13 19:20:06.033266 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:20:06.033279 kernel: smpboot: Max logical packages: 16
Feb 13 19:20:06.033291 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Feb 13 19:20:06.033303 kernel: devtmpfs: initialized
Feb 13 19:20:06.033316 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:20:06.033328 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:20:06.033340 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 19:20:06.033353 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:20:06.033378 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:20:06.033391 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:20:06.033404 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:20:06.033416 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:20:06.033428 kernel: audit: type=2000 audit(1739474404.836:1): state=initialized audit_enabled=0 res=1
Feb 13 19:20:06.033441 kernel: cpuidle: using governor menu
Feb 13 19:20:06.033453 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:20:06.033465 kernel: dca service started, version 1.12.1
Feb 13 19:20:06.033478 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 19:20:06.033504 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 19:20:06.033521 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:20:06.033535 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
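The audit record above stamps the boot with Unix epoch time: audit(1739474404.836:1). Converting it with the standard library gives 19:20:04 UTC, about a second before the rtc_cmos entry further down that sets the system clock to 2025-02-13T19:20:05 UTC (1739474405):

    from datetime import datetime, timezone

    audit_ts = 1739474404.836  # from "audit(1739474404.836:1)" above
    print(datetime.fromtimestamp(audit_ts, tz=timezone.utc).isoformat())
    # -> 2025-02-13T19:20:04.836000+00:00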
Feb 13 19:20:06.033547 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:20:06.033560 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:20:06.033572 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:20:06.033584 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:20:06.033597 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:20:06.033609 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:20:06.033635 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:20:06.033648 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:20:06.033660 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:20:06.033673 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:20:06.033685 kernel: ACPI: Interpreter enabled
Feb 13 19:20:06.033697 kernel: ACPI: PM: (supports S0 S5)
Feb 13 19:20:06.033709 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:20:06.033722 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:20:06.033759 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:20:06.033789 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:20:06.033802 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:20:06.034082 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:20:06.034283 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:20:06.034457 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:20:06.034476 kernel: PCI host bridge to bus 0000:00
Feb 13 19:20:06.034679 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:20:06.034888 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:20:06.035049 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:20:06.035247 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Feb 13 19:20:06.035409 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 19:20:06.035612 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Feb 13 19:20:06.036833 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:20:06.037058 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:20:06.037294 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Feb 13 19:20:06.037480 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Feb 13 19:20:06.037663 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Feb 13 19:20:06.037864 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Feb 13 19:20:06.038035 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:20:06.038241 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 19:20:06.038435 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Feb 13 19:20:06.038632 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 19:20:06.040896 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Feb 13 19:20:06.041090 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 19:20:06.041286 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Feb 13 19:20:06.041505 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 19:20:06.041748 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Feb 13 19:20:06.041993 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 19:20:06.042183 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Feb 13 19:20:06.042376 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 19:20:06.042550 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Feb 13 19:20:06.044689 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 19:20:06.044942 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Feb 13 19:20:06.045155 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 19:20:06.045334 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Feb 13 19:20:06.045552 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:20:06.045740 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 19:20:06.045922 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Feb 13 19:20:06.046093 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Feb 13 19:20:06.046300 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Feb 13 19:20:06.046482 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:20:06.046656 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 19:20:06.049891 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Feb 13 19:20:06.050078 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Feb 13 19:20:06.050283 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:20:06.050464 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:20:06.050688 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:20:06.050885 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Feb 13 19:20:06.051070 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Feb 13 19:20:06.051292 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:20:06.051468 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 19:20:06.051689 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Feb 13 19:20:06.052959 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Feb 13 19:20:06.053157 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 19:20:06.053334 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 19:20:06.053506 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 19:20:06.053708 kernel: pci_bus 0000:02: extended config space not accessible
Feb 13 19:20:06.054970 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Feb 13 19:20:06.055200 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Feb 13 19:20:06.055382 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 19:20:06.055561 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 19:20:06.056689 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 19:20:06.056893 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Feb 13 19:20:06.057067 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 19:20:06.057254 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 19:20:06.057423 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 19:20:06.057651 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 19:20:06.059958 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Feb 13 19:20:06.060151 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 19:20:06.060325 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 19:20:06.060494 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 19:20:06.060667 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 19:20:06.062890 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 19:20:06.063115 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 19:20:06.063303 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 19:20:06.063490 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 19:20:06.063676 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 19:20:06.063903 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 19:20:06.064089 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 19:20:06.064275 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 19:20:06.064449 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 19:20:06.064647 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 19:20:06.068865 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 19:20:06.069058 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 19:20:06.069268 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 19:20:06.069444 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 19:20:06.069465 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:20:06.069478 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:20:06.069491 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:20:06.069526 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:20:06.069541 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:20:06.069553 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:20:06.069565 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:20:06.069578 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:20:06.069590 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:20:06.069603 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:20:06.069615 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:20:06.069628 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:20:06.069656 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:20:06.069669 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:20:06.069682 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:20:06.069695 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:20:06.069712 kernel: iommu: Default domain type: Translated
Feb 13 19:20:06.071762 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:20:06.071777 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:20:06.071789 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:20:06.071802 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:20:06.071833 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Feb 13 19:20:06.072023 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:20:06.072221 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:20:06.072394 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:20:06.072414 kernel: vgaarb: loaded
Feb 13 19:20:06.072427 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:20:06.072440 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:20:06.072453 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:20:06.072465 kernel: pnp: PnP ACPI init
Feb 13 19:20:06.072672 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 19:20:06.072693 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 19:20:06.072706 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:20:06.072719 kernel: NET: Registered PF_INET protocol family
Feb 13 19:20:06.074185 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:20:06.074216 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 19:20:06.074240 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:20:06.074262 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:20:06.074314 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 19:20:06.074336 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 19:20:06.074358 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 19:20:06.074380 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 19:20:06.074401 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:20:06.074423 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:20:06.074681 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Feb 13 19:20:06.074885 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 19:20:06.075098 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 19:20:06.075288 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 19:20:06.075461 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 19:20:06.075636 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 19:20:06.077845 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 19:20:06.078041 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 19:20:06.078269 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 19:20:06.078448 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 19:20:06.078624 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 19:20:06.078820 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 19:20:06.078994 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 19:20:06.079218 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 19:20:06.079392 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 19:20:06.079589 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 19:20:06.079862 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 19:20:06.080063 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 19:20:06.080250 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 19:20:06.080423 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 19:20:06.080595 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 19:20:06.080847 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 19:20:06.081024 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 19:20:06.081213 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 19:20:06.081412 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 19:20:06.081586 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 19:20:06.081824 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 19:20:06.082013 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 19:20:06.082220 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 19:20:06.082416 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 19:20:06.082619 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 19:20:06.082808 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 19:20:06.082992 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 19:20:06.083197 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 19:20:06.083369 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 19:20:06.083539 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 19:20:06.083710 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 19:20:06.083961 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 19:20:06.084169 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 19:20:06.084365 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 19:20:06.084537 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 19:20:06.084708 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 19:20:06.084894 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 19:20:06.085075 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 19:20:06.085293 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 19:20:06.085494 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 19:20:06.085667 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 19:20:06.085910 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 19:20:06.086120 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 19:20:06.086294 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 19:20:06.086480 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:20:06.086637 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:20:06.086840 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:20:06.087020 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Feb 13 19:20:06.087191 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 19:20:06.087348 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Feb 13 19:20:06.087535 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 19:20:06.087701 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Feb 13 19:20:06.087883 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 19:20:06.088058 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Feb 13 19:20:06.088271 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Feb 13 19:20:06.088435 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Feb 13 19:20:06.088596 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 19:20:06.088811 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Feb 13 19:20:06.088976 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Feb 13 19:20:06.089149 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 19:20:06.089352 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Feb 13 19:20:06.089549 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Feb 13 19:20:06.089711 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 19:20:06.089915 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Feb 13 19:20:06.090126 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Feb 13 19:20:06.090293 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 19:20:06.090499 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Feb 13 19:20:06.090688 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Feb 13 19:20:06.090877 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 19:20:06.091054 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Feb 13 19:20:06.091234 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Feb 13 19:20:06.091421 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 19:20:06.091637 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Feb 13 19:20:06.091868 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Feb 13 19:20:06.092054 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 19:20:06.092075 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:20:06.092089 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:20:06.092112 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 19:20:06.092127 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Feb 13 19:20:06.092141 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 19:20:06.092154 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Feb 13 19:20:06.092167 kernel: Initialise system trusted keyrings
Feb 13 19:20:06.092198 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 19:20:06.092212 kernel: Key type asymmetric registered
Feb 13 19:20:06.092225 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:20:06.092238 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:20:06.092251 kernel: io scheduler mq-deadline registered
Feb 13 19:20:06.092264 kernel: io scheduler kyber registered
Feb 13 19:20:06.092277 kernel: io scheduler bfq registered
Feb 13 19:20:06.092448 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Feb 13 19:20:06.092621 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Feb 13 19:20:06.092830 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 19:20:06.093026 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Feb 13 19:20:06.093234 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Feb 13 19:20:06.093405 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 19:20:06.093596 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Feb 13 19:20:06.093798 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Feb 13 19:20:06.093994 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 19:20:06.094184 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Feb 13 19:20:06.094356 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Feb 13 19:20:06.094526 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 19:20:06.094697 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Feb 13 19:20:06.094888 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Feb 13 19:20:06.095082 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 19:20:06.095267 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Feb 13 19:20:06.095438 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Feb 13 19:20:06.095633 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 19:20:06.095852 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Feb 13 19:20:06.096025 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Feb 13 19:20:06.096232 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 19:20:06.096415 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Feb 13 19:20:06.096598 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Feb 13 19:20:06.096791 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 19:20:06.096814 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:20:06.096828 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:20:06.096863 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:20:06.096878 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:20:06.096891 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:20:06.096904 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:20:06.096917 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:20:06.096930 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:20:06.096944 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:20:06.097133 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 13 19:20:06.097298 kernel: rtc_cmos 00:03: registered as rtc0
Feb 13 19:20:06.097486 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T19:20:05 UTC (1739474405)
Feb 13 19:20:06.097648 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 13 19:20:06.097668 kernel: intel_pstate: CPU model not supported
Feb 13 19:20:06.097681 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:20:06.097695 kernel: Segment Routing with IPv6
Feb 13 19:20:06.097708 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:20:06.097721 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:20:06.097769 kernel: Key type dns_resolver registered
Feb 13 19:20:06.097796 kernel: IPI shorthand broadcast: enabled
Feb 13 19:20:06.097811 kernel: sched_clock: Marking stable (1470003497, 218918495)->(1827434439, -138512447)
Feb 13 19:20:06.097824 kernel: registered taskstats version 1
Feb 13 19:20:06.097837 kernel: Loading compiled-in X.509 certificates
Feb 13 19:20:06.097850 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6c364ddae48101e091a28279a8d953535f596d53'
Feb 13 19:20:06.097863 kernel: Key type .fscrypt registered
Feb 13 19:20:06.097875 kernel: Key type fscrypt-provisioning registered
Feb 13 19:20:06.097888 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:20:06.097901 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:20:06.097928 kernel: ima: No architecture policies found
Feb 13 19:20:06.097941 kernel: clk: Disabling unused clocks
Feb 13 19:20:06.097954 kernel: Freeing unused kernel image (initmem) memory: 43476K
Feb 13 19:20:06.097968 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 19:20:06.097980 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Feb 13 19:20:06.097994 kernel: Run /init as init process
Feb 13 19:20:06.098006 kernel: with arguments:
Feb 13 19:20:06.098019 kernel: /init
Feb 13 19:20:06.098032 kernel: with environment:
Feb 13 19:20:06.098058 kernel: HOME=/
Feb 13 19:20:06.098072 kernel: TERM=linux
Feb 13 19:20:06.098084 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:20:06.098099 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:20:06.098127 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:20:06.098141 systemd[1]: Detected virtualization kvm.
Feb 13 19:20:06.098154 systemd[1]: Detected architecture x86-64.
Feb 13 19:20:06.098168 systemd[1]: Running in initrd.
Feb 13 19:20:06.098197 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:20:06.098212 systemd[1]: Hostname set to .
Feb 13 19:20:06.098225 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:20:06.098239 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:20:06.098252 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:20:06.098266 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:20:06.098280 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:20:06.098295 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:20:06.098324 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:20:06.098339 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:20:06.098354 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:20:06.098368 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
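The device units systemd is expecting here ("dev-disk-by\x2dlabel-ROOT.device" and friends) are escaped forms of /dev paths: '/' becomes '-', and other unsafe characters, including a literal '-', become \xXX. A simplified sketch of that escaping (the real tool is systemd-escape; this version ignores corner cases such as non-ASCII bytes):

    def path_to_device_unit(path: str) -> str:
        """Roughly replicate systemd's path escaping for .device units."""
        escaped_parts = []
        for part in path.strip("/").split("/"):
            out = []
            for i, ch in enumerate(part):
                # Alphanumerics, '_' and non-leading '.' pass through;
                # everything else (including '-') is hex-escaped.
                if ch.isalnum() or ch == "_" or (ch == "." and i > 0):
                    out.append(ch)
                else:
                    out.append(f"\\x{ord(ch):02x}")
            escaped_parts.append("".join(out))
        return "-".join(escaped_parts) + ".device"

    print(path_to_device_unit("/dev/disk/by-label/ROOT"))
    # -> dev-disk-by\x2dlabel-ROOT.device, matching the journal lines above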
Feb 13 19:20:06.098395 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:20:06.098408 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:20:06.098435 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:20:06.098450 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:20:06.098463 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:20:06.098476 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:20:06.098489 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:20:06.098503 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:20:06.098516 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:20:06.098529 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:20:06.098542 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:20:06.098569 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:20:06.098583 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:20:06.098597 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:20:06.098622 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:20:06.098636 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:20:06.098650 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:20:06.098664 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:20:06.098677 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:20:06.098691 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:20:06.098719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:20:06.098770 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:20:06.098828 systemd-journald[202]: Collecting audit messages is disabled.
Feb 13 19:20:06.098861 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:20:06.098895 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:20:06.098910 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:20:06.098924 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:20:06.098937 kernel: Bridge firewalling registered
Feb 13 19:20:06.098972 systemd-journald[202]: Journal started
Feb 13 19:20:06.098998 systemd-journald[202]: Runtime Journal (/run/log/journal/9bf7e7b82e334da1af9d25b49f42337a) is 4.7M, max 37.9M, 33.2M free.
Feb 13 19:20:06.039097 systemd-modules-load[203]: Inserted module 'overlay'
Feb 13 19:20:06.137278 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:20:06.077182 systemd-modules-load[203]: Inserted module 'br_netfilter'
Feb 13 19:20:06.138290 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:20:06.139408 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:20:06.140833 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:20:06.155025 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:20:06.156919 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:20:06.160585 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:20:06.171358 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:20:06.188190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:20:06.189170 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:20:06.193655 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:20:06.195236 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:20:06.202015 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:20:06.206934 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:20:06.219192 dracut-cmdline[237]: dracut-dracut-053
Feb 13 19:20:06.224930 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:20:06.259309 systemd-resolved[239]: Positive Trust Anchors:
Feb 13 19:20:06.259327 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:20:06.259368 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:20:06.262844 systemd-resolved[239]: Defaulting to hostname 'linux'.
Feb 13 19:20:06.264462 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:20:06.269079 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:20:06.332830 kernel: SCSI subsystem initialized
Feb 13 19:20:06.343765 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:20:06.365800 kernel: iscsi: registered transport (tcp)
Feb 13 19:20:06.391899 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:20:06.391989 kernel: QLogic iSCSI HBA Driver
Feb 13 19:20:06.445928 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:20:06.453026 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:20:06.483959 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
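dracut-cmdline above echoes the kernel parameters it will act on; note that duplicated settings such as rootflags=rw simply repeat. A small illustrative parser (not dracut's actual logic) that turns such a command line into a lookup table, with later duplicates winning:

    def parse_cmdline(cmdline: str) -> dict:
        """Split a kernel command line into {key: value}; bare flags map to None."""
        params = {}
        for word in cmdline.split():
            key, sep, value = word.partition("=")
            params[key] = value if sep else None  # later duplicates overwrite
        return params

    p = parse_cmdline("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
                      "root=LABEL=ROOT console=ttyS0,115200n8 flatcar.autologin")
    print(p["root"], p["rd.driver.pre"], "flatcar.autologin" in p)
    # -> LABEL=ROOT btrfs True  (real parsers also handle quoted values)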
Feb 13 19:20:06.484028 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:20:06.487765 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:20:06.533773 kernel: raid6: sse2x4 gen() 14279 MB/s
Feb 13 19:20:06.550780 kernel: raid6: sse2x2 gen() 9996 MB/s
Feb 13 19:20:06.569314 kernel: raid6: sse2x1 gen() 10152 MB/s
Feb 13 19:20:06.569374 kernel: raid6: using algorithm sse2x4 gen() 14279 MB/s
Feb 13 19:20:06.588272 kernel: raid6: .... xor() 8070 MB/s, rmw enabled
Feb 13 19:20:06.588365 kernel: raid6: using ssse3x2 recovery algorithm
Feb 13 19:20:06.612788 kernel: xor: automatically using best checksumming function avx
Feb 13 19:20:06.774765 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:20:06.788685 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:20:06.800996 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:20:06.819546 systemd-udevd[422]: Using default interface naming scheme 'v255'.
Feb 13 19:20:06.828194 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:20:06.838935 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:20:06.856134 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Feb 13 19:20:06.894940 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:20:06.900928 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:20:07.020846 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:20:07.030951 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:20:07.054125 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:20:07.056559 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:20:07.058947 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:20:07.060538 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:20:07.066892 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:20:07.085891 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:20:07.149750 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Feb 13 19:20:07.224150 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 19:20:07.224374 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:20:07.224397 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:20:07.224438 kernel: GPT:17805311 != 125829119
Feb 13 19:20:07.224465 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:20:07.224484 kernel: GPT:17805311 != 125829119
Feb 13 19:20:07.224500 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:20:07.224517 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:20:07.224534 kernel: libata version 3.00 loaded.
Feb 13 19:20:07.224562 kernel: AVX version of gcm_enc/dec engaged.
Feb 13 19:20:07.224586 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:20:07.199000 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:20:07.199187 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:20:07.200102 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
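The GPT complaints above are arithmetic, not corruption: the backup GPT header belongs at the last LBA of the disk, but this image was grown after it was written, so the header still sits at the end of the original, smaller image. Checking the numbers from the log:

    SECTOR = 512
    total_sectors = 125829120        # virtio_blk: "[vda] 125829120 512-byte logical blocks"
    expected_alt_lba = total_sectors - 1
    found_alt_lba = 17805311         # from "GPT:17805311 != 125829119"

    print(expected_alt_lba)                       # 125829119, where the backup belongs
    print((found_alt_lba + 1) * SECTOR / 2**30)   # ~8.49 GiB: the original image size
    # disk-uuid.service further down rewrites both headers, after which
    # the partition table is re-read without warnings.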
Feb 13 19:20:07.201810 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:20:07.201985 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:20:07.202883 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:20:07.217076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:20:07.219302 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:20:07.273768 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 19:20:07.353467 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 19:20:07.353502 kernel: BTRFS: device fsid 60f89c25-9096-4268-99ca-ef7992742f2b devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (474)
Feb 13 19:20:07.353522 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 19:20:07.353791 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 19:20:07.354026 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (467)
Feb 13 19:20:07.354050 kernel: ACPI: bus type USB registered
Feb 13 19:20:07.354067 kernel: usbcore: registered new interface driver usbfs
Feb 13 19:20:07.354096 kernel: usbcore: registered new interface driver hub
Feb 13 19:20:07.354116 kernel: usbcore: registered new device driver usb
Feb 13 19:20:07.354133 kernel: scsi host0: ahci
Feb 13 19:20:07.354345 kernel: scsi host1: ahci
Feb 13 19:20:07.354550 kernel: scsi host2: ahci
Feb 13 19:20:07.354793 kernel: scsi host3: ahci
Feb 13 19:20:07.354993 kernel: scsi host4: ahci
Feb 13 19:20:07.355218 kernel: scsi host5: ahci
Feb 13 19:20:07.355426 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Feb 13 19:20:07.355446 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Feb 13 19:20:07.355464 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Feb 13 19:20:07.355490 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Feb 13 19:20:07.355530 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Feb 13 19:20:07.355550 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Feb 13 19:20:07.355567 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Feb 13 19:20:07.359387 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Feb 13 19:20:07.359612 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Feb 13 19:20:07.359907 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Feb 13 19:20:07.360188 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Feb 13 19:20:07.360452 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Feb 13 19:20:07.360708 kernel: hub 1-0:1.0: USB hub found
Feb 13 19:20:07.360983 kernel: hub 1-0:1.0: 4 ports detected
Feb 13 19:20:07.361203 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Feb 13 19:20:07.361422 kernel: hub 2-0:1.0: USB hub found
Feb 13 19:20:07.361656 kernel: hub 2-0:1.0: 4 ports detected
Feb 13 19:20:07.321669 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:20:07.422354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:20:07.452220 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
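The six ata port addresses above follow the AHCI register layout, where port N's registers live at ABAR + 0x100 + 0x80*N. Reproducing the log's values (a sketch of the arithmetic only):

    ABAR = 0xFEA5B000  # "abar m4096@0xfea5b000" in the lines above
    for ata in range(1, 7):          # ata1..ata6 are AHCI ports 0..5
        print(f"ata{ata}: port 0x{ABAR + 0x100 + 0x80 * (ata - 1):x} irq 38")
    # -> 0xfea5b100 ... 0xfea5b380, matching the six SATA lines above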
Feb 13 19:20:07.473187 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:20:07.483890 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:20:07.484709 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:20:07.496055 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:20:07.499927 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:20:07.506849 disk-uuid[566]: Primary Header is updated. Feb 13 19:20:07.506849 disk-uuid[566]: Secondary Entries is updated. Feb 13 19:20:07.506849 disk-uuid[566]: Secondary Header is updated. Feb 13 19:20:07.512841 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:20:07.545172 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:20:07.599794 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 19:20:07.666916 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 19:20:07.666993 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 19:20:07.670245 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 19:20:07.670281 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 19:20:07.671915 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 19:20:07.674155 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 19:20:07.746752 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:20:07.754393 kernel: usbcore: registered new interface driver usbhid Feb 13 19:20:07.754440 kernel: usbhid: USB HID core driver Feb 13 19:20:07.761863 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Feb 13 19:20:07.761918 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Feb 13 19:20:08.528787 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:20:08.529158 disk-uuid[567]: The operation has completed successfully. Feb 13 19:20:08.586264 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:20:08.586450 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:20:08.635000 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:20:08.639394 sh[587]: Success Feb 13 19:20:08.655791 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Feb 13 19:20:08.719337 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:20:08.732896 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:20:08.736967 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
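verity-setup.service above assembles /dev/mapper/usr, where device-mapper verifies every read of the /usr partition against a sha256 Merkle tree whose root must match the verity.usrhash value on the kernel command line. A simplified sketch of the leaf-level hashing only (assumptions: 4096-byte hash blocks and an empty salt; the real parameters live in the verity superblock, and this is not byte-compatible with veritysetup output):

    import hashlib

    BLOCK = 4096  # hash-block size; an assumption here, recorded in the real superblock

    def block_digests(dev_path, algo="sha256"):
        """Hash a device's data blocks the way dm-verity does at its leaf level."""
        digests = []
        with open(dev_path, "rb") as dev:
            while chunk := dev.read(BLOCK):
                chunk = chunk.ljust(BLOCK, b"\0")   # final partial block is zero-padded
                digests.append(hashlib.new(algo, chunk).digest())
        return digests  # parent levels hash runs of these digests until one root remains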
Feb 13 19:20:08.767768 kernel: BTRFS info (device dm-0): first mount of filesystem 60f89c25-9096-4268-99ca-ef7992742f2b Feb 13 19:20:08.767837 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:20:08.767857 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:20:08.769183 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:20:08.770794 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:20:08.781422 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:20:08.782849 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:20:08.791012 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:20:08.795918 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:20:08.813770 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:20:08.813833 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:20:08.813853 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:20:08.819778 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:20:08.832990 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:20:08.835169 kernel: BTRFS info (device vda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:20:08.840688 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:20:08.846958 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:20:08.954830 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:20:08.971985 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:20:09.011995 ignition[686]: Ignition 2.20.0 Feb 13 19:20:09.012865 ignition[686]: Stage: fetch-offline Feb 13 19:20:09.012927 ignition[686]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:20:09.012946 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 19:20:09.013141 ignition[686]: parsed url from cmdline: "" Feb 13 19:20:09.013149 ignition[686]: no config URL provided Feb 13 19:20:09.013159 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:20:09.013176 ignition[686]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:20:09.018192 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:20:09.013186 ignition[686]: failed to fetch config: resource requires networking Feb 13 19:20:09.019481 systemd-networkd[771]: lo: Link UP Feb 13 19:20:09.013535 ignition[686]: Ignition finished successfully Feb 13 19:20:09.019487 systemd-networkd[771]: lo: Gained carrier Feb 13 19:20:09.022718 systemd-networkd[771]: Enumeration completed Feb 13 19:20:09.023328 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:20:09.023335 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:20:09.024339 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Feb 13 19:20:09.025217 systemd-networkd[771]: eth0: Link UP Feb 13 19:20:09.025223 systemd-networkd[771]: eth0: Gained carrier Feb 13 19:20:09.025235 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:20:09.026232 systemd[1]: Reached target network.target - Network. Feb 13 19:20:09.033922 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 19:20:09.040880 systemd-networkd[771]: eth0: DHCPv4 address 10.230.68.30/30, gateway 10.230.68.29 acquired from 10.230.68.29 Feb 13 19:20:09.054988 ignition[780]: Ignition 2.20.0 Feb 13 19:20:09.055010 ignition[780]: Stage: fetch Feb 13 19:20:09.055296 ignition[780]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:20:09.055317 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 19:20:09.055449 ignition[780]: parsed url from cmdline: "" Feb 13 19:20:09.055456 ignition[780]: no config URL provided Feb 13 19:20:09.055466 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:20:09.055483 ignition[780]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:20:09.055625 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 13 19:20:09.055790 ignition[780]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 13 19:20:09.055821 ignition[780]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Feb 13 19:20:09.075649 ignition[780]: GET result: OK Feb 13 19:20:09.076327 ignition[780]: parsing config with SHA512: 2069f8891727adb8043229f2e9df74dfd4ce8f749a72f944882e55524867beeee33426d06a55d3241b5a59d54de5f2ca60a8db207565af3c048adf752b642155 Feb 13 19:20:09.084581 unknown[780]: fetched base config from "system" Feb 13 19:20:09.084599 unknown[780]: fetched base config from "system" Feb 13 19:20:09.084607 unknown[780]: fetched user config from "openstack" Feb 13 19:20:09.085665 ignition[780]: fetch: fetch complete Feb 13 19:20:09.085674 ignition[780]: fetch: fetch passed Feb 13 19:20:09.087529 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:20:09.085756 ignition[780]: Ignition finished successfully Feb 13 19:20:09.099926 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:20:09.121685 ignition[788]: Ignition 2.20.0 Feb 13 19:20:09.121706 ignition[788]: Stage: kargs Feb 13 19:20:09.121945 ignition[788]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:20:09.124320 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:20:09.121966 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 19:20:09.123013 ignition[788]: kargs: kargs passed Feb 13 19:20:09.123099 ignition[788]: Ignition finished successfully Feb 13 19:20:09.133421 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:20:09.258114 ignition[794]: Ignition 2.20.0 Feb 13 19:20:09.258156 ignition[794]: Stage: disks Feb 13 19:20:09.258526 ignition[794]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:20:09.258547 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 19:20:09.261567 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:20:09.260032 ignition[794]: disks: disks passed Feb 13 19:20:09.263057 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
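Ignition's fetch stage above polls for a config drive and, failing that, GETs the OpenStack metadata service; the SHA512 it logs is the digest of the fetched user_data. An illustrative sketch of that fetch (the endpoint is the standard OpenStack metadata URL shown in the log; the function itself is hypothetical, not Ignition's code):

    import hashlib
    import urllib.request

    METADATA_URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_user_data(url=METADATA_URL, timeout=5):
        """GET the OpenStack user_data and report its SHA512, as the log line does."""
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
        print(f"parsing config with SHA512: {hashlib.sha512(body).hexdigest()}")
        return body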
Feb 13 19:20:09.260126 ignition[794]: Ignition finished successfully Feb 13 19:20:09.264489 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:20:09.265676 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:20:09.267109 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:20:09.268329 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:20:09.275987 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:20:09.298978 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 19:20:09.303074 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:20:09.777906 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:20:09.885786 kernel: EXT4-fs (vda9): mounted filesystem 157595f2-1515-4117-a2d1-73fe2ed647fc r/w with ordered data mode. Quota mode: none. Feb 13 19:20:09.886537 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:20:09.887824 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:20:09.893839 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:20:09.898369 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:20:09.899873 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:20:09.902186 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Feb 13 19:20:09.903905 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:20:09.903950 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:20:09.910227 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:20:09.916873 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (811) Feb 13 19:20:09.916919 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:20:09.920658 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:20:09.920700 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:20:09.920433 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:20:09.939765 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:20:09.943542 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:20:10.039112 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:20:10.045473 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:20:10.053511 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:20:10.059484 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:20:10.081979 systemd-networkd[771]: eth0: Gained IPv6LL Feb 13 19:20:10.211650 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:20:10.217859 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:20:10.220920 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Feb 13 19:20:10.236829 kernel: BTRFS info (device vda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:20:10.278752 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:20:10.289772 ignition[929]: INFO : Ignition 2.20.0 Feb 13 19:20:10.292226 ignition[929]: INFO : Stage: mount Feb 13 19:20:10.292226 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:20:10.292226 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 19:20:10.292226 ignition[929]: INFO : mount: mount passed Feb 13 19:20:10.292226 ignition[929]: INFO : Ignition finished successfully Feb 13 19:20:10.293148 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:20:10.763854 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:20:11.588056 systemd-networkd[771]: eth0: Ignoring DHCPv6 address 2a02:1348:179:9107:24:19ff:fee6:441e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:9107:24:19ff:fee6:441e/64 assigned by NDisc. Feb 13 19:20:11.588068 systemd-networkd[771]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 13 19:20:17.083201 coreos-metadata[813]: Feb 13 19:20:17.083 WARN failed to locate config-drive, using the metadata service API instead Feb 13 19:20:17.103705 coreos-metadata[813]: Feb 13 19:20:17.103 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 19:20:17.114986 coreos-metadata[813]: Feb 13 19:20:17.114 INFO Fetch successful Feb 13 19:20:17.116098 coreos-metadata[813]: Feb 13 19:20:17.116 INFO wrote hostname srv-g6z5b.gb1.brightbox.com to /sysroot/etc/hostname Feb 13 19:20:17.119184 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 13 19:20:17.119352 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Feb 13 19:20:17.127872 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:20:17.137850 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:20:17.163771 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (945) Feb 13 19:20:17.171504 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:20:17.171553 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:20:17.171572 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:20:17.175745 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:20:17.179228 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:20:17.213854 ignition[963]: INFO : Ignition 2.20.0 Feb 13 19:20:17.214978 ignition[963]: INFO : Stage: files Feb 13 19:20:17.215862 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:20:17.216643 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 19:20:17.217697 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:20:17.218795 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:20:17.218795 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:20:17.222373 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:20:17.223739 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:20:17.224644 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:20:17.224323 unknown[963]: wrote ssh authorized keys file for user: core Feb 13 19:20:17.226470 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 19:20:17.226470 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 19:20:17.431459 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:20:17.832222 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 19:20:17.834374 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:20:17.834374 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 19:20:18.544033 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 19:20:18.954488 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:20:18.954488 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:20:18.957076 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:20:18.957076 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:20:18.957076 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:20:18.957076 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:20:18.957076 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:20:18.957076 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:20:18.957076 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:20:18.957076 
ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:20:18.965567 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:20:18.965567 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:20:18.965567 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:20:18.965567 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:20:18.965567 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 19:20:19.509149 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:20:26.369799 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:20:26.369799 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 19:20:26.380254 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:20:26.382051 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:20:26.382051 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 19:20:26.382051 ignition[963]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:20:26.382051 ignition[963]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:20:26.387386 ignition[963]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:20:26.387386 ignition[963]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:20:26.387386 ignition[963]: INFO : files: files passed Feb 13 19:20:26.387386 ignition[963]: INFO : Ignition finished successfully Feb 13 19:20:26.392592 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:20:26.405299 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:20:26.408989 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:20:26.444607 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:20:26.444829 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
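The files stage above writes out units and records that prepare-helm.service should be enabled; rather than creating .wants/ symlinks itself, Ignition records a systemd preset that is applied on first boot. A hedged sketch of that recording step (the preset file name is an assumption for illustration):

    import os

    def preset_enable(unit, root="/sysroot"):
        """Append an 'enable' directive to a preset file under the new root."""
        preset_dir = os.path.join(root, "etc/systemd/system-preset")
        os.makedirs(preset_dir, exist_ok=True)
        # "20-ignition.preset" is an assumed file name; systemd applies presets on
        # first boot, creating the .wants/ symlinks that actually enable the unit.
        with open(os.path.join(preset_dir, "20-ignition.preset"), "a") as f:
            f.write(f"enable {unit}\n")

    # e.g. preset_enable("prepare-helm.service") mirrors op(e) in the log above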
Feb 13 19:20:26.458993 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:20:26.458993 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:20:26.461934 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:20:26.462977 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:20:26.464486 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:20:26.470295 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:20:26.533936 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:20:26.534155 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:20:26.536715 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:20:26.537480 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:20:26.539177 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:20:26.547068 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:20:26.567044 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:20:26.576057 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:20:26.589324 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:20:26.590310 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:20:26.591828 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:20:26.593199 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:20:26.593442 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:20:26.595119 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:20:26.598814 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:20:26.600019 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:20:26.601418 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:20:26.602964 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:20:26.604432 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:20:26.606215 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:20:26.607945 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:20:26.609561 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:20:26.611037 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:20:26.612266 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:20:26.612554 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:20:26.614051 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:20:26.614947 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:20:26.616348 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:20:26.618855 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Feb 13 19:20:26.620715 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:20:26.621119 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:20:26.623130 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:20:26.623326 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:20:26.625102 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:20:26.625269 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:20:26.633136 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:20:26.634947 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:20:26.635638 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:20:26.647163 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:20:26.649215 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:20:26.650430 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:20:26.655135 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:20:26.656330 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:20:26.674331 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:20:26.675465 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:20:26.713852 ignition[1016]: INFO : Ignition 2.20.0 Feb 13 19:20:26.713852 ignition[1016]: INFO : Stage: umount Feb 13 19:20:26.713852 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:20:26.713852 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 19:20:26.721978 ignition[1016]: INFO : umount: umount passed Feb 13 19:20:26.721978 ignition[1016]: INFO : Ignition finished successfully Feb 13 19:20:26.717394 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:20:26.718471 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:20:26.718680 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:20:26.722429 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:20:26.722620 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:20:26.723515 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:20:26.723589 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:20:26.724993 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:20:26.725152 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:20:26.726238 systemd[1]: Stopped target network.target - Network. Feb 13 19:20:26.727497 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:20:26.727641 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:20:26.729010 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:20:26.730184 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:20:26.733899 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:20:26.734801 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:20:26.736066 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:20:26.737594 systemd[1]: iscsid.socket: Deactivated successfully. 
Feb 13 19:20:26.737694 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:20:26.739027 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:20:26.739099 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:20:26.740382 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:20:26.740481 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:20:26.741840 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:20:26.741933 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:20:26.743792 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:20:26.745659 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:20:26.748359 systemd-networkd[771]: eth0: DHCPv6 lease lost Feb 13 19:20:26.750430 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:20:26.750609 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:20:26.752141 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:20:26.752351 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:20:26.760267 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:20:26.761484 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:20:26.765391 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 19:20:26.765955 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:20:26.766281 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:20:26.773531 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 19:20:26.775355 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:20:26.775461 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:20:26.781945 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:20:26.782656 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:20:26.782852 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:20:26.783653 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:20:26.783737 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:20:26.785467 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:20:26.785568 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:20:26.786788 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:20:26.786868 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:20:26.788930 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:20:26.794247 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 19:20:26.794429 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:20:26.804659 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:20:26.805166 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:20:26.807662 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Feb 13 19:20:26.808097 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:20:26.810470 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:20:26.810556 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:20:26.812679 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:20:26.812799 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:20:26.813645 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:20:26.813796 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:20:26.814636 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:20:26.814899 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:20:26.823128 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:20:26.823898 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:20:26.824006 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:20:26.824947 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:20:26.825084 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:20:26.828050 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 19:20:26.828209 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:20:26.828970 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:20:26.829140 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:20:26.847601 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:20:26.847872 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:20:26.853619 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:20:26.862092 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:20:26.886281 systemd[1]: Switching root. Feb 13 19:20:26.925706 systemd-journald[202]: Journal stopped Feb 13 19:20:28.612638 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Feb 13 19:20:28.612853 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:20:28.612899 kernel: SELinux: policy capability open_perms=1 Feb 13 19:20:28.612922 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:20:28.612951 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:20:28.612971 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:20:28.613008 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:20:28.613039 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:20:28.613059 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:20:28.613078 kernel: audit: type=1403 audit(1739474427.285:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:20:28.613099 systemd[1]: Successfully loaded SELinux policy in 52.389ms. Feb 13 19:20:28.613129 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 20.816ms. 
Feb 13 19:20:28.613152 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:20:28.613173 systemd[1]: Detected virtualization kvm. Feb 13 19:20:28.613208 systemd[1]: Detected architecture x86-64. Feb 13 19:20:28.613232 systemd[1]: Detected first boot. Feb 13 19:20:28.613252 systemd[1]: Hostname set to <srv-g6z5b.gb1.brightbox.com>. Feb 13 19:20:28.613272 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:20:28.613293 zram_generator::config[1061]: No configuration found. Feb 13 19:20:28.613313 kernel: Guest personality initialized and is inactive Feb 13 19:20:28.613356 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 13 19:20:28.613379 kernel: Initialized host personality Feb 13 19:20:28.613413 kernel: NET: Registered PF_VSOCK protocol family Feb 13 19:20:28.613435 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:20:28.613457 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 19:20:28.613478 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:20:28.613499 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:20:28.613519 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:20:28.613549 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:20:28.613571 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:20:28.613591 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:20:28.613642 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:20:28.613666 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:20:28.613698 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:20:28.613781 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:20:28.613806 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:20:28.613826 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:20:28.613847 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:20:28.613867 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:20:28.613904 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:20:28.613953 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:20:28.613978 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:20:28.614030 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:20:28.614054 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:20:28.614075 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:20:28.614095 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
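"Initializing machine ID from VM UUID" above means the first-boot machine ID is derived from the hypervisor-provided DMI product UUID rather than generated randomly. A sketch of the shape of that derivation (systemd's real logic has more cases and sanity checks per hypervisor; this only shows the KVM-style source):

    def machine_id_from_vm_uuid(path="/sys/class/dmi/id/product_uuid"):
        """Derive a 32-hex-char machine ID from the DMI product UUID."""
        with open(path) as f:
            uuid = f.read().strip()
        return uuid.replace("-", "").lower()  # /etc/machine-id is 32 lowercase hex chars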
Feb 13 19:20:28.614115 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:20:28.614145 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:20:28.614205 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:20:28.614239 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:20:28.614270 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:20:28.614293 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:20:28.614330 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:20:28.614354 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:20:28.614374 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 19:20:28.614402 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:20:28.614423 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:20:28.614453 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:20:28.614484 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:20:28.614506 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:20:28.614527 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:20:28.614590 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:20:28.614638 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:20:28.614662 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:20:28.614683 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:20:28.614713 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:20:28.614771 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:20:28.614803 systemd[1]: Reached target machines.target - Containers. Feb 13 19:20:28.614834 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:20:28.614871 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:20:28.614896 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:20:28.614917 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:20:28.614952 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:20:28.614974 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:20:28.614995 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:20:28.615016 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:20:28.615037 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:20:28.615072 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:20:28.615097 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Feb 13 19:20:28.615117 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:20:28.615138 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:20:28.615157 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:20:28.615178 kernel: loop: module loaded Feb 13 19:20:28.615198 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:20:28.615219 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:20:28.615238 kernel: fuse: init (API version 7.39) Feb 13 19:20:28.615272 kernel: ACPI: bus type drm_connector registered Feb 13 19:20:28.615303 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:20:28.615325 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:20:28.615368 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:20:28.615392 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 19:20:28.615413 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:20:28.615459 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:20:28.615509 systemd[1]: Stopped verity-setup.service. Feb 13 19:20:28.615546 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:20:28.615583 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:20:28.615620 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:20:28.615651 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:20:28.615673 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:20:28.615694 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:20:28.615716 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:20:28.615765 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:20:28.615788 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:20:28.615808 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:20:28.615845 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:20:28.615902 systemd-journald[1158]: Collecting audit messages is disabled. Feb 13 19:20:28.615967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:20:28.615991 systemd-journald[1158]: Journal started Feb 13 19:20:28.616023 systemd-journald[1158]: Runtime Journal (/run/log/journal/9bf7e7b82e334da1af9d25b49f42337a) is 4.7M, max 37.9M, 33.2M free. Feb 13 19:20:28.211466 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:20:28.225465 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:20:28.226282 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:20:28.618753 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:20:28.623748 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:20:28.626961 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 19:20:28.627287 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:20:28.629296 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:20:28.629579 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:20:28.631927 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:20:28.632210 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:20:28.633298 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:20:28.633831 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:20:28.635878 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:20:28.637484 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:20:28.638847 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:20:28.640079 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 19:20:28.656346 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:20:28.666806 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:20:28.678874 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:20:28.681830 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:20:28.681886 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:20:28.684451 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 19:20:28.690918 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:20:28.694778 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:20:28.695634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:20:28.704533 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:20:28.711881 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:20:28.712651 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:20:28.714306 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:20:28.715105 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:20:28.726687 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:20:28.731415 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:20:28.735475 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:20:28.740342 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:20:28.741561 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:20:28.743293 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:20:28.782425 systemd-journald[1158]: Time spent on flushing to /var/log/journal/9bf7e7b82e334da1af9d25b49f42337a is 125.935ms for 1158 entries. 
Feb 13 19:20:28.782425 systemd-journald[1158]: System Journal (/var/log/journal/9bf7e7b82e334da1af9d25b49f42337a) is 8M, max 584.8M, 576.8M free. Feb 13 19:20:28.997017 systemd-journald[1158]: Received client request to flush runtime journal. Feb 13 19:20:28.997103 kernel: loop0: detected capacity change from 0 to 147912 Feb 13 19:20:28.997149 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:20:28.997184 kernel: loop1: detected capacity change from 0 to 138176 Feb 13 19:20:28.788215 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:20:28.790571 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:20:28.838062 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 19:20:28.901212 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:20:28.903792 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:20:28.917020 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:20:28.930220 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 19:20:28.984376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:20:29.000263 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:20:29.003788 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:20:29.021744 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Feb 13 19:20:29.021775 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Feb 13 19:20:29.047617 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:20:29.051738 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:20:29.062756 kernel: loop2: detected capacity change from 0 to 210664 Feb 13 19:20:29.112951 kernel: loop3: detected capacity change from 0 to 8 Feb 13 19:20:29.160959 kernel: loop4: detected capacity change from 0 to 147912 Feb 13 19:20:29.217766 kernel: loop5: detected capacity change from 0 to 138176 Feb 13 19:20:29.228765 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:20:29.256796 kernel: loop6: detected capacity change from 0 to 210664 Feb 13 19:20:29.343558 kernel: loop7: detected capacity change from 0 to 8 Feb 13 19:20:29.348469 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Feb 13 19:20:29.350613 (sd-merge)[1227]: Merged extensions into '/usr'. Feb 13 19:20:29.359865 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:20:29.359900 systemd[1]: Reloading... Feb 13 19:20:29.621784 zram_generator::config[1255]: No configuration found. Feb 13 19:20:29.915073 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:20:30.093930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:20:30.186538 systemd[1]: Reloading finished in 825 ms. Feb 13 19:20:30.200766 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
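The (sd-merge) lines above are systemd-sysext discovering the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack') and overlaying them onto /usr, which is why ldconfig and a daemon reload follow. A simplified sketch of the discovery step only (the search directories are the documented sysext locations; the real sd-merge also validates each image's extension-release metadata before merging):

    import os

    SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions():
        """List candidate sysext images/trees in the standard search directories."""
        found = []
        for d in SYSEXT_DIRS:
            if os.path.isdir(d):
                found += sorted(os.path.join(d, name) for name in os.listdir(d))
        return found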
Feb 13 19:20:30.211380 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:20:30.224030 systemd[1]: Starting ensure-sysext.service... Feb 13 19:20:30.233999 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:20:30.257779 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:20:30.261643 systemd[1]: Reload requested from client PID 1312 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:20:30.261669 systemd[1]: Reloading... Feb 13 19:20:30.276132 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:20:30.277402 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:20:30.279636 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:20:30.280148 systemd-tmpfiles[1313]: ACLs are not supported, ignoring. Feb 13 19:20:30.280279 systemd-tmpfiles[1313]: ACLs are not supported, ignoring. Feb 13 19:20:30.288999 systemd-tmpfiles[1313]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:20:30.289197 systemd-tmpfiles[1313]: Skipping /boot Feb 13 19:20:30.313992 systemd-tmpfiles[1313]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:20:30.314213 systemd-tmpfiles[1313]: Skipping /boot Feb 13 19:20:30.363129 zram_generator::config[1342]: No configuration found. Feb 13 19:20:30.544089 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:20:30.636616 systemd[1]: Reloading finished in 373 ms. Feb 13 19:20:30.665010 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:20:30.687231 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:20:30.698032 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:20:30.703624 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:20:30.714489 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:20:30.729352 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:20:30.742502 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:20:30.750549 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:20:30.750898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:20:30.757295 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:20:30.767079 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:20:30.769561 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:20:30.770504 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 19:20:30.770680 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:20:30.774035 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:20:30.786299 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:20:30.796461 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:20:30.796854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:20:30.797193 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:20:30.797365 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:20:30.797494 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:20:30.798601 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:20:30.811923 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:20:30.812276 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:20:30.827280 augenrules[1430]: No rules Feb 13 19:20:30.828719 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:20:30.830331 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:20:30.831966 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:20:30.833284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:20:30.833551 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:20:30.837388 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:20:30.837657 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:20:30.851819 systemd[1]: Finished ensure-sysext.service. Feb 13 19:20:30.855212 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:20:30.855508 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:20:30.865010 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:20:30.873963 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:20:30.874961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:20:30.875028 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:20:30.875161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
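"augenrules: No rules" above simply means /etc/audit/rules.d/ contained no *.rules files when audit-rules.service ran. A sketch of the usual augenrules workflow, assuming the stock auditd tooling:

    $ ls /etc/audit/rules.d/          # empty here, hence "No rules"
    $ sudo augenrules --load          # compile rules.d/*.rules and load them into the kernel
    $ sudo auditctl -l                # list the rules that are actually active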
Feb 13 19:20:30.885017 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:20:30.891931 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:20:30.892632 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:20:30.894803 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:20:30.898154 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:20:30.898428 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:20:30.902349 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:20:30.903877 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:20:30.904128 systemd-udevd[1409]: Using default interface naming scheme 'v255'. Feb 13 19:20:30.907151 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:20:30.907209 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:20:30.922227 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:20:30.943567 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:20:30.957971 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:20:30.974152 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:20:31.071381 systemd-networkd[1458]: lo: Link UP Feb 13 19:20:31.071396 systemd-networkd[1458]: lo: Gained carrier Feb 13 19:20:31.072614 systemd-networkd[1458]: Enumeration completed Feb 13 19:20:31.072848 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:20:31.080919 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:20:31.091976 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:20:31.138845 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:20:31.151604 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:20:31.163263 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:20:31.175233 systemd-resolved[1403]: Positive Trust Anchors: Feb 13 19:20:31.175267 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:20:31.175311 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:20:31.185837 systemd-resolved[1403]: Using system hostname 'srv-g6z5b.gb1.brightbox.com'. 
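The "Positive Trust Anchors" entry is systemd-resolved loading the built-in IANA root DNSSEC key (the ". IN DS 20326 8 2 ..." record), and the negative anchors are private and special-use zones it will not attempt to validate. A sketch of inspecting and tuning this, assuming a writable /etc (the drop-in file name is hypothetical):

    $ resolvectl status                                   # current DNSSEC mode and servers
    $ sudo mkdir -p /etc/systemd/resolved.conf.d
    $ printf '[Resolve]\nDNSSEC=allow-downgrade\n' | \
        sudo tee /etc/systemd/resolved.conf.d/10-dnssec.conf
    $ sudo systemctl restart systemd-resolved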
Feb 13 19:20:31.188800 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:20:31.191003 systemd[1]: Reached target network.target - Network. Feb 13 19:20:31.191665 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:20:31.233874 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:20:31.296313 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1462) Feb 13 19:20:31.310491 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:20:31.322818 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:20:31.323596 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:20:31.323609 systemd-networkd[1458]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:20:31.327968 systemd-networkd[1458]: eth0: Link UP Feb 13 19:20:31.327981 systemd-networkd[1458]: eth0: Gained carrier Feb 13 19:20:31.328007 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:20:31.340917 systemd-networkd[1458]: eth0: DHCPv4 address 10.230.68.30/30, gateway 10.230.68.29 acquired from 10.230.68.29 Feb 13 19:20:31.344591 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. Feb 13 19:20:31.363180 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:20:31.414804 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 19:20:31.425804 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:20:31.431957 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:20:31.495554 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 13 19:20:31.495806 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 19:20:31.505018 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 19:20:31.505312 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 19:20:31.564089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:20:32.659603 systemd-timesyncd[1442]: Contacted time server 217.144.90.26:123 (0.flatcar.pool.ntp.org). Feb 13 19:20:32.659726 systemd-timesyncd[1442]: Initial clock synchronization to Thu 2025-02-13 19:20:32.659424 UTC. Feb 13 19:20:32.661750 systemd-resolved[1403]: Clock change detected. Flushing caches. Feb 13 19:20:32.773457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:20:32.834485 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:20:32.854261 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:20:32.887441 lvm[1495]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:20:32.924624 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:20:32.926429 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:20:32.927181 systemd[1]: Reached target sysinit.target - System Initialization. 
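eth0 above is matched by Flatcar's catch-all zz-default.network and acquires 10.230.68.30/30 over DHCP; the "potentially unpredictable interface name" warning can be avoided by pinning the interface explicitly. A hypothetical override, using standard systemd-networkd keys:

    # /etc/systemd/network/20-eth0.network (sketch; sorts before zz-default.network)
    [Match]
    Name=eth0

    [Network]
    DHCP=ipv4

    $ networkctl status eth0          # confirm which .network file is in effect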
Feb 13 19:20:32.928071 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:20:32.928887 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:20:32.930248 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:20:32.931076 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:20:32.931828 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:20:32.932529 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:20:32.932581 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:20:32.933796 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:20:32.936459 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:20:32.939384 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:20:32.945141 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 19:20:32.946195 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:20:32.946973 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:20:32.950403 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:20:32.951694 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 19:20:32.954217 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:20:32.955792 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:20:32.956602 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:20:32.957257 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:20:32.957969 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:20:32.958027 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:20:32.968851 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:20:32.974065 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:20:32.975968 lvm[1499]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:20:32.979954 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:20:32.983912 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:20:32.989010 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:20:32.989696 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:20:32.998036 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:20:33.010939 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:20:33.023509 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:20:33.054582 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
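The reloads earlier warned that docker.socket still references the legacy /var/run/docker.sock; since the unit is socket-activated ("Listening on docker.socket" above), the path can be corrected with a drop-in instead of editing the shipped unit. A sketch, relying on systemd's list-reset semantics (an empty ListenStream= clears the list):

    $ systemctl cat docker.socket          # shows the ListenStream= line being warned about
    # hypothetical drop-in /etc/systemd/system/docker.socket.d/10-socket-path.conf:
    #   [Socket]
    #   ListenStream=
    #   ListenStream=/run/docker.sock
    $ sudo systemctl daemon-reload && sudo systemctl restart docker.socket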
Feb 13 19:20:33.070370 jq[1503]: false Feb 13 19:20:33.069023 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:20:33.071465 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:20:33.072307 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:20:33.078071 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:20:33.082918 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:20:33.087669 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:20:33.094487 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:20:33.095835 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:20:33.097292 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:20:33.097640 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:20:33.119240 jq[1516]: true Feb 13 19:20:33.117552 dbus-daemon[1502]: [system] SELinux support is enabled Feb 13 19:20:33.131098 extend-filesystems[1504]: Found loop4 Feb 13 19:20:33.131098 extend-filesystems[1504]: Found loop5 Feb 13 19:20:33.131098 extend-filesystems[1504]: Found loop6 Feb 13 19:20:33.131098 extend-filesystems[1504]: Found loop7 Feb 13 19:20:33.131098 extend-filesystems[1504]: Found vda Feb 13 19:20:33.131098 extend-filesystems[1504]: Found vda1 Feb 13 19:20:33.131098 extend-filesystems[1504]: Found vda2 Feb 13 19:20:33.131098 extend-filesystems[1504]: Found vda3 Feb 13 19:20:33.131098 extend-filesystems[1504]: Found usr Feb 13 19:20:33.131098 extend-filesystems[1504]: Found vda4 Feb 13 19:20:33.131098 extend-filesystems[1504]: Found vda6 Feb 13 19:20:33.131098 extend-filesystems[1504]: Found vda7 Feb 13 19:20:33.131098 extend-filesystems[1504]: Found vda9 Feb 13 19:20:33.131098 extend-filesystems[1504]: Checking size of /dev/vda9 Feb 13 19:20:33.125481 dbus-daemon[1502]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1458 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:20:33.134796 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:20:33.157494 dbus-daemon[1502]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:20:33.142946 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:20:33.202178 tar[1520]: linux-amd64/helm Feb 13 19:20:33.142994 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:20:33.202747 jq[1531]: true Feb 13 19:20:33.145224 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:20:33.145255 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:20:33.156502 systemd[1]: motdgen.service: Deactivated successfully. 
Feb 13 19:20:33.158937 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:20:33.181577 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:20:33.196365 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:20:33.212782 update_engine[1515]: I20250213 19:20:33.207543 1515 main.cc:92] Flatcar Update Engine starting Feb 13 19:20:33.213109 extend-filesystems[1504]: Resized partition /dev/vda9 Feb 13 19:20:33.230807 extend-filesystems[1545]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:20:33.232864 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:20:33.236972 update_engine[1515]: I20250213 19:20:33.236020 1515 update_check_scheduler.cc:74] Next update check in 9m6s Feb 13 19:20:33.239847 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:20:33.255241 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Feb 13 19:20:33.252045 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:20:33.497970 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1459) Feb 13 19:20:33.507675 systemd-logind[1514]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 19:20:33.515243 bash[1560]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:20:33.507739 systemd-logind[1514]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:20:33.513617 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:20:33.514140 systemd-logind[1514]: New seat seat0. Feb 13 19:20:33.524574 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:20:33.569114 systemd[1]: Starting sshkeys.service... Feb 13 19:20:33.594791 sshd_keygen[1538]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:20:33.700885 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:20:33.710758 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:20:33.720235 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:20:33.746244 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:20:33.763288 systemd[1]: Started sshd@0-10.230.68.30:22-139.178.89.65:38920.service - OpenSSH per-connection server daemon (139.178.89.65:38920). Feb 13 19:20:33.774854 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:20:33.780614 dbus-daemon[1502]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:20:33.781707 locksmithd[1547]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:20:33.781942 dbus-daemon[1502]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1539 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:20:33.787411 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:20:33.787782 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:20:33.809494 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:20:33.819222 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
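The EXT4 line above marks the start of an online grow: extend-filesystems enlarges /dev/vda9 from 1617920 to 15121403 4k blocks while it is mounted as /. The equivalent manual step, for reference (ext4 supports growing a mounted filesystem):

    $ sudo resize2fs /dev/vda9        # online resize to fill the enlarged partition
    $ df -h /                         # confirm the new capacity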
Feb 13 19:20:33.827788 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 19:20:33.922892 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:20:33.935096 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:20:33.926062 polkitd[1589]: Started polkitd version 121 Feb 13 19:20:33.947432 extend-filesystems[1545]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:20:33.947432 extend-filesystems[1545]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 19:20:33.947432 extend-filesystems[1545]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 19:20:33.946029 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:20:33.960090 extend-filesystems[1504]: Resized filesystem in /dev/vda9 Feb 13 19:20:33.948468 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:20:33.955447 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:20:33.955855 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:20:33.968542 polkitd[1589]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:20:33.968662 polkitd[1589]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:20:33.971559 polkitd[1589]: Finished loading, compiling and executing 2 rules Feb 13 19:20:33.972236 dbus-daemon[1502]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:20:33.972481 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:20:33.974197 polkitd[1589]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:20:34.010300 systemd-hostnamed[1539]: Hostname set to (static) Feb 13 19:20:34.046491 containerd[1536]: time="2025-02-13T19:20:34.045814431Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:20:34.081266 containerd[1536]: time="2025-02-13T19:20:34.080887722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:20:34.092807 containerd[1536]: time="2025-02-13T19:20:34.092696378Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:20:34.092807 containerd[1536]: time="2025-02-13T19:20:34.092756744Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:20:34.093834 containerd[1536]: time="2025-02-13T19:20:34.093043683Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:20:34.093834 containerd[1536]: time="2025-02-13T19:20:34.093416704Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:20:34.093834 containerd[1536]: time="2025-02-13T19:20:34.093446179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:20:34.093834 containerd[1536]: time="2025-02-13T19:20:34.093564005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:20:34.093834 containerd[1536]: time="2025-02-13T19:20:34.093587761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:20:34.094194 containerd[1536]: time="2025-02-13T19:20:34.094164240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:20:34.094285 containerd[1536]: time="2025-02-13T19:20:34.094262782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:20:34.094408 containerd[1536]: time="2025-02-13T19:20:34.094381701Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:20:34.094492 containerd[1536]: time="2025-02-13T19:20:34.094471423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:20:34.094795 containerd[1536]: time="2025-02-13T19:20:34.094748588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:20:34.095245 containerd[1536]: time="2025-02-13T19:20:34.095218216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:20:34.095490 containerd[1536]: time="2025-02-13T19:20:34.095461427Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:20:34.095593 containerd[1536]: time="2025-02-13T19:20:34.095570929Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:20:34.095976 containerd[1536]: time="2025-02-13T19:20:34.095830777Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:20:34.095976 containerd[1536]: time="2025-02-13T19:20:34.095918402Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:20:34.101257 containerd[1536]: time="2025-02-13T19:20:34.100526410Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:20:34.101257 containerd[1536]: time="2025-02-13T19:20:34.100605005Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:20:34.101257 containerd[1536]: time="2025-02-13T19:20:34.100633461Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:20:34.101257 containerd[1536]: time="2025-02-13T19:20:34.100665175Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:20:34.101257 containerd[1536]: time="2025-02-13T19:20:34.100702084Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:20:34.101257 containerd[1536]: time="2025-02-13T19:20:34.100940525Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 19:20:34.101571 containerd[1536]: time="2025-02-13T19:20:34.101543477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:20:34.101886 containerd[1536]: time="2025-02-13T19:20:34.101859439Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:20:34.101990 containerd[1536]: time="2025-02-13T19:20:34.101965558Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:20:34.102085 containerd[1536]: time="2025-02-13T19:20:34.102061600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:20:34.102178 containerd[1536]: time="2025-02-13T19:20:34.102155805Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:20:34.102300 containerd[1536]: time="2025-02-13T19:20:34.102275180Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:20:34.102422 containerd[1536]: time="2025-02-13T19:20:34.102398615Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:20:34.102530 containerd[1536]: time="2025-02-13T19:20:34.102506770Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:20:34.102625 containerd[1536]: time="2025-02-13T19:20:34.102602491Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:20:34.103792 containerd[1536]: time="2025-02-13T19:20:34.102706294Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:20:34.103792 containerd[1536]: time="2025-02-13T19:20:34.102738060Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:20:34.103792 containerd[1536]: time="2025-02-13T19:20:34.102757778Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:20:34.103792 containerd[1536]: time="2025-02-13T19:20:34.102810172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.103792 containerd[1536]: time="2025-02-13T19:20:34.102836358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.103792 containerd[1536]: time="2025-02-13T19:20:34.102863439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.103792 containerd[1536]: time="2025-02-13T19:20:34.102885211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.103792 containerd[1536]: time="2025-02-13T19:20:34.102905155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.103792 containerd[1536]: time="2025-02-13T19:20:34.102926932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.103792 containerd[1536]: time="2025-02-13T19:20:34.102947056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 19:20:34.103792 containerd[1536]: time="2025-02-13T19:20:34.102979043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.103792 containerd[1536]: time="2025-02-13T19:20:34.103009456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.103792 containerd[1536]: time="2025-02-13T19:20:34.103033446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.103792 containerd[1536]: time="2025-02-13T19:20:34.103052801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.104317 containerd[1536]: time="2025-02-13T19:20:34.103074525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.104317 containerd[1536]: time="2025-02-13T19:20:34.103093910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.104317 containerd[1536]: time="2025-02-13T19:20:34.103115809Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:20:34.104317 containerd[1536]: time="2025-02-13T19:20:34.103151936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.104317 containerd[1536]: time="2025-02-13T19:20:34.103177014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.104317 containerd[1536]: time="2025-02-13T19:20:34.103195270Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:20:34.104317 containerd[1536]: time="2025-02-13T19:20:34.103265676Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:20:34.104317 containerd[1536]: time="2025-02-13T19:20:34.103293936Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:20:34.104317 containerd[1536]: time="2025-02-13T19:20:34.103311119Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:20:34.104317 containerd[1536]: time="2025-02-13T19:20:34.103332701Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:20:34.104317 containerd[1536]: time="2025-02-13T19:20:34.103348932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:20:34.104317 containerd[1536]: time="2025-02-13T19:20:34.103372247Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:20:34.104317 containerd[1536]: time="2025-02-13T19:20:34.103396380Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:20:34.104317 containerd[1536]: time="2025-02-13T19:20:34.103421254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:20:34.104947 containerd[1536]: time="2025-02-13T19:20:34.104873301Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:20:34.105788 containerd[1536]: time="2025-02-13T19:20:34.105357240Z" level=info msg="Connect containerd service" Feb 13 19:20:34.105788 containerd[1536]: time="2025-02-13T19:20:34.105413941Z" level=info msg="using legacy CRI server" Feb 13 19:20:34.105788 containerd[1536]: time="2025-02-13T19:20:34.105431851Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:20:34.105788 containerd[1536]: time="2025-02-13T19:20:34.105571021Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:20:34.106800 containerd[1536]: time="2025-02-13T19:20:34.106752262Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:20:34.107329 
containerd[1536]: time="2025-02-13T19:20:34.107267890Z" level=info msg="Start subscribing containerd event" Feb 13 19:20:34.107396 containerd[1536]: time="2025-02-13T19:20:34.107342202Z" level=info msg="Start recovering state" Feb 13 19:20:34.107462 containerd[1536]: time="2025-02-13T19:20:34.107439682Z" level=info msg="Start event monitor" Feb 13 19:20:34.107506 containerd[1536]: time="2025-02-13T19:20:34.107468346Z" level=info msg="Start snapshots syncer" Feb 13 19:20:34.107506 containerd[1536]: time="2025-02-13T19:20:34.107486854Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:20:34.107506 containerd[1536]: time="2025-02-13T19:20:34.107502442Z" level=info msg="Start streaming server" Feb 13 19:20:34.109321 containerd[1536]: time="2025-02-13T19:20:34.107738540Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:20:34.109321 containerd[1536]: time="2025-02-13T19:20:34.107848931Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:20:34.109321 containerd[1536]: time="2025-02-13T19:20:34.107941599Z" level=info msg="containerd successfully booted in 0.063505s" Feb 13 19:20:34.108069 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:20:34.127077 systemd-networkd[1458]: eth0: Gained IPv6LL Feb 13 19:20:34.131207 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:20:34.134140 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:20:34.145519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:20:34.154051 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:20:34.277345 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:20:34.550384 tar[1520]: linux-amd64/LICENSE Feb 13 19:20:34.550384 tar[1520]: linux-amd64/README.md Feb 13 19:20:34.576818 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:20:34.837938 sshd[1585]: Accepted publickey for core from 139.178.89.65 port 38920 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:20:34.840518 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:34.860496 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:20:34.874018 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:20:34.878194 systemd-logind[1514]: New session 1 of user core. Feb 13 19:20:34.895484 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:20:34.905929 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:20:34.933929 (systemd)[1626]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:20:34.938454 systemd-logind[1514]: New session c1 of user core. Feb 13 19:20:35.165638 systemd[1626]: Queued start job for default target default.target. Feb 13 19:20:35.172369 systemd[1626]: Created slice app.slice - User Application Slice. Feb 13 19:20:35.172436 systemd[1626]: Reached target paths.target - Paths. Feb 13 19:20:35.172636 systemd[1626]: Reached target timers.target - Timers. Feb 13 19:20:35.175084 systemd[1626]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:20:35.212190 systemd[1626]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:20:35.212539 systemd[1626]: Reached target sockets.target - Sockets. 
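The long "Start cri plugin with config {...}" dump above is containerd echoing its effective CRI configuration; note Snapshotter:overlayfs and SystemdCgroup:true for the runc runtime. In config.toml form that setting looks like the following sketch (key path as in containerd 1.7):

    $ containerd config default > /tmp/config.toml    # full default config for comparison
    # the fragment corresponding to Options:map[SystemdCgroup:true] above:
    #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    #     SystemdCgroup = true
    $ sudo ctr version                # confirms the daemon on /run/containerd/containerd.sock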
Feb 13 19:20:35.212791 systemd[1626]: Reached target basic.target - Basic System. Feb 13 19:20:35.212902 systemd[1626]: Reached target default.target - Main User Target. Feb 13 19:20:35.212973 systemd[1626]: Startup finished in 259ms. Feb 13 19:20:35.213487 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:20:35.223233 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:20:35.536995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:20:35.549773 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:20:35.634901 systemd-networkd[1458]: eth0: Ignoring DHCPv6 address 2a02:1348:179:9107:24:19ff:fee6:441e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:9107:24:19ff:fee6:441e/64 assigned by NDisc. Feb 13 19:20:35.634916 systemd-networkd[1458]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 13 19:20:35.864338 systemd[1]: Started sshd@1-10.230.68.30:22-139.178.89.65:57958.service - OpenSSH per-connection server daemon (139.178.89.65:57958). Feb 13 19:20:36.325935 kubelet[1640]: E0213 19:20:36.325545 1640 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:20:36.328723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:20:36.329061 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:20:36.329822 systemd[1]: kubelet.service: Consumed 1.498s CPU time, 245.9M memory peak. Feb 13 19:20:36.759495 sshd[1648]: Accepted publickey for core from 139.178.89.65 port 57958 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:20:36.761779 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:36.771252 systemd-logind[1514]: New session 2 of user core. Feb 13 19:20:36.779217 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:20:37.376846 sshd[1654]: Connection closed by 139.178.89.65 port 57958 Feb 13 19:20:37.377813 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:37.383041 systemd[1]: sshd@1-10.230.68.30:22-139.178.89.65:57958.service: Deactivated successfully. Feb 13 19:20:37.385645 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:20:37.386894 systemd-logind[1514]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:20:37.388706 systemd-logind[1514]: Removed session 2. Feb 13 19:20:37.549318 systemd[1]: Started sshd@2-10.230.68.30:22-139.178.89.65:57962.service - OpenSSH per-connection server daemon (139.178.89.65:57962). Feb 13 19:20:38.446522 sshd[1661]: Accepted publickey for core from 139.178.89.65 port 57962 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:20:38.448662 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:38.456945 systemd-logind[1514]: New session 3 of user core. Feb 13 19:20:38.467054 systemd[1]: Started session-3.scope - Session 3 of User core. 
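The kubelet exit above (status=1, "open /var/lib/kubelet/config.yaml: no such file or directory") is expected on a node that has not yet joined a cluster; systemd keeps restarting it, as the later "Scheduled restart job" lines show. On a kubeadm-based setup that file is written during join; a sketch with placeholder values:

    $ sudo kubeadm join <api-server>:6443 --token <token> \
          --discovery-token-ca-cert-hash sha256:<hash>
    $ systemctl status kubelet        # should stay up once config.yaml exists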
Feb 13 19:20:39.046521 login[1594]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 19:20:39.051473 login[1597]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 19:20:39.057318 systemd-logind[1514]: New session 4 of user core. Feb 13 19:20:39.067948 sshd[1663]: Connection closed by 139.178.89.65 port 57962 Feb 13 19:20:39.068499 sshd-session[1661]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:39.069984 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:20:39.075579 systemd-logind[1514]: New session 5 of user core. Feb 13 19:20:39.085124 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:20:39.091022 systemd[1]: sshd@2-10.230.68.30:22-139.178.89.65:57962.service: Deactivated successfully. Feb 13 19:20:39.094229 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:20:39.096426 systemd-logind[1514]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:20:39.099964 systemd-logind[1514]: Removed session 3. Feb 13 19:20:40.297665 coreos-metadata[1501]: Feb 13 19:20:40.297 WARN failed to locate config-drive, using the metadata service API instead Feb 13 19:20:40.322043 coreos-metadata[1501]: Feb 13 19:20:40.322 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Feb 13 19:20:40.329080 coreos-metadata[1501]: Feb 13 19:20:40.329 INFO Fetch failed with 404: resource not found Feb 13 19:20:40.329258 coreos-metadata[1501]: Feb 13 19:20:40.329 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 19:20:40.329738 coreos-metadata[1501]: Feb 13 19:20:40.329 INFO Fetch successful Feb 13 19:20:40.329906 coreos-metadata[1501]: Feb 13 19:20:40.329 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 13 19:20:40.342322 coreos-metadata[1501]: Feb 13 19:20:40.342 INFO Fetch successful Feb 13 19:20:40.342543 coreos-metadata[1501]: Feb 13 19:20:40.342 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 13 19:20:40.354928 coreos-metadata[1501]: Feb 13 19:20:40.354 INFO Fetch successful Feb 13 19:20:40.355171 coreos-metadata[1501]: Feb 13 19:20:40.355 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 13 19:20:40.365898 coreos-metadata[1501]: Feb 13 19:20:40.365 INFO Fetch successful Feb 13 19:20:40.366049 coreos-metadata[1501]: Feb 13 19:20:40.366 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 13 19:20:40.382587 coreos-metadata[1501]: Feb 13 19:20:40.382 INFO Fetch successful Feb 13 19:20:40.405206 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:20:40.406504 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
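coreos-metadata above falls back from the missing config-drive to the EC2-style HTTP metadata service; the same endpoints it fetches can be queried by hand from the instance:

    $ curl -s http://169.254.169.254/latest/meta-data/hostname
    $ curl -s http://169.254.169.254/latest/meta-data/instance-id
    $ curl -s http://169.254.169.254/latest/meta-data/public-ipv4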
Feb 13 19:20:40.835361 coreos-metadata[1579]: Feb 13 19:20:40.835 WARN failed to locate config-drive, using the metadata service API instead Feb 13 19:20:40.856692 coreos-metadata[1579]: Feb 13 19:20:40.856 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 13 19:20:40.879905 coreos-metadata[1579]: Feb 13 19:20:40.879 INFO Fetch successful Feb 13 19:20:40.880226 coreos-metadata[1579]: Feb 13 19:20:40.880 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:20:40.918711 coreos-metadata[1579]: Feb 13 19:20:40.918 INFO Fetch successful Feb 13 19:20:40.922438 unknown[1579]: wrote ssh authorized keys file for user: core Feb 13 19:20:40.943800 update-ssh-keys[1703]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:20:40.945864 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:20:40.949444 systemd[1]: Finished sshkeys.service. Feb 13 19:20:40.951805 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:20:40.952286 systemd[1]: Startup finished in 1.647s (kernel) + 21.527s (initrd) + 12.713s (userspace) = 35.887s. Feb 13 19:20:46.332226 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:20:46.344057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:20:46.609512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:20:46.620224 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:20:46.690782 kubelet[1715]: E0213 19:20:46.690690 1715 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:20:46.695661 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:20:46.695931 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:20:46.696636 systemd[1]: kubelet.service: Consumed 308ms CPU time, 97M memory peak. Feb 13 19:20:49.230095 systemd[1]: Started sshd@3-10.230.68.30:22-139.178.89.65:49800.service - OpenSSH per-connection server daemon (139.178.89.65:49800). Feb 13 19:20:50.128320 sshd[1723]: Accepted publickey for core from 139.178.89.65 port 49800 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:20:50.130347 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:50.139359 systemd-logind[1514]: New session 6 of user core. Feb 13 19:20:50.146085 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:20:50.751842 sshd[1725]: Connection closed by 139.178.89.65 port 49800 Feb 13 19:20:50.751664 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:50.757916 systemd[1]: sshd@3-10.230.68.30:22-139.178.89.65:49800.service: Deactivated successfully. Feb 13 19:20:50.761807 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:20:50.763714 systemd-logind[1514]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:20:50.765193 systemd-logind[1514]: Removed session 6. 
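The "Startup finished" line breaks the 35.887s boot into kernel, initrd, and userspace phases; the same figures, plus per-unit costs, are available after boot via systemd-analyze:

    $ systemd-analyze                 # kernel + initrd + userspace split, as logged above
    $ systemd-analyze blame | head    # slowest units first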
Feb 13 19:20:50.920281 systemd[1]: Started sshd@4-10.230.68.30:22-139.178.89.65:49810.service - OpenSSH per-connection server daemon (139.178.89.65:49810). Feb 13 19:20:51.812005 sshd[1731]: Accepted publickey for core from 139.178.89.65 port 49810 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:20:51.813914 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:51.820593 systemd-logind[1514]: New session 7 of user core. Feb 13 19:20:51.828016 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:20:52.428802 sshd[1733]: Connection closed by 139.178.89.65 port 49810 Feb 13 19:20:52.428009 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:52.434123 systemd[1]: sshd@4-10.230.68.30:22-139.178.89.65:49810.service: Deactivated successfully. Feb 13 19:20:52.436971 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:20:52.438074 systemd-logind[1514]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:20:52.439566 systemd-logind[1514]: Removed session 7. Feb 13 19:20:52.586142 systemd[1]: Started sshd@5-10.230.68.30:22-139.178.89.65:49812.service - OpenSSH per-connection server daemon (139.178.89.65:49812). Feb 13 19:20:53.480453 sshd[1739]: Accepted publickey for core from 139.178.89.65 port 49812 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:20:53.482719 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:53.489150 systemd-logind[1514]: New session 8 of user core. Feb 13 19:20:53.499989 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:20:54.103362 sshd[1741]: Connection closed by 139.178.89.65 port 49812 Feb 13 19:20:54.104251 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:54.108798 systemd-logind[1514]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:20:54.109147 systemd[1]: sshd@5-10.230.68.30:22-139.178.89.65:49812.service: Deactivated successfully. Feb 13 19:20:54.111245 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:20:54.113256 systemd-logind[1514]: Removed session 8. Feb 13 19:20:54.261117 systemd[1]: Started sshd@6-10.230.68.30:22-139.178.89.65:49814.service - OpenSSH per-connection server daemon (139.178.89.65:49814). Feb 13 19:20:55.151319 sshd[1747]: Accepted publickey for core from 139.178.89.65 port 49814 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:20:55.153164 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:55.161848 systemd-logind[1514]: New session 9 of user core. Feb 13 19:20:55.167003 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:20:55.638610 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:20:55.639119 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:20:55.655983 sudo[1750]: pam_unix(sudo:session): session closed for user root Feb 13 19:20:55.800719 sshd[1749]: Connection closed by 139.178.89.65 port 49814 Feb 13 19:20:55.799688 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:55.803510 systemd[1]: sshd@6-10.230.68.30:22-139.178.89.65:49814.service: Deactivated successfully. Feb 13 19:20:55.805696 systemd[1]: session-9.scope: Deactivated successfully. 
Feb 13 19:20:55.807832 systemd-logind[1514]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:20:55.809236 systemd-logind[1514]: Removed session 9. Feb 13 19:20:55.968293 systemd[1]: Started sshd@7-10.230.68.30:22-139.178.89.65:59900.service - OpenSSH per-connection server daemon (139.178.89.65:59900). Feb 13 19:20:56.703375 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:20:56.711015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:20:56.857607 sshd[1756]: Accepted publickey for core from 139.178.89.65 port 59900 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:20:56.861218 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:56.872634 systemd-logind[1514]: New session 10 of user core. Feb 13 19:20:56.877024 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:20:56.930006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:20:56.932528 (kubelet)[1767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:20:57.032661 kubelet[1767]: E0213 19:20:57.032383 1767 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:20:57.035887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:20:57.036154 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:20:57.037256 systemd[1]: kubelet.service: Consumed 217ms CPU time, 96.2M memory peak. Feb 13 19:20:57.335832 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:20:57.336804 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:20:57.342321 sudo[1775]: pam_unix(sudo:session): session closed for user root Feb 13 19:20:57.350592 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:20:57.351073 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:20:57.379258 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:20:57.417649 augenrules[1797]: No rules Feb 13 19:20:57.419369 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:20:57.419839 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:20:57.421038 sudo[1774]: pam_unix(sudo:session): session closed for user root Feb 13 19:20:57.563983 sshd[1761]: Connection closed by 139.178.89.65 port 59900 Feb 13 19:20:57.564516 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:57.569423 systemd[1]: sshd@7-10.230.68.30:22-139.178.89.65:59900.service: Deactivated successfully. Feb 13 19:20:57.571917 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:20:57.573839 systemd-logind[1514]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:20:57.575496 systemd-logind[1514]: Removed session 10. 
Feb 13 19:20:57.727954 systemd[1]: Started sshd@8-10.230.68.30:22-139.178.89.65:59906.service - OpenSSH per-connection server daemon (139.178.89.65:59906). Feb 13 19:20:58.637203 sshd[1806]: Accepted publickey for core from 139.178.89.65 port 59906 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:20:58.639103 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:58.646444 systemd-logind[1514]: New session 11 of user core. Feb 13 19:20:58.653031 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:20:59.115985 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:20:59.116439 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:20:59.714390 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:20:59.714471 (dockerd)[1825]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:21:00.338829 dockerd[1825]: time="2025-02-13T19:21:00.338543792Z" level=info msg="Starting up" Feb 13 19:21:00.494498 dockerd[1825]: time="2025-02-13T19:21:00.494447402Z" level=info msg="Loading containers: start." Feb 13 19:21:00.717932 kernel: Initializing XFRM netlink socket Feb 13 19:21:00.821490 systemd-networkd[1458]: docker0: Link UP Feb 13 19:21:00.859560 dockerd[1825]: time="2025-02-13T19:21:00.859486908Z" level=info msg="Loading containers: done." Feb 13 19:21:00.877825 dockerd[1825]: time="2025-02-13T19:21:00.877701886Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:21:00.878004 dockerd[1825]: time="2025-02-13T19:21:00.877832661Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:21:00.878004 dockerd[1825]: time="2025-02-13T19:21:00.877980006Z" level=info msg="Daemon has completed initialization" Feb 13 19:21:00.915953 dockerd[1825]: time="2025-02-13T19:21:00.915481794Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:21:00.915662 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:21:02.368803 containerd[1536]: time="2025-02-13T19:21:02.367072327Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:21:03.487314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1237856967.mount: Deactivated successfully. Feb 13 19:21:05.686361 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
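dockerd above ends its startup with "API listen on /run/docker.sock". As a sketch only (it assumes read access to the socket, e.g. root or the docker group), the Engine API's real /_ping endpoint can confirm the daemon is answering on that socket; the raw-HTTP approach here is illustrative:

import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/run/docker.sock")  # socket path from the log line above
s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
print(s.recv(4096).decode(errors="replace"))  # response body should be "OK"
s.close()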
Feb 13 19:21:06.607130 containerd[1536]: time="2025-02-13T19:21:06.606012279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:06.607130 containerd[1536]: time="2025-02-13T19:21:06.606252763Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678222" Feb 13 19:21:06.609529 containerd[1536]: time="2025-02-13T19:21:06.609463087Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:06.614566 containerd[1536]: time="2025-02-13T19:21:06.614494289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:06.616992 containerd[1536]: time="2025-02-13T19:21:06.616065648Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 4.248819966s" Feb 13 19:21:06.616992 containerd[1536]: time="2025-02-13T19:21:06.616180159Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 19:21:06.650143 containerd[1536]: time="2025-02-13T19:21:06.650083868Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:21:07.083891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 19:21:07.095095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:21:07.419040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:21:07.436231 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:21:07.555003 kubelet[2093]: E0213 19:21:07.554906 2093 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:21:07.558624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:21:07.558943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:21:07.559556 systemd[1]: kubelet.service: Consumed 387ms CPU time, 96.2M memory peak. Feb 13 19:21:09.477213 systemd[1]: Started sshd@9-10.230.68.30:22-157.230.245.72:44632.service - OpenSSH per-connection server daemon (157.230.245.72:44632). 
Feb 13 19:21:09.750992 containerd[1536]: time="2025-02-13T19:21:09.750533819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:09.752380 containerd[1536]: time="2025-02-13T19:21:09.752334976Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611553" Feb 13 19:21:09.753570 containerd[1536]: time="2025-02-13T19:21:09.753502049Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:09.758327 containerd[1536]: time="2025-02-13T19:21:09.757895294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:09.759927 containerd[1536]: time="2025-02-13T19:21:09.759680086Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 3.109290795s" Feb 13 19:21:09.759927 containerd[1536]: time="2025-02-13T19:21:09.759733025Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 19:21:09.847597 containerd[1536]: time="2025-02-13T19:21:09.847431590Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:21:10.457874 sshd[2108]: Connection closed by authenticating user root 157.230.245.72 port 44632 [preauth] Feb 13 19:21:10.462208 systemd[1]: sshd@9-10.230.68.30:22-157.230.245.72:44632.service: Deactivated successfully. 
Feb 13 19:21:11.865163 containerd[1536]: time="2025-02-13T19:21:11.865085734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:11.866740 containerd[1536]: time="2025-02-13T19:21:11.866696259Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782138" Feb 13 19:21:11.867580 containerd[1536]: time="2025-02-13T19:21:11.867512402Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:11.872431 containerd[1536]: time="2025-02-13T19:21:11.872363245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:11.874647 containerd[1536]: time="2025-02-13T19:21:11.874163332Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 2.026667647s" Feb 13 19:21:11.874647 containerd[1536]: time="2025-02-13T19:21:11.874208786Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 19:21:11.907425 containerd[1536]: time="2025-02-13T19:21:11.907355872Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:21:13.564103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546870174.mount: Deactivated successfully. 
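The mount unit names above (var-lib-containerd-tmpmounts-containerd\x2dmount….mount) show systemd's unit-name escaping: "/" separators become "-" and a literal "-" becomes "\x2d". A simplified sketch of that mapping follows; real systemd-escape(1) handles more cases (dots, other non-alphanumerics), so treat this as illustrative:

def systemd_escape_path(path: str) -> str:
    # Simplified: "/" -> "-", literal "-" -> "\x2d"; not the full
    # systemd-escape(1) algorithm.
    parts = path.strip("/").split("/")
    return "-".join(p.replace("-", "\\x2d") for p in parts)

print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount2546870174") + ".mount")
# -> var-lib-containerd-tmpmounts-containerd\x2dmount2546870174.mount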
Feb 13 19:21:14.319796 containerd[1536]: time="2025-02-13T19:21:14.318921607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:14.320376 containerd[1536]: time="2025-02-13T19:21:14.319947085Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057866" Feb 13 19:21:14.321183 containerd[1536]: time="2025-02-13T19:21:14.321115616Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:14.324490 containerd[1536]: time="2025-02-13T19:21:14.324412378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:14.326143 containerd[1536]: time="2025-02-13T19:21:14.325525684Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.418108931s" Feb 13 19:21:14.326143 containerd[1536]: time="2025-02-13T19:21:14.325570213Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 19:21:14.356449 containerd[1536]: time="2025-02-13T19:21:14.356396557Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:21:15.008224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1614933579.mount: Deactivated successfully. 
Feb 13 19:21:16.271833 containerd[1536]: time="2025-02-13T19:21:16.270554589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:16.272865 containerd[1536]: time="2025-02-13T19:21:16.272534734Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Feb 13 19:21:16.273641 containerd[1536]: time="2025-02-13T19:21:16.273603498Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:16.278125 containerd[1536]: time="2025-02-13T19:21:16.278087888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:16.280311 containerd[1536]: time="2025-02-13T19:21:16.280269928Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.923510849s" Feb 13 19:21:16.280460 containerd[1536]: time="2025-02-13T19:21:16.280429270Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 19:21:16.312969 containerd[1536]: time="2025-02-13T19:21:16.312914815Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:21:16.881126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708473501.mount: Deactivated successfully. 
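Each "Pulled image … in Ns" entry above pairs a byte size with a wall-clock duration, so effective pull throughput can be derived. The sizes and times below are copied from the log; the throughput figures are computed, not logged:

# (size_bytes, seconds) exactly as printed in the "Pulled image" lines above.
pulls = {
    "kube-apiserver:v1.30.10": (32_675_014, 4.248819966),
    "kube-controller-manager:v1.30.10": (31_058_091, 3.109290795),
    "kube-scheduler:v1.30.10": (19_228_694, 2.026667647),
    "kube-proxy:v1.30.10": (29_056_877, 2.418108931),
    "coredns:v1.11.1": (18_182_961, 1.923510849),
}
for image, (size, secs) in pulls.items():
    print(f"{image}: {size / secs / 1e6:.1f} MB/s")
# Roughly 7.7 to 12.0 MB/s effective for these five registry.k8s.io pulls.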
Feb 13 19:21:16.905033 containerd[1536]: time="2025-02-13T19:21:16.904907312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:16.906031 containerd[1536]: time="2025-02-13T19:21:16.905927101Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Feb 13 19:21:16.907411 containerd[1536]: time="2025-02-13T19:21:16.907347720Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:16.910248 containerd[1536]: time="2025-02-13T19:21:16.910192157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:16.911439 containerd[1536]: time="2025-02-13T19:21:16.911277389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 598.312ms" Feb 13 19:21:16.911439 containerd[1536]: time="2025-02-13T19:21:16.911316566Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 19:21:16.957029 containerd[1536]: time="2025-02-13T19:21:16.956866052Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:21:17.580134 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 19:21:17.589922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:21:17.601944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2919221935.mount: Deactivated successfully. Feb 13 19:21:17.877296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:21:17.883399 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:21:17.982552 kubelet[2210]: E0213 19:21:17.982347 2210 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:21:17.986960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:21:17.987278 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:21:17.988089 systemd[1]: kubelet.service: Consumed 205ms CPU time, 97.2M memory peak. Feb 13 19:21:18.077809 update_engine[1515]: I20250213 19:21:18.077006 1515 update_attempter.cc:509] Updating boot flags... 
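The three "Scheduled restart job" entries (counters 2, 3 and 4, at 19:20:56.7, 19:21:07.1 and 19:21:17.6) land about 10.4 s apart, consistent with a RestartSec= of roughly 10 s on kubelet.service; that interpretation is inferred, only the timestamps below come from the log:

from datetime import datetime

restarts = ["19:20:56.703375", "19:21:07.083891", "19:21:17.580134"]
ts = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
print([round((b - a).total_seconds(), 2) for a, b in zip(ts, ts[1:])])
# -> [10.38, 10.5]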
Feb 13 19:21:18.270947 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2231) Feb 13 19:21:18.510821 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2231) Feb 13 19:21:22.907021 containerd[1536]: time="2025-02-13T19:21:22.906910857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:22.908692 containerd[1536]: time="2025-02-13T19:21:22.908610511Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Feb 13 19:21:22.909612 containerd[1536]: time="2025-02-13T19:21:22.909554720Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:22.913631 containerd[1536]: time="2025-02-13T19:21:22.913559617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:22.915489 containerd[1536]: time="2025-02-13T19:21:22.915284940Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 5.958369306s" Feb 13 19:21:22.915489 containerd[1536]: time="2025-02-13T19:21:22.915329045Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 19:21:27.052886 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:21:27.053258 systemd[1]: kubelet.service: Consumed 205ms CPU time, 97.2M memory peak. Feb 13 19:21:27.067150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:21:27.101969 systemd[1]: Reload requested from client PID 2334 ('systemctl') (unit session-11.scope)... Feb 13 19:21:27.102057 systemd[1]: Reloading... Feb 13 19:21:27.315809 zram_generator::config[2381]: No configuration found. Feb 13 19:21:27.503884 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:21:27.654058 systemd[1]: Reloading finished in 550 ms. Feb 13 19:21:27.797097 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:21:27.797260 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:21:27.797711 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:21:27.797818 systemd[1]: kubelet.service: Consumed 246ms CPU time, 82.1M memory peak. Feb 13 19:21:27.805253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:21:27.983647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
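In the reload sequence above, "Control process exited, code=killed, status=15/TERM" is systemd reporting that the kubelet control process ended on signal 15, i.e. SIGTERM, the normal stop signal. A one-line check of the signal number:

import signal

print(signal.Signals(15).name)  # -> SIGTERM, matching "status=15/TERM" above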
Feb 13 19:21:27.998317 (kubelet)[2445]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:21:28.075587 kubelet[2445]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:21:28.075587 kubelet[2445]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:21:28.075587 kubelet[2445]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:21:28.076265 kubelet[2445]: I0213 19:21:28.076035 2445 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:21:28.558989 kubelet[2445]: I0213 19:21:28.558913 2445 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:21:28.558989 kubelet[2445]: I0213 19:21:28.558961 2445 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:21:28.559327 kubelet[2445]: I0213 19:21:28.559231 2445 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:21:28.589607 kubelet[2445]: E0213 19:21:28.589015 2445 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.68.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:28.589607 kubelet[2445]: I0213 19:21:28.589321 2445 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:21:28.607073 kubelet[2445]: I0213 19:21:28.606922 2445 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:21:28.607826 kubelet[2445]: I0213 19:21:28.607607 2445 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:21:28.609147 kubelet[2445]: I0213 19:21:28.607654 2445 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-g6z5b.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:21:28.609841 kubelet[2445]: I0213 19:21:28.609785 2445 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:21:28.609841 kubelet[2445]: I0213 19:21:28.609818 2445 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:21:28.611685 kubelet[2445]: I0213 19:21:28.611619 2445 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:21:28.613455 kubelet[2445]: W0213 19:21:28.613369 2445 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.68.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-g6z5b.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:28.613672 kubelet[2445]: E0213 19:21:28.613633 2445 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.68.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-g6z5b.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:28.614186 kubelet[2445]: I0213 19:21:28.614153 2445 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:21:28.614186 kubelet[2445]: I0213 19:21:28.614186 2445 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:21:28.614370 kubelet[2445]: I0213 19:21:28.614253 2445 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:21:28.614370 kubelet[2445]: I0213 19:21:28.614304 2445 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:21:28.618404 kubelet[2445]: W0213 19:21:28.617430 2445 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.68.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:28.618404 kubelet[2445]: E0213 19:21:28.617484 2445 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.68.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:28.618404 kubelet[2445]: I0213 19:21:28.618041 2445 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:21:28.620599 kubelet[2445]: I0213 19:21:28.619478 2445 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:21:28.620599 kubelet[2445]: W0213 19:21:28.619608 2445 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:21:28.620869 kubelet[2445]: I0213 19:21:28.620608 2445 server.go:1264] "Started kubelet" Feb 13 19:21:28.627822 kubelet[2445]: E0213 19:21:28.627358 2445 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.68.30:6443/api/v1/namespaces/default/events\": dial tcp 10.230.68.30:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-g6z5b.gb1.brightbox.com.1823dad4ce150265 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-g6z5b.gb1.brightbox.com,UID:srv-g6z5b.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-g6z5b.gb1.brightbox.com,},FirstTimestamp:2025-02-13 19:21:28.620548709 +0000 UTC m=+0.612680705,LastTimestamp:2025-02-13 19:21:28.620548709 +0000 UTC m=+0.612680705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-g6z5b.gb1.brightbox.com,}" Feb 13 19:21:28.627822 kubelet[2445]: I0213 19:21:28.627667 2445 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:21:28.630565 kubelet[2445]: I0213 19:21:28.629877 2445 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:21:28.630565 kubelet[2445]: I0213 19:21:28.630526 2445 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:21:28.633751 kubelet[2445]: I0213 19:21:28.633728 2445 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:21:28.637636 kubelet[2445]: I0213 19:21:28.637422 2445 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:21:28.649653 kubelet[2445]: I0213 19:21:28.649617 2445 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:21:28.650925 kubelet[2445]: I0213 19:21:28.650855 2445 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:21:28.652310 kubelet[2445]: I0213 19:21:28.651684 2445 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:21:28.652310 kubelet[2445]: W0213 19:21:28.652150 2445 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.68.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 
13 19:21:28.652310 kubelet[2445]: E0213 19:21:28.652204 2445 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.68.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:28.652482 kubelet[2445]: E0213 19:21:28.652287 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.68.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g6z5b.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.68.30:6443: connect: connection refused" interval="200ms" Feb 13 19:21:28.652482 kubelet[2445]: E0213 19:21:28.652442 2445 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:21:28.655230 kubelet[2445]: I0213 19:21:28.655202 2445 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:21:28.655357 kubelet[2445]: I0213 19:21:28.655324 2445 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:21:28.658575 kubelet[2445]: I0213 19:21:28.658546 2445 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:21:28.672147 kubelet[2445]: I0213 19:21:28.671980 2445 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:21:28.673830 kubelet[2445]: I0213 19:21:28.673424 2445 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:21:28.673830 kubelet[2445]: I0213 19:21:28.673483 2445 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:21:28.673830 kubelet[2445]: I0213 19:21:28.673528 2445 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:21:28.673830 kubelet[2445]: E0213 19:21:28.673623 2445 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:21:28.684993 kubelet[2445]: W0213 19:21:28.684829 2445 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.68.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:28.684993 kubelet[2445]: E0213 19:21:28.684895 2445 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.68.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:28.704836 kubelet[2445]: I0213 19:21:28.704800 2445 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:21:28.705077 kubelet[2445]: I0213 19:21:28.704848 2445 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:21:28.705077 kubelet[2445]: I0213 19:21:28.704887 2445 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:21:28.709091 kubelet[2445]: I0213 19:21:28.709021 2445 policy_none.go:49] "None policy: Start" Feb 13 19:21:28.710227 kubelet[2445]: I0213 19:21:28.710198 2445 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:21:28.710311 kubelet[2445]: I0213 19:21:28.710242 2445 state_mem.go:35] "Initializing new in-memory 
state store" Feb 13 19:21:28.720558 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:21:28.732879 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:21:28.739724 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:21:28.751956 kubelet[2445]: I0213 19:21:28.751079 2445 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:21:28.751956 kubelet[2445]: I0213 19:21:28.751356 2445 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:21:28.751956 kubelet[2445]: I0213 19:21:28.751568 2445 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:21:28.754036 kubelet[2445]: I0213 19:21:28.753619 2445 kubelet_node_status.go:73] "Attempting to register node" node="srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.754683 kubelet[2445]: E0213 19:21:28.754652 2445 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.68.30:6443/api/v1/nodes\": dial tcp 10.230.68.30:6443: connect: connection refused" node="srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.754957 kubelet[2445]: E0213 19:21:28.754932 2445 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-g6z5b.gb1.brightbox.com\" not found" Feb 13 19:21:28.774446 kubelet[2445]: I0213 19:21:28.774310 2445 topology_manager.go:215] "Topology Admit Handler" podUID="7bc7c4138375e48bbe22dab6d9332b1c" podNamespace="kube-system" podName="kube-apiserver-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.777576 kubelet[2445]: I0213 19:21:28.777097 2445 topology_manager.go:215] "Topology Admit Handler" podUID="7e5c48843ff453cbdacca33fc0e1f64c" podNamespace="kube-system" podName="kube-controller-manager-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.779337 kubelet[2445]: I0213 19:21:28.779294 2445 topology_manager.go:215] "Topology Admit Handler" podUID="c22c5ebcd3b0eac40822cd1557dc6270" podNamespace="kube-system" podName="kube-scheduler-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.788512 systemd[1]: Created slice kubepods-burstable-pod7bc7c4138375e48bbe22dab6d9332b1c.slice - libcontainer container kubepods-burstable-pod7bc7c4138375e48bbe22dab6d9332b1c.slice. Feb 13 19:21:28.806117 systemd[1]: Created slice kubepods-burstable-pod7e5c48843ff453cbdacca33fc0e1f64c.slice - libcontainer container kubepods-burstable-pod7e5c48843ff453cbdacca33fc0e1f64c.slice. Feb 13 19:21:28.823757 systemd[1]: Created slice kubepods-burstable-podc22c5ebcd3b0eac40822cd1557dc6270.slice - libcontainer container kubepods-burstable-podc22c5ebcd3b0eac40822cd1557dc6270.slice. 
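The slice names above encode the pod QoS class and UID directly: the admitted pod with UID 7bc7c4138375e48bbe22dab6d9332b1c lands in kubepods-burstable-pod7bc7c4138375e48bbe22dab6d9332b1c.slice under kubepods-burstable.slice. A sketch of that naming; with the systemd cgroup driver, dashes inside a UID would additionally be escaped, which these hex UIDs happen to avoid:

def pod_slice(uid: str, qos: str = "burstable") -> str:
    # Naming as observed in the "Created slice" entries above; escaping of
    # "-" in UIDs is omitted since these UIDs contain none.
    return f"kubepods-{qos}-pod{uid}.slice"

print(pod_slice("7bc7c4138375e48bbe22dab6d9332b1c"))
# -> kubepods-burstable-pod7bc7c4138375e48bbe22dab6d9332b1c.slice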
Feb 13 19:21:28.852942 kubelet[2445]: E0213 19:21:28.852870 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.68.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g6z5b.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.68.30:6443: connect: connection refused" interval="400ms" Feb 13 19:21:28.952710 kubelet[2445]: I0213 19:21:28.952634 2445 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bc7c4138375e48bbe22dab6d9332b1c-k8s-certs\") pod \"kube-apiserver-srv-g6z5b.gb1.brightbox.com\" (UID: \"7bc7c4138375e48bbe22dab6d9332b1c\") " pod="kube-system/kube-apiserver-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.952710 kubelet[2445]: I0213 19:21:28.952714 2445 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e5c48843ff453cbdacca33fc0e1f64c-k8s-certs\") pod \"kube-controller-manager-srv-g6z5b.gb1.brightbox.com\" (UID: \"7e5c48843ff453cbdacca33fc0e1f64c\") " pod="kube-system/kube-controller-manager-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.952997 kubelet[2445]: I0213 19:21:28.952751 2445 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c22c5ebcd3b0eac40822cd1557dc6270-kubeconfig\") pod \"kube-scheduler-srv-g6z5b.gb1.brightbox.com\" (UID: \"c22c5ebcd3b0eac40822cd1557dc6270\") " pod="kube-system/kube-scheduler-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.952997 kubelet[2445]: I0213 19:21:28.952808 2445 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bc7c4138375e48bbe22dab6d9332b1c-ca-certs\") pod \"kube-apiserver-srv-g6z5b.gb1.brightbox.com\" (UID: \"7bc7c4138375e48bbe22dab6d9332b1c\") " pod="kube-system/kube-apiserver-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.952997 kubelet[2445]: I0213 19:21:28.952837 2445 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bc7c4138375e48bbe22dab6d9332b1c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-g6z5b.gb1.brightbox.com\" (UID: \"7bc7c4138375e48bbe22dab6d9332b1c\") " pod="kube-system/kube-apiserver-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.952997 kubelet[2445]: I0213 19:21:28.952893 2445 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e5c48843ff453cbdacca33fc0e1f64c-ca-certs\") pod \"kube-controller-manager-srv-g6z5b.gb1.brightbox.com\" (UID: \"7e5c48843ff453cbdacca33fc0e1f64c\") " pod="kube-system/kube-controller-manager-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.952997 kubelet[2445]: I0213 19:21:28.952930 2445 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e5c48843ff453cbdacca33fc0e1f64c-flexvolume-dir\") pod \"kube-controller-manager-srv-g6z5b.gb1.brightbox.com\" (UID: \"7e5c48843ff453cbdacca33fc0e1f64c\") " pod="kube-system/kube-controller-manager-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.953241 kubelet[2445]: I0213 19:21:28.952957 2445 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/7e5c48843ff453cbdacca33fc0e1f64c-kubeconfig\") pod \"kube-controller-manager-srv-g6z5b.gb1.brightbox.com\" (UID: \"7e5c48843ff453cbdacca33fc0e1f64c\") " pod="kube-system/kube-controller-manager-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.953241 kubelet[2445]: I0213 19:21:28.952983 2445 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e5c48843ff453cbdacca33fc0e1f64c-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-g6z5b.gb1.brightbox.com\" (UID: \"7e5c48843ff453cbdacca33fc0e1f64c\") " pod="kube-system/kube-controller-manager-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.958481 kubelet[2445]: I0213 19:21:28.958101 2445 kubelet_node_status.go:73] "Attempting to register node" node="srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:28.958816 kubelet[2445]: E0213 19:21:28.958633 2445 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.68.30:6443/api/v1/nodes\": dial tcp 10.230.68.30:6443: connect: connection refused" node="srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:29.104405 containerd[1536]: time="2025-02-13T19:21:29.104278717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-g6z5b.gb1.brightbox.com,Uid:7bc7c4138375e48bbe22dab6d9332b1c,Namespace:kube-system,Attempt:0,}" Feb 13 19:21:29.122208 containerd[1536]: time="2025-02-13T19:21:29.121988983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-g6z5b.gb1.brightbox.com,Uid:7e5c48843ff453cbdacca33fc0e1f64c,Namespace:kube-system,Attempt:0,}" Feb 13 19:21:29.128319 containerd[1536]: time="2025-02-13T19:21:29.128092821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-g6z5b.gb1.brightbox.com,Uid:c22c5ebcd3b0eac40822cd1557dc6270,Namespace:kube-system,Attempt:0,}" Feb 13 19:21:29.254125 kubelet[2445]: E0213 19:21:29.254032 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.68.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g6z5b.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.68.30:6443: connect: connection refused" interval="800ms" Feb 13 19:21:29.363932 kubelet[2445]: I0213 19:21:29.363720 2445 kubelet_node_status.go:73] "Attempting to register node" node="srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:29.364569 kubelet[2445]: E0213 19:21:29.364514 2445 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.68.30:6443/api/v1/nodes\": dial tcp 10.230.68.30:6443: connect: connection refused" node="srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:29.655749 kubelet[2445]: W0213 19:21:29.655525 2445 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.68.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:29.655749 kubelet[2445]: E0213 19:21:29.655598 2445 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.68.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:29.687342 kubelet[2445]: W0213 19:21:29.686897 2445 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.230.68.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-g6z5b.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:29.687342 kubelet[2445]: E0213 19:21:29.687087 2445 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.68.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-g6z5b.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:29.698981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3696371443.mount: Deactivated successfully. Feb 13 19:21:29.703730 containerd[1536]: time="2025-02-13T19:21:29.703655926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:21:29.705611 containerd[1536]: time="2025-02-13T19:21:29.705342314Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 19:21:29.708922 containerd[1536]: time="2025-02-13T19:21:29.708868686Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:21:29.710875 containerd[1536]: time="2025-02-13T19:21:29.710840997Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:21:29.712063 containerd[1536]: time="2025-02-13T19:21:29.711961041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:21:29.713672 containerd[1536]: time="2025-02-13T19:21:29.713618126Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:21:29.714860 containerd[1536]: time="2025-02-13T19:21:29.714811065Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:21:29.715472 containerd[1536]: time="2025-02-13T19:21:29.715354300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:21:29.719691 containerd[1536]: time="2025-02-13T19:21:29.718899166Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 596.754212ms" Feb 13 19:21:29.722299 containerd[1536]: time="2025-02-13T19:21:29.722153015Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 617.538088ms" Feb 13 19:21:29.734792 containerd[1536]: 
time="2025-02-13T19:21:29.734557881Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 606.345603ms" Feb 13 19:21:29.786175 kubelet[2445]: W0213 19:21:29.782934 2445 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.68.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:29.786175 kubelet[2445]: E0213 19:21:29.783026 2445 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.68.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:29.941966 containerd[1536]: time="2025-02-13T19:21:29.938735787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:21:29.941966 containerd[1536]: time="2025-02-13T19:21:29.940480928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:21:29.941966 containerd[1536]: time="2025-02-13T19:21:29.940519510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:29.941966 containerd[1536]: time="2025-02-13T19:21:29.940637824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:29.946571 containerd[1536]: time="2025-02-13T19:21:29.946110407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:21:29.946571 containerd[1536]: time="2025-02-13T19:21:29.946193999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:21:29.946571 containerd[1536]: time="2025-02-13T19:21:29.946218670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:29.946978 containerd[1536]: time="2025-02-13T19:21:29.946829810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:29.951276 containerd[1536]: time="2025-02-13T19:21:29.951189082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:21:29.951469 containerd[1536]: time="2025-02-13T19:21:29.951412094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:21:29.951835 containerd[1536]: time="2025-02-13T19:21:29.951593791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:29.952053 containerd[1536]: time="2025-02-13T19:21:29.951933683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:30.003016 systemd[1]: Started cri-containerd-85476aa5c660e28cc4a79e4d4b904106d28bdc99dc3d22bec5cd3ee4931c820a.scope - libcontainer container 85476aa5c660e28cc4a79e4d4b904106d28bdc99dc3d22bec5cd3ee4931c820a. Feb 13 19:21:30.015989 systemd[1]: Started cri-containerd-11f1b894381ddb58bc2bfc22d5e651dd766f6161378b509cda5b5a011cac42c3.scope - libcontainer container 11f1b894381ddb58bc2bfc22d5e651dd766f6161378b509cda5b5a011cac42c3. Feb 13 19:21:30.020112 systemd[1]: Started cri-containerd-60115acc1cc330a2d56264472c1c3a9bc88b1105fdabc795df9ab5d1666b1d5d.scope - libcontainer container 60115acc1cc330a2d56264472c1c3a9bc88b1105fdabc795df9ab5d1666b1d5d. Feb 13 19:21:30.055347 kubelet[2445]: E0213 19:21:30.055287 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.68.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g6z5b.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.68.30:6443: connect: connection refused" interval="1.6s" Feb 13 19:21:30.116842 containerd[1536]: time="2025-02-13T19:21:30.116760305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-g6z5b.gb1.brightbox.com,Uid:7e5c48843ff453cbdacca33fc0e1f64c,Namespace:kube-system,Attempt:0,} returns sandbox id \"11f1b894381ddb58bc2bfc22d5e651dd766f6161378b509cda5b5a011cac42c3\"" Feb 13 19:21:30.135159 containerd[1536]: time="2025-02-13T19:21:30.134194524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-g6z5b.gb1.brightbox.com,Uid:c22c5ebcd3b0eac40822cd1557dc6270,Namespace:kube-system,Attempt:0,} returns sandbox id \"85476aa5c660e28cc4a79e4d4b904106d28bdc99dc3d22bec5cd3ee4931c820a\"" Feb 13 19:21:30.135159 containerd[1536]: time="2025-02-13T19:21:30.134988260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-g6z5b.gb1.brightbox.com,Uid:7bc7c4138375e48bbe22dab6d9332b1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"60115acc1cc330a2d56264472c1c3a9bc88b1105fdabc795df9ab5d1666b1d5d\"" Feb 13 19:21:30.142870 kubelet[2445]: W0213 19:21:30.141247 2445 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.68.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:30.142870 kubelet[2445]: E0213 19:21:30.141338 2445 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.68.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:30.143308 containerd[1536]: time="2025-02-13T19:21:30.143271228Z" level=info msg="CreateContainer within sandbox \"11f1b894381ddb58bc2bfc22d5e651dd766f6161378b509cda5b5a011cac42c3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:21:30.146367 containerd[1536]: time="2025-02-13T19:21:30.146230721Z" level=info msg="CreateContainer within sandbox \"60115acc1cc330a2d56264472c1c3a9bc88b1105fdabc795df9ab5d1666b1d5d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:21:30.148212 containerd[1536]: time="2025-02-13T19:21:30.148078615Z" level=info msg="CreateContainer within sandbox \"85476aa5c660e28cc4a79e4d4b904106d28bdc99dc3d22bec5cd3ee4931c820a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 
19:21:30.167684 containerd[1536]: time="2025-02-13T19:21:30.167638316Z" level=info msg="CreateContainer within sandbox \"11f1b894381ddb58bc2bfc22d5e651dd766f6161378b509cda5b5a011cac42c3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d7b0aee164ac19658a48b5c8e4c590cc3d63eadb4f0eec3f19033f5f0d803f2b\"" Feb 13 19:21:30.168804 containerd[1536]: time="2025-02-13T19:21:30.168405333Z" level=info msg="StartContainer for \"d7b0aee164ac19658a48b5c8e4c590cc3d63eadb4f0eec3f19033f5f0d803f2b\"" Feb 13 19:21:30.169070 kubelet[2445]: I0213 19:21:30.169039 2445 kubelet_node_status.go:73] "Attempting to register node" node="srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:30.170091 kubelet[2445]: E0213 19:21:30.169905 2445 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.68.30:6443/api/v1/nodes\": dial tcp 10.230.68.30:6443: connect: connection refused" node="srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:30.172126 containerd[1536]: time="2025-02-13T19:21:30.172088876Z" level=info msg="CreateContainer within sandbox \"85476aa5c660e28cc4a79e4d4b904106d28bdc99dc3d22bec5cd3ee4931c820a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"827183141996f07d01cbb33334e34ef6b6620ffc1a1ea80f16f84ec5cb599e98\"" Feb 13 19:21:30.172864 containerd[1536]: time="2025-02-13T19:21:30.172830812Z" level=info msg="StartContainer for \"827183141996f07d01cbb33334e34ef6b6620ffc1a1ea80f16f84ec5cb599e98\"" Feb 13 19:21:30.173931 containerd[1536]: time="2025-02-13T19:21:30.173886143Z" level=info msg="CreateContainer within sandbox \"60115acc1cc330a2d56264472c1c3a9bc88b1105fdabc795df9ab5d1666b1d5d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a1eb5ab95bca30ca196352f0fd79183d5e8c9e081bd3c98af45b10cb2a0f6c8\"" Feb 13 19:21:30.174311 containerd[1536]: time="2025-02-13T19:21:30.174279892Z" level=info msg="StartContainer for \"9a1eb5ab95bca30ca196352f0fd79183d5e8c9e081bd3c98af45b10cb2a0f6c8\"" Feb 13 19:21:30.206951 systemd[1]: Started cri-containerd-d7b0aee164ac19658a48b5c8e4c590cc3d63eadb4f0eec3f19033f5f0d803f2b.scope - libcontainer container d7b0aee164ac19658a48b5c8e4c590cc3d63eadb4f0eec3f19033f5f0d803f2b. Feb 13 19:21:30.252054 systemd[1]: Started cri-containerd-827183141996f07d01cbb33334e34ef6b6620ffc1a1ea80f16f84ec5cb599e98.scope - libcontainer container 827183141996f07d01cbb33334e34ef6b6620ffc1a1ea80f16f84ec5cb599e98. Feb 13 19:21:30.254924 systemd[1]: Started cri-containerd-9a1eb5ab95bca30ca196352f0fd79183d5e8c9e081bd3c98af45b10cb2a0f6c8.scope - libcontainer container 9a1eb5ab95bca30ca196352f0fd79183d5e8c9e081bd3c98af45b10cb2a0f6c8. 
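The "Failed to ensure lease exists, will retry" entries above report interval=200ms, then 400ms, 800ms and 1.6s: the controller doubles its retry interval on each attempt while the API server at 10.230.68.30:6443 stays unreachable. The doubling, reproduced:

intervals_ms = [200 * 2 ** n for n in range(4)]
print(intervals_ms)  # -> [200, 400, 800, 1600], matching the logged intervals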
Feb 13 19:21:30.318196 containerd[1536]: time="2025-02-13T19:21:30.318137669Z" level=info msg="StartContainer for \"d7b0aee164ac19658a48b5c8e4c590cc3d63eadb4f0eec3f19033f5f0d803f2b\" returns successfully" Feb 13 19:21:30.353204 containerd[1536]: time="2025-02-13T19:21:30.352987954Z" level=info msg="StartContainer for \"9a1eb5ab95bca30ca196352f0fd79183d5e8c9e081bd3c98af45b10cb2a0f6c8\" returns successfully" Feb 13 19:21:30.374997 containerd[1536]: time="2025-02-13T19:21:30.374947056Z" level=info msg="StartContainer for \"827183141996f07d01cbb33334e34ef6b6620ffc1a1ea80f16f84ec5cb599e98\" returns successfully" Feb 13 19:21:30.641537 kubelet[2445]: E0213 19:21:30.641461 2445 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.68.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.68.30:6443: connect: connection refused Feb 13 19:21:31.773369 kubelet[2445]: I0213 19:21:31.773330 2445 kubelet_node_status.go:73] "Attempting to register node" node="srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:33.456061 kubelet[2445]: E0213 19:21:33.455978 2445 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-g6z5b.gb1.brightbox.com\" not found" node="srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:33.475839 kubelet[2445]: E0213 19:21:33.475471 2445 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-g6z5b.gb1.brightbox.com.1823dad4ce150265 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-g6z5b.gb1.brightbox.com,UID:srv-g6z5b.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-g6z5b.gb1.brightbox.com,},FirstTimestamp:2025-02-13 19:21:28.620548709 +0000 UTC m=+0.612680705,LastTimestamp:2025-02-13 19:21:28.620548709 +0000 UTC m=+0.612680705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-g6z5b.gb1.brightbox.com,}" Feb 13 19:21:33.524474 kubelet[2445]: I0213 19:21:33.524407 2445 kubelet_node_status.go:76] "Successfully registered node" node="srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:33.548963 kubelet[2445]: E0213 19:21:33.548726 2445 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-g6z5b.gb1.brightbox.com.1823dad4cffb696a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-g6z5b.gb1.brightbox.com,UID:srv-g6z5b.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:srv-g6z5b.gb1.brightbox.com,},FirstTimestamp:2025-02-13 19:21:28.652425578 +0000 UTC m=+0.644557576,LastTimestamp:2025-02-13 19:21:28.652425578 +0000 UTC m=+0.644557576,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-g6z5b.gb1.brightbox.com,}" Feb 13 19:21:33.620347 kubelet[2445]: I0213 19:21:33.620288 2445 apiserver.go:52] "Watching apiserver" Feb 13 19:21:33.652308 kubelet[2445]: I0213 19:21:33.652236 2445 desired_state_of_world_populator.go:157] 
"Finished populating initial desired state of world" Feb 13 19:21:35.374221 kubelet[2445]: W0213 19:21:35.374140 2445 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:21:35.452542 systemd[1]: Reload requested from client PID 2729 ('systemctl') (unit session-11.scope)... Feb 13 19:21:35.452579 systemd[1]: Reloading... Feb 13 19:21:35.586827 zram_generator::config[2781]: No configuration found. Feb 13 19:21:35.758129 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:21:35.931199 systemd[1]: Reloading finished in 477 ms. Feb 13 19:21:35.969822 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:21:35.977312 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:21:35.977800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:21:35.977877 systemd[1]: kubelet.service: Consumed 1.168s CPU time, 113.4M memory peak. Feb 13 19:21:35.989009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:21:36.188513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:21:36.202404 (kubelet)[2838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:21:36.332691 kubelet[2838]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:21:36.332691 kubelet[2838]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:21:36.332691 kubelet[2838]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:21:36.333536 kubelet[2838]: I0213 19:21:36.332759 2838 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:21:36.346057 kubelet[2838]: I0213 19:21:36.345983 2838 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:21:36.346057 kubelet[2838]: I0213 19:21:36.346041 2838 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:21:36.346455 kubelet[2838]: I0213 19:21:36.346429 2838 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:21:36.348709 kubelet[2838]: I0213 19:21:36.348675 2838 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:21:36.350683 kubelet[2838]: I0213 19:21:36.350459 2838 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:21:36.362160 kubelet[2838]: I0213 19:21:36.362080 2838 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:21:36.362904 kubelet[2838]: I0213 19:21:36.362691 2838 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:21:36.363147 kubelet[2838]: I0213 19:21:36.362888 2838 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-g6z5b.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:21:36.363326 kubelet[2838]: I0213 19:21:36.363160 2838 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:21:36.363326 kubelet[2838]: I0213 19:21:36.363192 2838 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:21:36.363326 kubelet[2838]: I0213 19:21:36.363262 2838 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:21:36.363515 kubelet[2838]: I0213 19:21:36.363480 2838 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:21:36.363720 kubelet[2838]: I0213 19:21:36.363501 2838 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:21:36.364586 kubelet[2838]: I0213 19:21:36.364547 2838 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:21:36.364711 kubelet[2838]: I0213 19:21:36.364613 2838 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:21:36.367892 kubelet[2838]: I0213 19:21:36.367457 2838 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:21:36.370816 kubelet[2838]: I0213 19:21:36.369282 2838 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:21:36.372919 kubelet[2838]: I0213 19:21:36.372067 2838 server.go:1264] "Started kubelet" Feb 13 19:21:36.383515 kubelet[2838]: I0213 19:21:36.383420 2838 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:21:36.384263 kubelet[2838]: I0213 19:21:36.384222 2838 server.go:227] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:21:36.384486 kubelet[2838]: I0213 19:21:36.384456 2838 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:21:36.388690 kubelet[2838]: I0213 19:21:36.388624 2838 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:21:36.389795 kubelet[2838]: I0213 19:21:36.389662 2838 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:21:36.402678 kubelet[2838]: I0213 19:21:36.402505 2838 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:21:36.411460 kubelet[2838]: I0213 19:21:36.411433 2838 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:21:36.412048 kubelet[2838]: I0213 19:21:36.411891 2838 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:21:36.419751 kubelet[2838]: I0213 19:21:36.419573 2838 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:21:36.420397 kubelet[2838]: I0213 19:21:36.420243 2838 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:21:36.422370 kubelet[2838]: I0213 19:21:36.422345 2838 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:21:36.422514 kubelet[2838]: I0213 19:21:36.422495 2838 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:21:36.423726 kubelet[2838]: I0213 19:21:36.422653 2838 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:21:36.423726 kubelet[2838]: E0213 19:21:36.422742 2838 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:21:36.423726 kubelet[2838]: I0213 19:21:36.422828 2838 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:21:36.433829 kubelet[2838]: I0213 19:21:36.433550 2838 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:21:36.446493 kubelet[2838]: E0213 19:21:36.446366 2838 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:21:36.481655 sudo[2866]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:21:36.483402 sudo[2866]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:21:36.525384 kubelet[2838]: E0213 19:21:36.523026 2838 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:21:36.529717 kubelet[2838]: I0213 19:21:36.529156 2838 kubelet_node_status.go:73] "Attempting to register node" node="srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.554453 kubelet[2838]: I0213 19:21:36.554023 2838 kubelet_node_status.go:112] "Node was previously registered" node="srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.554453 kubelet[2838]: I0213 19:21:36.554161 2838 kubelet_node_status.go:76] "Successfully registered node" node="srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.596117 kubelet[2838]: I0213 19:21:36.596056 2838 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:21:36.596117 kubelet[2838]: I0213 19:21:36.596092 2838 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:21:36.596351 kubelet[2838]: I0213 19:21:36.596144 2838 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:21:36.596911 kubelet[2838]: I0213 19:21:36.596469 2838 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:21:36.596911 kubelet[2838]: I0213 19:21:36.596499 2838 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:21:36.596911 kubelet[2838]: I0213 19:21:36.596536 2838 policy_none.go:49] "None policy: Start" Feb 13 19:21:36.601253 kubelet[2838]: I0213 19:21:36.601209 2838 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:21:36.601371 kubelet[2838]: I0213 19:21:36.601270 2838 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:21:36.601973 kubelet[2838]: I0213 19:21:36.601634 2838 state_mem.go:75] "Updated machine memory state" Feb 13 19:21:36.624601 kubelet[2838]: I0213 19:21:36.624208 2838 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:21:36.626510 kubelet[2838]: I0213 19:21:36.625415 2838 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:21:36.630813 kubelet[2838]: I0213 19:21:36.629139 2838 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:21:36.724246 kubelet[2838]: I0213 19:21:36.723844 2838 topology_manager.go:215] "Topology Admit Handler" podUID="7e5c48843ff453cbdacca33fc0e1f64c" podNamespace="kube-system" podName="kube-controller-manager-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.726045 kubelet[2838]: I0213 19:21:36.724499 2838 topology_manager.go:215] "Topology Admit Handler" podUID="c22c5ebcd3b0eac40822cd1557dc6270" podNamespace="kube-system" podName="kube-scheduler-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.726045 kubelet[2838]: I0213 19:21:36.724603 2838 topology_manager.go:215] "Topology Admit Handler" podUID="7bc7c4138375e48bbe22dab6d9332b1c" podNamespace="kube-system" podName="kube-apiserver-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.740312 kubelet[2838]: W0213 19:21:36.739849 2838 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:21:36.745826 kubelet[2838]: W0213 19:21:36.745079 
2838 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:21:36.745826 kubelet[2838]: E0213 19:21:36.745173 2838 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-srv-g6z5b.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.745826 kubelet[2838]: W0213 19:21:36.745286 2838 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:21:36.914709 kubelet[2838]: I0213 19:21:36.914639 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e5c48843ff453cbdacca33fc0e1f64c-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-g6z5b.gb1.brightbox.com\" (UID: \"7e5c48843ff453cbdacca33fc0e1f64c\") " pod="kube-system/kube-controller-manager-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.915459 kubelet[2838]: I0213 19:21:36.915277 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c22c5ebcd3b0eac40822cd1557dc6270-kubeconfig\") pod \"kube-scheduler-srv-g6z5b.gb1.brightbox.com\" (UID: \"c22c5ebcd3b0eac40822cd1557dc6270\") " pod="kube-system/kube-scheduler-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.915459 kubelet[2838]: I0213 19:21:36.915378 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bc7c4138375e48bbe22dab6d9332b1c-ca-certs\") pod \"kube-apiserver-srv-g6z5b.gb1.brightbox.com\" (UID: \"7bc7c4138375e48bbe22dab6d9332b1c\") " pod="kube-system/kube-apiserver-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.915670 kubelet[2838]: I0213 19:21:36.915536 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bc7c4138375e48bbe22dab6d9332b1c-k8s-certs\") pod \"kube-apiserver-srv-g6z5b.gb1.brightbox.com\" (UID: \"7bc7c4138375e48bbe22dab6d9332b1c\") " pod="kube-system/kube-apiserver-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.915972 kubelet[2838]: I0213 19:21:36.915570 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e5c48843ff453cbdacca33fc0e1f64c-ca-certs\") pod \"kube-controller-manager-srv-g6z5b.gb1.brightbox.com\" (UID: \"7e5c48843ff453cbdacca33fc0e1f64c\") " pod="kube-system/kube-controller-manager-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.915972 kubelet[2838]: I0213 19:21:36.915914 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e5c48843ff453cbdacca33fc0e1f64c-flexvolume-dir\") pod \"kube-controller-manager-srv-g6z5b.gb1.brightbox.com\" (UID: \"7e5c48843ff453cbdacca33fc0e1f64c\") " pod="kube-system/kube-controller-manager-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.916229 kubelet[2838]: I0213 19:21:36.916105 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e5c48843ff453cbdacca33fc0e1f64c-k8s-certs\") pod 
\"kube-controller-manager-srv-g6z5b.gb1.brightbox.com\" (UID: \"7e5c48843ff453cbdacca33fc0e1f64c\") " pod="kube-system/kube-controller-manager-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.916472 kubelet[2838]: I0213 19:21:36.916356 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e5c48843ff453cbdacca33fc0e1f64c-kubeconfig\") pod \"kube-controller-manager-srv-g6z5b.gb1.brightbox.com\" (UID: \"7e5c48843ff453cbdacca33fc0e1f64c\") " pod="kube-system/kube-controller-manager-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:36.916472 kubelet[2838]: I0213 19:21:36.916416 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bc7c4138375e48bbe22dab6d9332b1c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-g6z5b.gb1.brightbox.com\" (UID: \"7bc7c4138375e48bbe22dab6d9332b1c\") " pod="kube-system/kube-apiserver-srv-g6z5b.gb1.brightbox.com" Feb 13 19:21:37.303473 sudo[2866]: pam_unix(sudo:session): session closed for user root Feb 13 19:21:37.372965 kubelet[2838]: I0213 19:21:37.372886 2838 apiserver.go:52] "Watching apiserver" Feb 13 19:21:37.412504 kubelet[2838]: I0213 19:21:37.412298 2838 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:21:37.626441 kubelet[2838]: I0213 19:21:37.626155 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-g6z5b.gb1.brightbox.com" podStartSLOduration=1.626065669 podStartE2EDuration="1.626065669s" podCreationTimestamp="2025-02-13 19:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:21:37.576618868 +0000 UTC m=+1.349717522" watchObservedRunningTime="2025-02-13 19:21:37.626065669 +0000 UTC m=+1.399164313" Feb 13 19:21:37.684548 kubelet[2838]: I0213 19:21:37.683465 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-g6z5b.gb1.brightbox.com" podStartSLOduration=1.683436957 podStartE2EDuration="1.683436957s" podCreationTimestamp="2025-02-13 19:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:21:37.630214111 +0000 UTC m=+1.403312772" watchObservedRunningTime="2025-02-13 19:21:37.683436957 +0000 UTC m=+1.456535595" Feb 13 19:21:37.755603 kubelet[2838]: I0213 19:21:37.755402 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-g6z5b.gb1.brightbox.com" podStartSLOduration=2.75536963 podStartE2EDuration="2.75536963s" podCreationTimestamp="2025-02-13 19:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:21:37.689065835 +0000 UTC m=+1.462164486" watchObservedRunningTime="2025-02-13 19:21:37.75536963 +0000 UTC m=+1.528468274" Feb 13 19:21:39.119211 sudo[1809]: pam_unix(sudo:session): session closed for user root Feb 13 19:21:39.264854 sshd[1808]: Connection closed by 139.178.89.65 port 59906 Feb 13 19:21:39.266672 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Feb 13 19:21:39.271257 systemd-logind[1514]: Session 11 logged out. Waiting for processes to exit. 
Feb 13 19:21:39.271536 systemd[1]: sshd@8-10.230.68.30:22-139.178.89.65:59906.service: Deactivated successfully. Feb 13 19:21:39.276316 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:21:39.276635 systemd[1]: session-11.scope: Consumed 6.942s CPU time, 233.9M memory peak. Feb 13 19:21:39.281368 systemd-logind[1514]: Removed session 11. Feb 13 19:21:49.278103 kubelet[2838]: I0213 19:21:49.277939 2838 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:21:49.279612 containerd[1536]: time="2025-02-13T19:21:49.279166737Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:21:49.280755 kubelet[2838]: I0213 19:21:49.279971 2838 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:21:50.209746 kubelet[2838]: I0213 19:21:50.209659 2838 topology_manager.go:215] "Topology Admit Handler" podUID="36337490-6e20-4a3a-a4e4-09851e1109de" podNamespace="kube-system" podName="kube-proxy-cvc8p" Feb 13 19:21:50.229754 systemd[1]: Created slice kubepods-besteffort-pod36337490_6e20_4a3a_a4e4_09851e1109de.slice - libcontainer container kubepods-besteffort-pod36337490_6e20_4a3a_a4e4_09851e1109de.slice. Feb 13 19:21:50.250069 kubelet[2838]: I0213 19:21:50.248568 2838 topology_manager.go:215] "Topology Admit Handler" podUID="94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" podNamespace="kube-system" podName="cilium-jnvv4" Feb 13 19:21:50.262115 systemd[1]: Created slice kubepods-burstable-pod94c62ab6_ca3c_4883_8aa8_b9c674c0c22f.slice - libcontainer container kubepods-burstable-pod94c62ab6_ca3c_4883_8aa8_b9c674c0c22f.slice. Feb 13 19:21:50.266408 kubelet[2838]: W0213 19:21:50.266372 2838 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-g6z5b.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-g6z5b.gb1.brightbox.com' and this object Feb 13 19:21:50.267107 kubelet[2838]: E0213 19:21:50.266727 2838 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-g6z5b.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-g6z5b.gb1.brightbox.com' and this object Feb 13 19:21:50.279533 kubelet[2838]: I0213 19:21:50.279032 2838 topology_manager.go:215] "Topology Admit Handler" podUID="9407df4b-472a-45a6-82d6-a6c17818fd9b" podNamespace="kube-system" podName="cilium-operator-599987898-ng4qf" Feb 13 19:21:50.297494 systemd[1]: Created slice kubepods-besteffort-pod9407df4b_472a_45a6_82d6_a6c17818fd9b.slice - libcontainer container kubepods-besteffort-pod9407df4b_472a_45a6_82d6_a6c17818fd9b.slice. 
Feb 13 19:21:50.302801 kubelet[2838]: I0213 19:21:50.300019 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cilium-run\") pod \"cilium-jnvv4\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " pod="kube-system/cilium-jnvv4" Feb 13 19:21:50.302801 kubelet[2838]: I0213 19:21:50.300086 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-bpf-maps\") pod \"cilium-jnvv4\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " pod="kube-system/cilium-jnvv4" Feb 13 19:21:50.302801 kubelet[2838]: I0213 19:21:50.300161 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-clustermesh-secrets\") pod \"cilium-jnvv4\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " pod="kube-system/cilium-jnvv4" Feb 13 19:21:50.302801 kubelet[2838]: I0213 19:21:50.300193 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cilium-config-path\") pod \"cilium-jnvv4\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " pod="kube-system/cilium-jnvv4" Feb 13 19:21:50.302801 kubelet[2838]: I0213 19:21:50.300236 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-host-proc-sys-kernel\") pod \"cilium-jnvv4\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " pod="kube-system/cilium-jnvv4" Feb 13 19:21:50.303222 kubelet[2838]: I0213 19:21:50.300280 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcdl5\" (UniqueName: \"kubernetes.io/projected/36337490-6e20-4a3a-a4e4-09851e1109de-kube-api-access-wcdl5\") pod \"kube-proxy-cvc8p\" (UID: \"36337490-6e20-4a3a-a4e4-09851e1109de\") " pod="kube-system/kube-proxy-cvc8p" Feb 13 19:21:50.303222 kubelet[2838]: I0213 19:21:50.300372 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cilium-cgroup\") pod \"cilium-jnvv4\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " pod="kube-system/cilium-jnvv4" Feb 13 19:21:50.303222 kubelet[2838]: I0213 19:21:50.300422 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-lib-modules\") pod \"cilium-jnvv4\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " pod="kube-system/cilium-jnvv4" Feb 13 19:21:50.303222 kubelet[2838]: I0213 19:21:50.300463 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cni-path\") pod \"cilium-jnvv4\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " pod="kube-system/cilium-jnvv4" Feb 13 19:21:50.303222 kubelet[2838]: I0213 19:21:50.300497 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-hubble-tls\") pod \"cilium-jnvv4\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " pod="kube-system/cilium-jnvv4" Feb 13 19:21:50.303490 kubelet[2838]: I0213 19:21:50.300545 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74pkj\" (UniqueName: \"kubernetes.io/projected/9407df4b-472a-45a6-82d6-a6c17818fd9b-kube-api-access-74pkj\") pod \"cilium-operator-599987898-ng4qf\" (UID: \"9407df4b-472a-45a6-82d6-a6c17818fd9b\") " pod="kube-system/cilium-operator-599987898-ng4qf" Feb 13 19:21:50.303490 kubelet[2838]: I0213 19:21:50.300583 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-hostproc\") pod \"cilium-jnvv4\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " pod="kube-system/cilium-jnvv4" Feb 13 19:21:50.303490 kubelet[2838]: I0213 19:21:50.300621 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-etc-cni-netd\") pod \"cilium-jnvv4\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " pod="kube-system/cilium-jnvv4" Feb 13 19:21:50.303490 kubelet[2838]: I0213 19:21:50.300646 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-host-proc-sys-net\") pod \"cilium-jnvv4\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " pod="kube-system/cilium-jnvv4" Feb 13 19:21:50.303490 kubelet[2838]: I0213 19:21:50.300673 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36337490-6e20-4a3a-a4e4-09851e1109de-xtables-lock\") pod \"kube-proxy-cvc8p\" (UID: \"36337490-6e20-4a3a-a4e4-09851e1109de\") " pod="kube-system/kube-proxy-cvc8p" Feb 13 19:21:50.303746 kubelet[2838]: I0213 19:21:50.300697 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36337490-6e20-4a3a-a4e4-09851e1109de-lib-modules\") pod \"kube-proxy-cvc8p\" (UID: \"36337490-6e20-4a3a-a4e4-09851e1109de\") " pod="kube-system/kube-proxy-cvc8p" Feb 13 19:21:50.303746 kubelet[2838]: I0213 19:21:50.300720 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-xtables-lock\") pod \"cilium-jnvv4\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " pod="kube-system/cilium-jnvv4" Feb 13 19:21:50.303746 kubelet[2838]: I0213 19:21:50.300750 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hfkg\" (UniqueName: \"kubernetes.io/projected/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-kube-api-access-8hfkg\") pod \"cilium-jnvv4\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " pod="kube-system/cilium-jnvv4" Feb 13 19:21:50.303746 kubelet[2838]: I0213 19:21:50.300791 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9407df4b-472a-45a6-82d6-a6c17818fd9b-cilium-config-path\") pod 
\"cilium-operator-599987898-ng4qf\" (UID: \"9407df4b-472a-45a6-82d6-a6c17818fd9b\") " pod="kube-system/cilium-operator-599987898-ng4qf" Feb 13 19:21:50.303746 kubelet[2838]: I0213 19:21:50.300820 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36337490-6e20-4a3a-a4e4-09851e1109de-kube-proxy\") pod \"kube-proxy-cvc8p\" (UID: \"36337490-6e20-4a3a-a4e4-09851e1109de\") " pod="kube-system/kube-proxy-cvc8p" Feb 13 19:21:50.544923 containerd[1536]: time="2025-02-13T19:21:50.544713175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvc8p,Uid:36337490-6e20-4a3a-a4e4-09851e1109de,Namespace:kube-system,Attempt:0,}" Feb 13 19:21:50.590254 containerd[1536]: time="2025-02-13T19:21:50.589843302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:21:50.590254 containerd[1536]: time="2025-02-13T19:21:50.590074253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:21:50.590254 containerd[1536]: time="2025-02-13T19:21:50.590145291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:50.590623 containerd[1536]: time="2025-02-13T19:21:50.590496804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:50.608301 containerd[1536]: time="2025-02-13T19:21:50.608254772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ng4qf,Uid:9407df4b-472a-45a6-82d6-a6c17818fd9b,Namespace:kube-system,Attempt:0,}" Feb 13 19:21:50.623037 systemd[1]: Started cri-containerd-88f422d1a55e62b4a2c718f958d4cd3a65e1c65f91db5bdb87de6da68bba590f.scope - libcontainer container 88f422d1a55e62b4a2c718f958d4cd3a65e1c65f91db5bdb87de6da68bba590f. Feb 13 19:21:50.670336 containerd[1536]: time="2025-02-13T19:21:50.669913764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:21:50.670336 containerd[1536]: time="2025-02-13T19:21:50.670027261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:21:50.670336 containerd[1536]: time="2025-02-13T19:21:50.670052879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:50.670336 containerd[1536]: time="2025-02-13T19:21:50.670229450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:50.682887 containerd[1536]: time="2025-02-13T19:21:50.682691823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvc8p,Uid:36337490-6e20-4a3a-a4e4-09851e1109de,Namespace:kube-system,Attempt:0,} returns sandbox id \"88f422d1a55e62b4a2c718f958d4cd3a65e1c65f91db5bdb87de6da68bba590f\"" Feb 13 19:21:50.691058 containerd[1536]: time="2025-02-13T19:21:50.690888272Z" level=info msg="CreateContainer within sandbox \"88f422d1a55e62b4a2c718f958d4cd3a65e1c65f91db5bdb87de6da68bba590f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:21:50.708977 systemd[1]: Started cri-containerd-1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec.scope - libcontainer container 1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec. Feb 13 19:21:50.742793 containerd[1536]: time="2025-02-13T19:21:50.742706108Z" level=info msg="CreateContainer within sandbox \"88f422d1a55e62b4a2c718f958d4cd3a65e1c65f91db5bdb87de6da68bba590f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5a8b3840c971a5a1e18a0f937398aa6ac4976a27ec2cfcdd25796fc6bd24fccd\"" Feb 13 19:21:50.745169 containerd[1536]: time="2025-02-13T19:21:50.744942506Z" level=info msg="StartContainer for \"5a8b3840c971a5a1e18a0f937398aa6ac4976a27ec2cfcdd25796fc6bd24fccd\"" Feb 13 19:21:50.788878 containerd[1536]: time="2025-02-13T19:21:50.788399285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ng4qf,Uid:9407df4b-472a-45a6-82d6-a6c17818fd9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec\"" Feb 13 19:21:50.796296 containerd[1536]: time="2025-02-13T19:21:50.795892256Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:21:50.808008 systemd[1]: Started cri-containerd-5a8b3840c971a5a1e18a0f937398aa6ac4976a27ec2cfcdd25796fc6bd24fccd.scope - libcontainer container 5a8b3840c971a5a1e18a0f937398aa6ac4976a27ec2cfcdd25796fc6bd24fccd. Feb 13 19:21:50.855587 containerd[1536]: time="2025-02-13T19:21:50.855413485Z" level=info msg="StartContainer for \"5a8b3840c971a5a1e18a0f937398aa6ac4976a27ec2cfcdd25796fc6bd24fccd\" returns successfully" Feb 13 19:21:51.174228 containerd[1536]: time="2025-02-13T19:21:51.174158574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jnvv4,Uid:94c62ab6-ca3c-4883-8aa8-b9c674c0c22f,Namespace:kube-system,Attempt:0,}" Feb 13 19:21:51.230797 containerd[1536]: time="2025-02-13T19:21:51.229205645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:21:51.230797 containerd[1536]: time="2025-02-13T19:21:51.229323609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:21:51.230797 containerd[1536]: time="2025-02-13T19:21:51.229341453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:51.230797 containerd[1536]: time="2025-02-13T19:21:51.229506013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:51.262068 systemd[1]: Started cri-containerd-da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535.scope - libcontainer container da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535. Feb 13 19:21:51.303675 containerd[1536]: time="2025-02-13T19:21:51.302839130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jnvv4,Uid:94c62ab6-ca3c-4883-8aa8-b9c674c0c22f,Namespace:kube-system,Attempt:0,} returns sandbox id \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\"" Feb 13 19:21:53.380836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415981606.mount: Deactivated successfully. Feb 13 19:21:54.279633 containerd[1536]: time="2025-02-13T19:21:54.278945861Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:54.281738 containerd[1536]: time="2025-02-13T19:21:54.280272432Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 19:21:54.281738 containerd[1536]: time="2025-02-13T19:21:54.281209698Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:21:54.283997 containerd[1536]: time="2025-02-13T19:21:54.283701131Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.487456654s" Feb 13 19:21:54.283997 containerd[1536]: time="2025-02-13T19:21:54.283748986Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 19:21:54.286194 containerd[1536]: time="2025-02-13T19:21:54.286155869Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:21:54.292662 containerd[1536]: time="2025-02-13T19:21:54.292462956Z" level=info msg="CreateContainer within sandbox \"1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:21:54.332897 containerd[1536]: time="2025-02-13T19:21:54.332751148Z" level=info msg="CreateContainer within sandbox \"1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d\"" Feb 13 19:21:54.334698 containerd[1536]: time="2025-02-13T19:21:54.334643727Z" level=info msg="StartContainer for \"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d\"" Feb 13 19:21:54.395026 systemd[1]: Started cri-containerd-3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d.scope - libcontainer container 3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d. 
Feb 13 19:21:54.439561 containerd[1536]: time="2025-02-13T19:21:54.439496575Z" level=info msg="StartContainer for \"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d\" returns successfully" Feb 13 19:21:54.560317 kubelet[2838]: I0213 19:21:54.558329 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cvc8p" podStartSLOduration=4.55826357 podStartE2EDuration="4.55826357s" podCreationTimestamp="2025-02-13 19:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:21:51.5393306 +0000 UTC m=+15.312429245" watchObservedRunningTime="2025-02-13 19:21:54.55826357 +0000 UTC m=+18.331362218" Feb 13 19:21:56.469869 kubelet[2838]: I0213 19:21:56.469669 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-ng4qf" podStartSLOduration=2.976584186 podStartE2EDuration="6.46964584s" podCreationTimestamp="2025-02-13 19:21:50 +0000 UTC" firstStartedPulling="2025-02-13 19:21:50.792555966 +0000 UTC m=+14.565654590" lastFinishedPulling="2025-02-13 19:21:54.285617608 +0000 UTC m=+18.058716244" observedRunningTime="2025-02-13 19:21:54.560210699 +0000 UTC m=+18.333309337" watchObservedRunningTime="2025-02-13 19:21:56.46964584 +0000 UTC m=+20.242744471" Feb 13 19:22:02.552349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22310076.mount: Deactivated successfully. Feb 13 19:22:05.637474 containerd[1536]: time="2025-02-13T19:22:05.637212935Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:05.639546 containerd[1536]: time="2025-02-13T19:22:05.639480291Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 19:22:05.641127 containerd[1536]: time="2025-02-13T19:22:05.640019203Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:05.644806 containerd[1536]: time="2025-02-13T19:22:05.644749497Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.358544011s" Feb 13 19:22:05.644935 containerd[1536]: time="2025-02-13T19:22:05.644820364Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 19:22:05.648093 containerd[1536]: time="2025-02-13T19:22:05.648051337Z" level=info msg="CreateContainer within sandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:22:05.693727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount445944012.mount: Deactivated successfully. 
Feb 13 19:22:05.697879 containerd[1536]: time="2025-02-13T19:22:05.697788055Z" level=info msg="CreateContainer within sandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51\"" Feb 13 19:22:05.699104 containerd[1536]: time="2025-02-13T19:22:05.698948244Z" level=info msg="StartContainer for \"2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51\"" Feb 13 19:22:05.953170 systemd[1]: Started cri-containerd-2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51.scope - libcontainer container 2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51. Feb 13 19:22:06.008792 containerd[1536]: time="2025-02-13T19:22:06.008688729Z" level=info msg="StartContainer for \"2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51\" returns successfully" Feb 13 19:22:06.028925 systemd[1]: cri-containerd-2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51.scope: Deactivated successfully. Feb 13 19:22:06.300303 containerd[1536]: time="2025-02-13T19:22:06.292742037Z" level=info msg="shim disconnected" id=2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51 namespace=k8s.io Feb 13 19:22:06.300303 containerd[1536]: time="2025-02-13T19:22:06.299982184Z" level=warning msg="cleaning up after shim disconnected" id=2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51 namespace=k8s.io Feb 13 19:22:06.300303 containerd[1536]: time="2025-02-13T19:22:06.300008862Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:22:06.318402 containerd[1536]: time="2025-02-13T19:22:06.318317870Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:22:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:22:06.572092 containerd[1536]: time="2025-02-13T19:22:06.571430866Z" level=info msg="CreateContainer within sandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:22:06.588811 containerd[1536]: time="2025-02-13T19:22:06.588733414Z" level=info msg="CreateContainer within sandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db\"" Feb 13 19:22:06.590467 containerd[1536]: time="2025-02-13T19:22:06.590436152Z" level=info msg="StartContainer for \"582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db\"" Feb 13 19:22:06.651020 systemd[1]: Started cri-containerd-582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db.scope - libcontainer container 582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db. Feb 13 19:22:06.689641 containerd[1536]: time="2025-02-13T19:22:06.689497632Z" level=info msg="StartContainer for \"582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db\" returns successfully" Feb 13 19:22:06.693604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51-rootfs.mount: Deactivated successfully. Feb 13 19:22:06.717579 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 13 19:22:06.718570 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:22:06.719610 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:22:06.732342 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:22:06.737097 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 19:22:06.737909 systemd[1]: cri-containerd-582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db.scope: Deactivated successfully. Feb 13 19:22:06.788637 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:22:06.799332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db-rootfs.mount: Deactivated successfully. Feb 13 19:22:06.805106 containerd[1536]: time="2025-02-13T19:22:06.804968772Z" level=info msg="shim disconnected" id=582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db namespace=k8s.io Feb 13 19:22:06.805393 containerd[1536]: time="2025-02-13T19:22:06.805173277Z" level=warning msg="cleaning up after shim disconnected" id=582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db namespace=k8s.io Feb 13 19:22:06.805393 containerd[1536]: time="2025-02-13T19:22:06.805193325Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:22:07.598367 containerd[1536]: time="2025-02-13T19:22:07.598281029Z" level=info msg="CreateContainer within sandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:22:07.640804 containerd[1536]: time="2025-02-13T19:22:07.640556816Z" level=info msg="CreateContainer within sandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8\"" Feb 13 19:22:07.642589 containerd[1536]: time="2025-02-13T19:22:07.642550192Z" level=info msg="StartContainer for \"156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8\"" Feb 13 19:22:07.699074 systemd[1]: Started cri-containerd-156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8.scope - libcontainer container 156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8. Feb 13 19:22:07.752684 containerd[1536]: time="2025-02-13T19:22:07.752413446Z" level=info msg="StartContainer for \"156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8\" returns successfully" Feb 13 19:22:07.763660 systemd[1]: cri-containerd-156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8.scope: Deactivated successfully. Feb 13 19:22:07.764199 systemd[1]: cri-containerd-156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8.scope: Consumed 34ms CPU time, 7.8M memory peak, 1M read from disk. Feb 13 19:22:07.796993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8-rootfs.mount: Deactivated successfully. 
Feb 13 19:22:07.802374 containerd[1536]: time="2025-02-13T19:22:07.802224419Z" level=info msg="shim disconnected" id=156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8 namespace=k8s.io Feb 13 19:22:07.802535 containerd[1536]: time="2025-02-13T19:22:07.802395381Z" level=warning msg="cleaning up after shim disconnected" id=156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8 namespace=k8s.io Feb 13 19:22:07.802535 containerd[1536]: time="2025-02-13T19:22:07.802413845Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:22:08.468243 systemd[1]: Started sshd@10-10.230.68.30:22-47.237.165.45:34574.service - OpenSSH per-connection server daemon (47.237.165.45:34574). Feb 13 19:22:08.585500 containerd[1536]: time="2025-02-13T19:22:08.585451747Z" level=info msg="CreateContainer within sandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:22:08.611749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount578147116.mount: Deactivated successfully. Feb 13 19:22:08.616678 containerd[1536]: time="2025-02-13T19:22:08.615466155Z" level=info msg="CreateContainer within sandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22\"" Feb 13 19:22:08.618656 containerd[1536]: time="2025-02-13T19:22:08.616949648Z" level=info msg="StartContainer for \"34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22\"" Feb 13 19:22:08.663991 systemd[1]: Started cri-containerd-34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22.scope - libcontainer container 34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22. Feb 13 19:22:08.707218 systemd[1]: cri-containerd-34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22.scope: Deactivated successfully. Feb 13 19:22:08.710127 containerd[1536]: time="2025-02-13T19:22:08.709877082Z" level=info msg="StartContainer for \"34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22\" returns successfully" Feb 13 19:22:08.745161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22-rootfs.mount: Deactivated successfully. Feb 13 19:22:08.752389 containerd[1536]: time="2025-02-13T19:22:08.752278189Z" level=info msg="shim disconnected" id=34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22 namespace=k8s.io Feb 13 19:22:08.752602 containerd[1536]: time="2025-02-13T19:22:08.752396491Z" level=warning msg="cleaning up after shim disconnected" id=34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22 namespace=k8s.io Feb 13 19:22:08.752602 containerd[1536]: time="2025-02-13T19:22:08.752449860Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:22:09.431003 sshd[3464]: Connection closed by authenticating user root 47.237.165.45 port 34574 [preauth] Feb 13 19:22:09.434749 systemd[1]: sshd@10-10.230.68.30:22-47.237.165.45:34574.service: Deactivated successfully. 
Feb 13 19:22:09.592351 containerd[1536]: time="2025-02-13T19:22:09.592161631Z" level=info msg="CreateContainer within sandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:22:09.618372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3642294135.mount: Deactivated successfully. Feb 13 19:22:09.619534 containerd[1536]: time="2025-02-13T19:22:09.619345954Z" level=info msg="CreateContainer within sandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7\"" Feb 13 19:22:09.622563 containerd[1536]: time="2025-02-13T19:22:09.620270458Z" level=info msg="StartContainer for \"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7\"" Feb 13 19:22:09.690038 systemd[1]: Started cri-containerd-f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7.scope - libcontainer container f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7. Feb 13 19:22:09.733857 containerd[1536]: time="2025-02-13T19:22:09.733737151Z" level=info msg="StartContainer for \"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7\" returns successfully" Feb 13 19:22:09.996599 kubelet[2838]: I0213 19:22:09.996417 2838 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:22:10.050839 kubelet[2838]: I0213 19:22:10.050213 2838 topology_manager.go:215] "Topology Admit Handler" podUID="54e4a4b1-bcbf-4244-a6cf-ea1d278166f7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-phkvz" Feb 13 19:22:10.051434 kubelet[2838]: I0213 19:22:10.051406 2838 topology_manager.go:215] "Topology Admit Handler" podUID="a72e6d26-60f0-40b7-8c25-9baa6b0d3a5b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-r56g2" Feb 13 19:22:10.066335 systemd[1]: Created slice kubepods-burstable-pod54e4a4b1_bcbf_4244_a6cf_ea1d278166f7.slice - libcontainer container kubepods-burstable-pod54e4a4b1_bcbf_4244_a6cf_ea1d278166f7.slice. Feb 13 19:22:10.079863 systemd[1]: Created slice kubepods-burstable-poda72e6d26_60f0_40b7_8c25_9baa6b0d3a5b.slice - libcontainer container kubepods-burstable-poda72e6d26_60f0_40b7_8c25_9baa6b0d3a5b.slice. 
Feb 13 19:22:10.162515 kubelet[2838]: I0213 19:22:10.162458 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a72e6d26-60f0-40b7-8c25-9baa6b0d3a5b-config-volume\") pod \"coredns-7db6d8ff4d-r56g2\" (UID: \"a72e6d26-60f0-40b7-8c25-9baa6b0d3a5b\") " pod="kube-system/coredns-7db6d8ff4d-r56g2" Feb 13 19:22:10.162515 kubelet[2838]: I0213 19:22:10.162523 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jzdv\" (UniqueName: \"kubernetes.io/projected/a72e6d26-60f0-40b7-8c25-9baa6b0d3a5b-kube-api-access-9jzdv\") pod \"coredns-7db6d8ff4d-r56g2\" (UID: \"a72e6d26-60f0-40b7-8c25-9baa6b0d3a5b\") " pod="kube-system/coredns-7db6d8ff4d-r56g2" Feb 13 19:22:10.162988 kubelet[2838]: I0213 19:22:10.162566 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54e4a4b1-bcbf-4244-a6cf-ea1d278166f7-config-volume\") pod \"coredns-7db6d8ff4d-phkvz\" (UID: \"54e4a4b1-bcbf-4244-a6cf-ea1d278166f7\") " pod="kube-system/coredns-7db6d8ff4d-phkvz" Feb 13 19:22:10.162988 kubelet[2838]: I0213 19:22:10.162596 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqvv9\" (UniqueName: \"kubernetes.io/projected/54e4a4b1-bcbf-4244-a6cf-ea1d278166f7-kube-api-access-gqvv9\") pod \"coredns-7db6d8ff4d-phkvz\" (UID: \"54e4a4b1-bcbf-4244-a6cf-ea1d278166f7\") " pod="kube-system/coredns-7db6d8ff4d-phkvz" Feb 13 19:22:10.376146 containerd[1536]: time="2025-02-13T19:22:10.376027052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-phkvz,Uid:54e4a4b1-bcbf-4244-a6cf-ea1d278166f7,Namespace:kube-system,Attempt:0,}" Feb 13 19:22:10.386595 containerd[1536]: time="2025-02-13T19:22:10.386439658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r56g2,Uid:a72e6d26-60f0-40b7-8c25-9baa6b0d3a5b,Namespace:kube-system,Attempt:0,}" Feb 13 19:22:10.621067 kubelet[2838]: I0213 19:22:10.620969 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jnvv4" podStartSLOduration=6.281260695 podStartE2EDuration="20.620922794s" podCreationTimestamp="2025-02-13 19:21:50 +0000 UTC" firstStartedPulling="2025-02-13 19:21:51.306051752 +0000 UTC m=+15.079150384" lastFinishedPulling="2025-02-13 19:22:05.645713845 +0000 UTC m=+29.418812483" observedRunningTime="2025-02-13 19:22:10.618836476 +0000 UTC m=+34.391935136" watchObservedRunningTime="2025-02-13 19:22:10.620922794 +0000 UTC m=+34.394021432" Feb 13 19:22:12.348823 systemd-networkd[1458]: cilium_host: Link UP Feb 13 19:22:12.349111 systemd-networkd[1458]: cilium_net: Link UP Feb 13 19:22:12.351635 systemd-networkd[1458]: cilium_net: Gained carrier Feb 13 19:22:12.352393 systemd-networkd[1458]: cilium_host: Gained carrier Feb 13 19:22:12.431065 systemd-networkd[1458]: cilium_net: Gained IPv6LL Feb 13 19:22:12.511472 systemd-networkd[1458]: cilium_vxlan: Link UP Feb 13 19:22:12.511496 systemd-networkd[1458]: cilium_vxlan: Gained carrier Feb 13 19:22:12.943017 systemd-networkd[1458]: cilium_host: Gained IPv6LL Feb 13 19:22:13.081813 kernel: NET: Registered PF_ALG protocol family Feb 13 19:22:14.082718 systemd-networkd[1458]: lxc_health: Link UP Feb 13 19:22:14.088034 systemd-networkd[1458]: lxc_health: Gained carrier Feb 13 19:22:14.160072 systemd-networkd[1458]: cilium_vxlan: Gained 
IPv6LL Feb 13 19:22:14.509550 systemd-networkd[1458]: lxc631b079af640: Link UP Feb 13 19:22:14.518995 kernel: eth0: renamed from tmp9ab14 Feb 13 19:22:14.560131 kernel: eth0: renamed from tmp1a36c Feb 13 19:22:14.562828 systemd-networkd[1458]: lxca492b1ff037b: Link UP Feb 13 19:22:14.563355 systemd-networkd[1458]: lxc631b079af640: Gained carrier Feb 13 19:22:14.569421 systemd-networkd[1458]: lxca492b1ff037b: Gained carrier Feb 13 19:22:15.439091 systemd-networkd[1458]: lxc_health: Gained IPv6LL Feb 13 19:22:16.207131 systemd-networkd[1458]: lxc631b079af640: Gained IPv6LL Feb 13 19:22:16.399067 systemd-networkd[1458]: lxca492b1ff037b: Gained IPv6LL Feb 13 19:22:20.003069 containerd[1536]: time="2025-02-13T19:22:20.002167259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:22:20.003069 containerd[1536]: time="2025-02-13T19:22:20.002294933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:22:20.003069 containerd[1536]: time="2025-02-13T19:22:20.002333416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:20.003069 containerd[1536]: time="2025-02-13T19:22:20.002458840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:20.049984 systemd[1]: Started cri-containerd-1a36cb63c174468fc5f4b8859db4f223ae391730068f262cf5795dd9e20ecbfe.scope - libcontainer container 1a36cb63c174468fc5f4b8859db4f223ae391730068f262cf5795dd9e20ecbfe. Feb 13 19:22:20.051954 containerd[1536]: time="2025-02-13T19:22:20.051209990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:22:20.051954 containerd[1536]: time="2025-02-13T19:22:20.051391150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:22:20.051954 containerd[1536]: time="2025-02-13T19:22:20.051411349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:20.051954 containerd[1536]: time="2025-02-13T19:22:20.051726108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:20.125886 systemd[1]: Started cri-containerd-9ab147d22cfdf61ee5ac07dd42b601271d01896ca47cbf437c823d20bde1f7c6.scope - libcontainer container 9ab147d22cfdf61ee5ac07dd42b601271d01896ca47cbf437c823d20bde1f7c6. 
Feb 13 19:22:20.195031 containerd[1536]: time="2025-02-13T19:22:20.194936804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-phkvz,Uid:54e4a4b1-bcbf-4244-a6cf-ea1d278166f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a36cb63c174468fc5f4b8859db4f223ae391730068f262cf5795dd9e20ecbfe\"" Feb 13 19:22:20.201250 containerd[1536]: time="2025-02-13T19:22:20.201079793Z" level=info msg="CreateContainer within sandbox \"1a36cb63c174468fc5f4b8859db4f223ae391730068f262cf5795dd9e20ecbfe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:22:20.238823 containerd[1536]: time="2025-02-13T19:22:20.238453876Z" level=info msg="CreateContainer within sandbox \"1a36cb63c174468fc5f4b8859db4f223ae391730068f262cf5795dd9e20ecbfe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2f84db2b939e3f42166215a48a811958810e8df8ce01031bbc5fb2211b5400c\"" Feb 13 19:22:20.239247 containerd[1536]: time="2025-02-13T19:22:20.239149438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r56g2,Uid:a72e6d26-60f0-40b7-8c25-9baa6b0d3a5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ab147d22cfdf61ee5ac07dd42b601271d01896ca47cbf437c823d20bde1f7c6\"" Feb 13 19:22:20.240344 containerd[1536]: time="2025-02-13T19:22:20.240291524Z" level=info msg="StartContainer for \"d2f84db2b939e3f42166215a48a811958810e8df8ce01031bbc5fb2211b5400c\"" Feb 13 19:22:20.246397 containerd[1536]: time="2025-02-13T19:22:20.244905364Z" level=info msg="CreateContainer within sandbox \"9ab147d22cfdf61ee5ac07dd42b601271d01896ca47cbf437c823d20bde1f7c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:22:20.274671 containerd[1536]: time="2025-02-13T19:22:20.272955460Z" level=info msg="CreateContainer within sandbox \"9ab147d22cfdf61ee5ac07dd42b601271d01896ca47cbf437c823d20bde1f7c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03f46777d9ba07760e8bcfbf92e9da380a231ed6253e6a71aea30352d7a2cf25\"" Feb 13 19:22:20.276203 containerd[1536]: time="2025-02-13T19:22:20.276169937Z" level=info msg="StartContainer for \"03f46777d9ba07760e8bcfbf92e9da380a231ed6253e6a71aea30352d7a2cf25\"" Feb 13 19:22:20.299383 systemd[1]: Started cri-containerd-d2f84db2b939e3f42166215a48a811958810e8df8ce01031bbc5fb2211b5400c.scope - libcontainer container d2f84db2b939e3f42166215a48a811958810e8df8ce01031bbc5fb2211b5400c. Feb 13 19:22:20.332974 systemd[1]: Started cri-containerd-03f46777d9ba07760e8bcfbf92e9da380a231ed6253e6a71aea30352d7a2cf25.scope - libcontainer container 03f46777d9ba07760e8bcfbf92e9da380a231ed6253e6a71aea30352d7a2cf25. 
Feb 13 19:22:20.391971 containerd[1536]: time="2025-02-13T19:22:20.391521381Z" level=info msg="StartContainer for \"d2f84db2b939e3f42166215a48a811958810e8df8ce01031bbc5fb2211b5400c\" returns successfully" Feb 13 19:22:20.393277 containerd[1536]: time="2025-02-13T19:22:20.393208284Z" level=info msg="StartContainer for \"03f46777d9ba07760e8bcfbf92e9da380a231ed6253e6a71aea30352d7a2cf25\" returns successfully" Feb 13 19:22:20.666256 kubelet[2838]: I0213 19:22:20.666098 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-r56g2" podStartSLOduration=30.666077239 podStartE2EDuration="30.666077239s" podCreationTimestamp="2025-02-13 19:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:22:20.662975606 +0000 UTC m=+44.436074263" watchObservedRunningTime="2025-02-13 19:22:20.666077239 +0000 UTC m=+44.439175877" Feb 13 19:22:20.689476 kubelet[2838]: I0213 19:22:20.688409 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-phkvz" podStartSLOduration=30.688389505 podStartE2EDuration="30.688389505s" podCreationTimestamp="2025-02-13 19:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:22:20.687633272 +0000 UTC m=+44.460731926" watchObservedRunningTime="2025-02-13 19:22:20.688389505 +0000 UTC m=+44.461488151" Feb 13 19:22:21.013001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3555629561.mount: Deactivated successfully. Feb 13 19:23:24.595300 systemd[1]: Started sshd@11-10.230.68.30:22-139.178.89.65:57836.service - OpenSSH per-connection server daemon (139.178.89.65:57836). Feb 13 19:23:25.556537 sshd[4235]: Accepted publickey for core from 139.178.89.65 port 57836 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:23:25.559361 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:25.570018 systemd-logind[1514]: New session 12 of user core. Feb 13 19:23:25.581047 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:23:26.708181 sshd[4237]: Connection closed by 139.178.89.65 port 57836 Feb 13 19:23:26.708449 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:26.712717 systemd[1]: sshd@11-10.230.68.30:22-139.178.89.65:57836.service: Deactivated successfully. Feb 13 19:23:26.715549 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:23:26.717658 systemd-logind[1514]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:23:26.720192 systemd-logind[1514]: Removed session 12. Feb 13 19:23:31.876295 systemd[1]: Started sshd@12-10.230.68.30:22-139.178.89.65:49172.service - OpenSSH per-connection server daemon (139.178.89.65:49172). Feb 13 19:23:32.768797 sshd[4250]: Accepted publickey for core from 139.178.89.65 port 49172 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:23:32.771056 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:32.778256 systemd-logind[1514]: New session 13 of user core. Feb 13 19:23:32.783948 systemd[1]: Started session-13.scope - Session 13 of User core. 
Feb 13 19:23:33.494299 sshd[4252]: Connection closed by 139.178.89.65 port 49172 Feb 13 19:23:33.495291 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:33.499478 systemd[1]: sshd@12-10.230.68.30:22-139.178.89.65:49172.service: Deactivated successfully. Feb 13 19:23:33.502319 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:23:33.504420 systemd-logind[1514]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:23:33.505851 systemd-logind[1514]: Removed session 13. Feb 13 19:23:38.657173 systemd[1]: Started sshd@13-10.230.68.30:22-139.178.89.65:35000.service - OpenSSH per-connection server daemon (139.178.89.65:35000). Feb 13 19:23:39.548779 sshd[4266]: Accepted publickey for core from 139.178.89.65 port 35000 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:23:39.550780 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:39.558883 systemd-logind[1514]: New session 14 of user core. Feb 13 19:23:39.571053 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:23:40.259603 sshd[4268]: Connection closed by 139.178.89.65 port 35000 Feb 13 19:23:40.261229 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:40.272998 systemd[1]: sshd@13-10.230.68.30:22-139.178.89.65:35000.service: Deactivated successfully. Feb 13 19:23:40.276569 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:23:40.278562 systemd-logind[1514]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:23:40.280575 systemd-logind[1514]: Removed session 14. Feb 13 19:23:40.422169 systemd[1]: Started sshd@14-10.230.68.30:22-139.178.89.65:35008.service - OpenSSH per-connection server daemon (139.178.89.65:35008). Feb 13 19:23:41.323929 sshd[4281]: Accepted publickey for core from 139.178.89.65 port 35008 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:23:41.325956 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:41.333728 systemd-logind[1514]: New session 15 of user core. Feb 13 19:23:41.344988 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:23:42.100996 sshd[4283]: Connection closed by 139.178.89.65 port 35008 Feb 13 19:23:42.101491 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:42.107183 systemd[1]: sshd@14-10.230.68.30:22-139.178.89.65:35008.service: Deactivated successfully. Feb 13 19:23:42.110215 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:23:42.111667 systemd-logind[1514]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:23:42.113323 systemd-logind[1514]: Removed session 15. Feb 13 19:23:42.263175 systemd[1]: Started sshd@15-10.230.68.30:22-139.178.89.65:35012.service - OpenSSH per-connection server daemon (139.178.89.65:35012). Feb 13 19:23:43.172218 sshd[4293]: Accepted publickey for core from 139.178.89.65 port 35012 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:23:43.174328 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:43.180829 systemd-logind[1514]: New session 16 of user core. Feb 13 19:23:43.191007 systemd[1]: Started session-16.scope - Session 16 of User core. 
Feb 13 19:23:43.885054 sshd[4295]: Connection closed by 139.178.89.65 port 35012 Feb 13 19:23:43.886110 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:43.891826 systemd[1]: sshd@15-10.230.68.30:22-139.178.89.65:35012.service: Deactivated successfully. Feb 13 19:23:43.895329 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:23:43.896427 systemd-logind[1514]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:23:43.897683 systemd-logind[1514]: Removed session 16. Feb 13 19:23:49.047447 systemd[1]: Started sshd@16-10.230.68.30:22-139.178.89.65:47548.service - OpenSSH per-connection server daemon (139.178.89.65:47548). Feb 13 19:23:49.948240 sshd[4306]: Accepted publickey for core from 139.178.89.65 port 47548 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:23:49.950135 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:49.958998 systemd-logind[1514]: New session 17 of user core. Feb 13 19:23:49.962077 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:23:50.657822 sshd[4308]: Connection closed by 139.178.89.65 port 47548 Feb 13 19:23:50.656756 sshd-session[4306]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:50.661497 systemd[1]: sshd@16-10.230.68.30:22-139.178.89.65:47548.service: Deactivated successfully. Feb 13 19:23:50.661975 systemd-logind[1514]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:23:50.665843 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:23:50.669191 systemd-logind[1514]: Removed session 17. Feb 13 19:23:55.815331 systemd[1]: Started sshd@17-10.230.68.30:22-139.178.89.65:37298.service - OpenSSH per-connection server daemon (139.178.89.65:37298). Feb 13 19:23:56.705174 sshd[4322]: Accepted publickey for core from 139.178.89.65 port 37298 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:23:56.707826 sshd-session[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:56.717087 systemd-logind[1514]: New session 18 of user core. Feb 13 19:23:56.724060 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:23:57.469403 sshd[4324]: Connection closed by 139.178.89.65 port 37298 Feb 13 19:23:57.470533 sshd-session[4322]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:57.476156 systemd[1]: sshd@17-10.230.68.30:22-139.178.89.65:37298.service: Deactivated successfully. Feb 13 19:23:57.479855 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:23:57.481041 systemd-logind[1514]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:23:57.482649 systemd-logind[1514]: Removed session 18. Feb 13 19:23:57.630219 systemd[1]: Started sshd@18-10.230.68.30:22-139.178.89.65:37308.service - OpenSSH per-connection server daemon (139.178.89.65:37308). Feb 13 19:23:58.535465 sshd[4336]: Accepted publickey for core from 139.178.89.65 port 37308 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:23:58.537539 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:58.545017 systemd-logind[1514]: New session 19 of user core. Feb 13 19:23:58.560089 systemd[1]: Started session-19.scope - Session 19 of User core. 
Feb 13 19:23:59.633061 sshd[4338]: Connection closed by 139.178.89.65 port 37308 Feb 13 19:23:59.633971 sshd-session[4336]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:59.642381 systemd[1]: sshd@18-10.230.68.30:22-139.178.89.65:37308.service: Deactivated successfully. Feb 13 19:23:59.645621 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:23:59.647262 systemd-logind[1514]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:23:59.649346 systemd-logind[1514]: Removed session 19. Feb 13 19:23:59.794101 systemd[1]: Started sshd@19-10.230.68.30:22-139.178.89.65:37314.service - OpenSSH per-connection server daemon (139.178.89.65:37314). Feb 13 19:24:00.722522 sshd[4348]: Accepted publickey for core from 139.178.89.65 port 37314 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:24:00.725997 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:00.735937 systemd-logind[1514]: New session 20 of user core. Feb 13 19:24:00.741053 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:24:03.536937 sshd[4351]: Connection closed by 139.178.89.65 port 37314 Feb 13 19:24:03.539291 sshd-session[4348]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:03.552629 systemd[1]: sshd@19-10.230.68.30:22-139.178.89.65:37314.service: Deactivated successfully. Feb 13 19:24:03.556326 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:24:03.557968 systemd-logind[1514]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:24:03.560269 systemd-logind[1514]: Removed session 20. Feb 13 19:24:03.702258 systemd[1]: Started sshd@20-10.230.68.30:22-139.178.89.65:37324.service - OpenSSH per-connection server daemon (139.178.89.65:37324). Feb 13 19:24:04.611399 sshd[4368]: Accepted publickey for core from 139.178.89.65 port 37324 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:24:04.613595 sshd-session[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:04.621105 systemd-logind[1514]: New session 21 of user core. Feb 13 19:24:04.626969 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:24:05.546962 sshd[4370]: Connection closed by 139.178.89.65 port 37324 Feb 13 19:24:05.547583 sshd-session[4368]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:05.552728 systemd[1]: sshd@20-10.230.68.30:22-139.178.89.65:37324.service: Deactivated successfully. Feb 13 19:24:05.556169 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:24:05.558505 systemd-logind[1514]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:24:05.560082 systemd-logind[1514]: Removed session 21. Feb 13 19:24:05.720135 systemd[1]: Started sshd@21-10.230.68.30:22-139.178.89.65:44384.service - OpenSSH per-connection server daemon (139.178.89.65:44384). Feb 13 19:24:06.616168 sshd[4379]: Accepted publickey for core from 139.178.89.65 port 44384 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:24:06.618154 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:06.625608 systemd-logind[1514]: New session 22 of user core. Feb 13 19:24:06.635992 systemd[1]: Started session-22.scope - Session 22 of User core. 
Feb 13 19:24:07.328854 sshd[4381]: Connection closed by 139.178.89.65 port 44384 Feb 13 19:24:07.329893 sshd-session[4379]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:07.335269 systemd[1]: sshd@21-10.230.68.30:22-139.178.89.65:44384.service: Deactivated successfully. Feb 13 19:24:07.338389 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:24:07.339574 systemd-logind[1514]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:24:07.341384 systemd-logind[1514]: Removed session 22. Feb 13 19:24:12.492241 systemd[1]: Started sshd@22-10.230.68.30:22-139.178.89.65:44394.service - OpenSSH per-connection server daemon (139.178.89.65:44394). Feb 13 19:24:13.381342 sshd[4396]: Accepted publickey for core from 139.178.89.65 port 44394 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:24:13.383638 sshd-session[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:13.391314 systemd-logind[1514]: New session 23 of user core. Feb 13 19:24:13.401082 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:24:14.089968 sshd[4398]: Connection closed by 139.178.89.65 port 44394 Feb 13 19:24:14.090912 sshd-session[4396]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:14.095108 systemd-logind[1514]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:24:14.095512 systemd[1]: sshd@22-10.230.68.30:22-139.178.89.65:44394.service: Deactivated successfully. Feb 13 19:24:14.098323 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:24:14.100987 systemd-logind[1514]: Removed session 23. Feb 13 19:24:19.252184 systemd[1]: Started sshd@23-10.230.68.30:22-139.178.89.65:44452.service - OpenSSH per-connection server daemon (139.178.89.65:44452). Feb 13 19:24:20.145078 sshd[4410]: Accepted publickey for core from 139.178.89.65 port 44452 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:24:20.147476 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:20.157248 systemd-logind[1514]: New session 24 of user core. Feb 13 19:24:20.163130 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:24:20.861786 sshd[4412]: Connection closed by 139.178.89.65 port 44452 Feb 13 19:24:20.862983 sshd-session[4410]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:20.869277 systemd[1]: sshd@23-10.230.68.30:22-139.178.89.65:44452.service: Deactivated successfully. Feb 13 19:24:20.872724 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:24:20.874636 systemd-logind[1514]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:24:20.876613 systemd-logind[1514]: Removed session 24. Feb 13 19:24:26.027296 systemd[1]: Started sshd@24-10.230.68.30:22-139.178.89.65:53740.service - OpenSSH per-connection server daemon (139.178.89.65:53740). Feb 13 19:24:26.925088 sshd[4425]: Accepted publickey for core from 139.178.89.65 port 53740 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:24:26.926561 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:26.935364 systemd-logind[1514]: New session 25 of user core. Feb 13 19:24:26.942039 systemd[1]: Started session-25.scope - Session 25 of User core. 
Feb 13 19:24:27.619383 sshd[4427]: Connection closed by 139.178.89.65 port 53740 Feb 13 19:24:27.620490 sshd-session[4425]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:27.625565 systemd[1]: sshd@24-10.230.68.30:22-139.178.89.65:53740.service: Deactivated successfully. Feb 13 19:24:27.628739 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:24:27.629955 systemd-logind[1514]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:24:27.632274 systemd-logind[1514]: Removed session 25. Feb 13 19:24:27.779129 systemd[1]: Started sshd@25-10.230.68.30:22-139.178.89.65:53750.service - OpenSSH per-connection server daemon (139.178.89.65:53750). Feb 13 19:24:28.682449 sshd[4439]: Accepted publickey for core from 139.178.89.65 port 53750 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 19:24:28.684864 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:28.693981 systemd-logind[1514]: New session 26 of user core. Feb 13 19:24:28.701151 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:24:30.952680 containerd[1536]: time="2025-02-13T19:24:30.952470285Z" level=info msg="StopContainer for \"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d\" with timeout 30 (s)" Feb 13 19:24:30.964705 containerd[1536]: time="2025-02-13T19:24:30.964103412Z" level=info msg="Stop container \"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d\" with signal terminated" Feb 13 19:24:31.022036 systemd[1]: cri-containerd-3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d.scope: Deactivated successfully. Feb 13 19:24:31.023249 systemd[1]: cri-containerd-3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d.scope: Consumed 600ms CPU time, 31.4M memory peak, 9.2M read from disk, 4K written to disk. Feb 13 19:24:31.050008 containerd[1536]: time="2025-02-13T19:24:31.049915353Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:24:31.059691 containerd[1536]: time="2025-02-13T19:24:31.059469693Z" level=info msg="StopContainer for \"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7\" with timeout 2 (s)" Feb 13 19:24:31.060190 containerd[1536]: time="2025-02-13T19:24:31.060084133Z" level=info msg="Stop container \"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7\" with signal terminated" Feb 13 19:24:31.088941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d-rootfs.mount: Deactivated successfully. 
Feb 13 19:24:31.094746 systemd-networkd[1458]: lxc_health: Link DOWN Feb 13 19:24:31.094774 systemd-networkd[1458]: lxc_health: Lost carrier Feb 13 19:24:31.100729 containerd[1536]: time="2025-02-13T19:24:31.100563060Z" level=info msg="shim disconnected" id=3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d namespace=k8s.io Feb 13 19:24:31.101233 containerd[1536]: time="2025-02-13T19:24:31.100735992Z" level=warning msg="cleaning up after shim disconnected" id=3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d namespace=k8s.io Feb 13 19:24:31.101684 containerd[1536]: time="2025-02-13T19:24:31.101639573Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:24:31.121329 systemd[1]: cri-containerd-f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7.scope: Deactivated successfully. Feb 13 19:24:31.122700 systemd[1]: cri-containerd-f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7.scope: Consumed 10.007s CPU time, 194.2M memory peak, 69.6M read from disk, 13.3M written to disk. Feb 13 19:24:31.145811 containerd[1536]: time="2025-02-13T19:24:31.145520246Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:24:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:24:31.149407 containerd[1536]: time="2025-02-13T19:24:31.149373093Z" level=info msg="StopContainer for \"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d\" returns successfully" Feb 13 19:24:31.151069 containerd[1536]: time="2025-02-13T19:24:31.150973804Z" level=info msg="StopPodSandbox for \"1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec\"" Feb 13 19:24:31.160411 containerd[1536]: time="2025-02-13T19:24:31.153078211Z" level=info msg="Container to stop \"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:24:31.164511 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec-shm.mount: Deactivated successfully. Feb 13 19:24:31.168988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7-rootfs.mount: Deactivated successfully. Feb 13 19:24:31.183830 containerd[1536]: time="2025-02-13T19:24:31.183692211Z" level=info msg="shim disconnected" id=f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7 namespace=k8s.io Feb 13 19:24:31.184428 containerd[1536]: time="2025-02-13T19:24:31.183895610Z" level=warning msg="cleaning up after shim disconnected" id=f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7 namespace=k8s.io Feb 13 19:24:31.184428 containerd[1536]: time="2025-02-13T19:24:31.183914784Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:24:31.187421 systemd[1]: cri-containerd-1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec.scope: Deactivated successfully. 
Feb 13 19:24:31.222990 containerd[1536]: time="2025-02-13T19:24:31.221701301Z" level=info msg="StopContainer for \"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7\" returns successfully" Feb 13 19:24:31.222990 containerd[1536]: time="2025-02-13T19:24:31.222391856Z" level=info msg="StopPodSandbox for \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\"" Feb 13 19:24:31.222990 containerd[1536]: time="2025-02-13T19:24:31.222429495Z" level=info msg="Container to stop \"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:24:31.222990 containerd[1536]: time="2025-02-13T19:24:31.222470818Z" level=info msg="Container to stop \"34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:24:31.222990 containerd[1536]: time="2025-02-13T19:24:31.222485938Z" level=info msg="Container to stop \"156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:24:31.222990 containerd[1536]: time="2025-02-13T19:24:31.222518949Z" level=info msg="Container to stop \"2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:24:31.222990 containerd[1536]: time="2025-02-13T19:24:31.222542009Z" level=info msg="Container to stop \"582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:24:31.228831 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535-shm.mount: Deactivated successfully. Feb 13 19:24:31.243397 systemd[1]: cri-containerd-da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535.scope: Deactivated successfully. 
Feb 13 19:24:31.252635 containerd[1536]: time="2025-02-13T19:24:31.252547436Z" level=info msg="shim disconnected" id=1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec namespace=k8s.io Feb 13 19:24:31.252635 containerd[1536]: time="2025-02-13T19:24:31.252613500Z" level=warning msg="cleaning up after shim disconnected" id=1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec namespace=k8s.io Feb 13 19:24:31.252635 containerd[1536]: time="2025-02-13T19:24:31.252628604Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:24:31.274849 containerd[1536]: time="2025-02-13T19:24:31.274716993Z" level=info msg="TearDown network for sandbox \"1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec\" successfully" Feb 13 19:24:31.274849 containerd[1536]: time="2025-02-13T19:24:31.274809424Z" level=info msg="StopPodSandbox for \"1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec\" returns successfully" Feb 13 19:24:31.297445 containerd[1536]: time="2025-02-13T19:24:31.296187259Z" level=info msg="shim disconnected" id=da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535 namespace=k8s.io Feb 13 19:24:31.297445 containerd[1536]: time="2025-02-13T19:24:31.296248589Z" level=warning msg="cleaning up after shim disconnected" id=da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535 namespace=k8s.io Feb 13 19:24:31.297445 containerd[1536]: time="2025-02-13T19:24:31.296261714Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:24:31.322818 containerd[1536]: time="2025-02-13T19:24:31.322677242Z" level=info msg="TearDown network for sandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" successfully" Feb 13 19:24:31.323176 containerd[1536]: time="2025-02-13T19:24:31.323026780Z" level=info msg="StopPodSandbox for \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" returns successfully" Feb 13 19:24:31.403087 kubelet[2838]: I0213 19:24:31.402584 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74pkj\" (UniqueName: \"kubernetes.io/projected/9407df4b-472a-45a6-82d6-a6c17818fd9b-kube-api-access-74pkj\") pod \"9407df4b-472a-45a6-82d6-a6c17818fd9b\" (UID: \"9407df4b-472a-45a6-82d6-a6c17818fd9b\") " Feb 13 19:24:31.403087 kubelet[2838]: I0213 19:24:31.402704 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9407df4b-472a-45a6-82d6-a6c17818fd9b-cilium-config-path\") pod \"9407df4b-472a-45a6-82d6-a6c17818fd9b\" (UID: \"9407df4b-472a-45a6-82d6-a6c17818fd9b\") " Feb 13 19:24:31.418032 kubelet[2838]: I0213 19:24:31.416525 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9407df4b-472a-45a6-82d6-a6c17818fd9b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9407df4b-472a-45a6-82d6-a6c17818fd9b" (UID: "9407df4b-472a-45a6-82d6-a6c17818fd9b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:24:31.431749 kubelet[2838]: I0213 19:24:31.431624 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9407df4b-472a-45a6-82d6-a6c17818fd9b-kube-api-access-74pkj" (OuterVolumeSpecName: "kube-api-access-74pkj") pod "9407df4b-472a-45a6-82d6-a6c17818fd9b" (UID: "9407df4b-472a-45a6-82d6-a6c17818fd9b"). InnerVolumeSpecName "kube-api-access-74pkj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:24:31.505216 kubelet[2838]: I0213 19:24:31.503880 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-clustermesh-secrets\") pod \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " Feb 13 19:24:31.505216 kubelet[2838]: I0213 19:24:31.503942 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-bpf-maps\") pod \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " Feb 13 19:24:31.505216 kubelet[2838]: I0213 19:24:31.503970 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-lib-modules\") pod \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " Feb 13 19:24:31.505216 kubelet[2838]: I0213 19:24:31.504004 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-host-proc-sys-net\") pod \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " Feb 13 19:24:31.505216 kubelet[2838]: I0213 19:24:31.504030 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cilium-cgroup\") pod \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " Feb 13 19:24:31.505216 kubelet[2838]: I0213 19:24:31.504063 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cilium-config-path\") pod \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " Feb 13 19:24:31.505665 kubelet[2838]: I0213 19:24:31.504101 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-etc-cni-netd\") pod \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " Feb 13 19:24:31.505665 kubelet[2838]: I0213 19:24:31.504128 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-xtables-lock\") pod \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " Feb 13 19:24:31.505665 kubelet[2838]: I0213 19:24:31.504170 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-host-proc-sys-kernel\") pod \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " Feb 13 19:24:31.505665 kubelet[2838]: I0213 19:24:31.504201 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hfkg\" (UniqueName: \"kubernetes.io/projected/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-kube-api-access-8hfkg\") pod \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\" (UID: 
\"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " Feb 13 19:24:31.505665 kubelet[2838]: I0213 19:24:31.504234 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-hostproc\") pod \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " Feb 13 19:24:31.505665 kubelet[2838]: I0213 19:24:31.504261 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-hubble-tls\") pod \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " Feb 13 19:24:31.506041 kubelet[2838]: I0213 19:24:31.504285 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cilium-run\") pod \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " Feb 13 19:24:31.506041 kubelet[2838]: I0213 19:24:31.504325 2838 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cni-path\") pod \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\" (UID: \"94c62ab6-ca3c-4883-8aa8-b9c674c0c22f\") " Feb 13 19:24:31.508507 kubelet[2838]: I0213 19:24:31.506440 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" (UID: "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:24:31.508507 kubelet[2838]: I0213 19:24:31.506513 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" (UID: "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:24:31.508507 kubelet[2838]: I0213 19:24:31.506546 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" (UID: "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:24:31.508507 kubelet[2838]: I0213 19:24:31.506578 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" (UID: "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:24:31.508507 kubelet[2838]: I0213 19:24:31.506620 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" (UID: "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:24:31.508969 kubelet[2838]: I0213 19:24:31.508939 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" (UID: "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:24:31.509472 kubelet[2838]: I0213 19:24:31.509439 2838 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-74pkj\" (UniqueName: \"kubernetes.io/projected/9407df4b-472a-45a6-82d6-a6c17818fd9b-kube-api-access-74pkj\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.509575 kubelet[2838]: I0213 19:24:31.509493 2838 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9407df4b-472a-45a6-82d6-a6c17818fd9b-cilium-config-path\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.509575 kubelet[2838]: I0213 19:24:31.509534 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cni-path" (OuterVolumeSpecName: "cni-path") pod "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" (UID: "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:24:31.509701 kubelet[2838]: I0213 19:24:31.509592 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-hostproc" (OuterVolumeSpecName: "hostproc") pod "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" (UID: "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:24:31.510542 kubelet[2838]: I0213 19:24:31.510501 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" (UID: "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:24:31.510616 kubelet[2838]: I0213 19:24:31.510560 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" (UID: "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:24:31.510616 kubelet[2838]: I0213 19:24:31.510595 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" (UID: "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:24:31.514077 kubelet[2838]: I0213 19:24:31.514022 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" (UID: "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:24:31.514541 kubelet[2838]: I0213 19:24:31.514503 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" (UID: "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:24:31.515375 kubelet[2838]: I0213 19:24:31.515342 2838 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-kube-api-access-8hfkg" (OuterVolumeSpecName: "kube-api-access-8hfkg") pod "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" (UID: "94c62ab6-ca3c-4883-8aa8-b9c674c0c22f"). InnerVolumeSpecName "kube-api-access-8hfkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:24:31.610571 kubelet[2838]: I0213 19:24:31.609805 2838 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-host-proc-sys-kernel\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.610841 kubelet[2838]: I0213 19:24:31.610679 2838 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8hfkg\" (UniqueName: \"kubernetes.io/projected/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-kube-api-access-8hfkg\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.610841 kubelet[2838]: I0213 19:24:31.610712 2838 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-hostproc\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.610841 kubelet[2838]: I0213 19:24:31.610734 2838 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-hubble-tls\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.610841 kubelet[2838]: I0213 19:24:31.610748 2838 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cilium-run\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.610841 kubelet[2838]: I0213 19:24:31.610783 2838 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cni-path\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.610841 kubelet[2838]: I0213 19:24:31.610803 2838 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-clustermesh-secrets\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.610841 kubelet[2838]: I0213 19:24:31.610833 2838 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-bpf-maps\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.611177 kubelet[2838]: I0213 19:24:31.610865 2838 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-lib-modules\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.611177 kubelet[2838]: I0213 19:24:31.610883 2838 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-host-proc-sys-net\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.611177 kubelet[2838]: I0213 19:24:31.610898 2838 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cilium-cgroup\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.611177 kubelet[2838]: I0213 19:24:31.610912 2838 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-cilium-config-path\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.613124 kubelet[2838]: I0213 19:24:31.613097 2838 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-etc-cni-netd\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.613224 kubelet[2838]: I0213 19:24:31.613131 2838 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f-xtables-lock\") on node \"srv-g6z5b.gb1.brightbox.com\" DevicePath \"\"" Feb 13 19:24:31.719276 kubelet[2838]: E0213 19:24:31.712784 2838 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:24:32.004143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535-rootfs.mount: Deactivated successfully. Feb 13 19:24:32.004426 systemd[1]: var-lib-kubelet-pods-94c62ab6\x2dca3c\x2d4883\x2d8aa8\x2db9c674c0c22f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:24:32.004589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec-rootfs.mount: Deactivated successfully. Feb 13 19:24:32.006067 systemd[1]: var-lib-kubelet-pods-9407df4b\x2d472a\x2d45a6\x2d82d6\x2da6c17818fd9b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74pkj.mount: Deactivated successfully. Feb 13 19:24:32.006355 systemd[1]: var-lib-kubelet-pods-94c62ab6\x2dca3c\x2d4883\x2d8aa8\x2db9c674c0c22f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8hfkg.mount: Deactivated successfully. Feb 13 19:24:32.006500 systemd[1]: var-lib-kubelet-pods-94c62ab6\x2dca3c\x2d4883\x2d8aa8\x2db9c674c0c22f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 13 19:24:32.031853 kubelet[2838]: I0213 19:24:32.031797 2838 scope.go:117] "RemoveContainer" containerID="f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7"
Feb 13 19:24:32.051417 systemd[1]: Removed slice kubepods-burstable-pod94c62ab6_ca3c_4883_8aa8_b9c674c0c22f.slice - libcontainer container kubepods-burstable-pod94c62ab6_ca3c_4883_8aa8_b9c674c0c22f.slice.
Feb 13 19:24:32.051565 systemd[1]: kubepods-burstable-pod94c62ab6_ca3c_4883_8aa8_b9c674c0c22f.slice: Consumed 10.137s CPU time, 194.6M memory peak, 70.6M read from disk, 13.3M written to disk.
Feb 13 19:24:32.053509 containerd[1536]: time="2025-02-13T19:24:32.053176387Z" level=info msg="RemoveContainer for \"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7\""
Feb 13 19:24:32.056948 systemd[1]: Removed slice kubepods-besteffort-pod9407df4b_472a_45a6_82d6_a6c17818fd9b.slice - libcontainer container kubepods-besteffort-pod9407df4b_472a_45a6_82d6_a6c17818fd9b.slice.
Feb 13 19:24:32.057096 systemd[1]: kubepods-besteffort-pod9407df4b_472a_45a6_82d6_a6c17818fd9b.slice: Consumed 636ms CPU time, 31.7M memory peak, 9.2M read from disk, 4K written to disk.
Feb 13 19:24:32.062635 containerd[1536]: time="2025-02-13T19:24:32.062597221Z" level=info msg="RemoveContainer for \"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7\" returns successfully"
Feb 13 19:24:32.063519 kubelet[2838]: I0213 19:24:32.063315 2838 scope.go:117] "RemoveContainer" containerID="34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22"
Feb 13 19:24:32.064952 containerd[1536]: time="2025-02-13T19:24:32.064918939Z" level=info msg="RemoveContainer for \"34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22\""
Feb 13 19:24:32.069246 containerd[1536]: time="2025-02-13T19:24:32.069138672Z" level=info msg="RemoveContainer for \"34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22\" returns successfully"
Feb 13 19:24:32.069618 kubelet[2838]: I0213 19:24:32.069508 2838 scope.go:117] "RemoveContainer" containerID="156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8"
Feb 13 19:24:32.072099 containerd[1536]: time="2025-02-13T19:24:32.072056762Z" level=info msg="RemoveContainer for \"156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8\""
Feb 13 19:24:32.078030 containerd[1536]: time="2025-02-13T19:24:32.077982746Z" level=info msg="RemoveContainer for \"156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8\" returns successfully"
Feb 13 19:24:32.078425 kubelet[2838]: I0213 19:24:32.078294 2838 scope.go:117] "RemoveContainer" containerID="582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db"
Feb 13 19:24:32.083070 containerd[1536]: time="2025-02-13T19:24:32.082545058Z" level=info msg="RemoveContainer for \"582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db\""
Feb 13 19:24:32.086570 containerd[1536]: time="2025-02-13T19:24:32.086525948Z" level=info msg="RemoveContainer for \"582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db\" returns successfully"
Feb 13 19:24:32.088074 containerd[1536]: time="2025-02-13T19:24:32.087884459Z" level=info msg="RemoveContainer for \"2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51\""
Feb 13 19:24:32.088160 kubelet[2838]: I0213 19:24:32.086723 2838 scope.go:117] "RemoveContainer" containerID="2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51"
Feb 13 19:24:32.092300 containerd[1536]: time="2025-02-13T19:24:32.092261651Z" level=info msg="RemoveContainer for \"2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51\" returns successfully"
Feb 13 19:24:32.093927 kubelet[2838]: I0213 19:24:32.093175 2838 scope.go:117] "RemoveContainer" containerID="f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7"
Feb 13 19:24:32.094196 containerd[1536]: time="2025-02-13T19:24:32.094136976Z" level=error msg="ContainerStatus for \"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7\": not found"
Feb 13 19:24:32.095713 kubelet[2838]: E0213 19:24:32.095362 2838 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7\": not found" containerID="f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7"
Feb 13 19:24:32.095713 kubelet[2838]: I0213 19:24:32.095429 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7"} err="failed to get container status \"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"f4618b62fc782654f9182804168b673a60960b2bfbc5190606de2ec3e52448a7\": not found"
Feb 13 19:24:32.095713 kubelet[2838]: I0213 19:24:32.095544 2838 scope.go:117] "RemoveContainer" containerID="34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22"
Feb 13 19:24:32.096374 containerd[1536]: time="2025-02-13T19:24:32.096093841Z" level=error msg="ContainerStatus for \"34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22\": not found"
Feb 13 19:24:32.097876 kubelet[2838]: E0213 19:24:32.097826 2838 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22\": not found" containerID="34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22"
Feb 13 19:24:32.098075 kubelet[2838]: I0213 19:24:32.097984 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22"} err="failed to get container status \"34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22\": rpc error: code = NotFound desc = an error occurred when try to find container \"34b6dae209d45942f1c306afc7881074322ac27d66bf4651fda7181a8c1c2e22\": not found"
Feb 13 19:24:32.098075 kubelet[2838]: I0213 19:24:32.098011 2838 scope.go:117] "RemoveContainer" containerID="156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8"
Feb 13 19:24:32.098687 containerd[1536]: time="2025-02-13T19:24:32.098405229Z" level=error msg="ContainerStatus for \"156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8\": not found"
Feb 13 19:24:32.098898 kubelet[2838]: E0213 19:24:32.098580 2838 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8\": not found" containerID="156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8"
Feb 13 19:24:32.098898 kubelet[2838]: I0213 19:24:32.098627 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8"} err="failed to get container status \"156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8\": rpc error: code = NotFound desc = an error occurred when try to find container \"156c903550fe58a2011d3eaa0dd1da1fa6a011ce34e0bd80c0e9fa1f6aec0bc8\": not found"
Feb 13 19:24:32.099401 kubelet[2838]: I0213 19:24:32.099041 2838 scope.go:117] "RemoveContainer" containerID="582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db"
Feb 13 19:24:32.099603 containerd[1536]: time="2025-02-13T19:24:32.099343366Z" level=error msg="ContainerStatus for \"582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db\": not found"
Feb 13 19:24:32.099922 kubelet[2838]: E0213 19:24:32.099790 2838 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db\": not found" containerID="582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db"
Feb 13 19:24:32.099922 kubelet[2838]: I0213 19:24:32.099824 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db"} err="failed to get container status \"582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db\": rpc error: code = NotFound desc = an error occurred when try to find container \"582a9a095ae612aaa5ab8cfdb73543e0d4bab2d1df2a8e5bd32191b95f1696db\": not found"
Feb 13 19:24:32.099922 kubelet[2838]: I0213 19:24:32.099846 2838 scope.go:117] "RemoveContainer" containerID="2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51"
Feb 13 19:24:32.100993 containerd[1536]: time="2025-02-13T19:24:32.100231837Z" level=error msg="ContainerStatus for \"2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51\": not found"
Feb 13 19:24:32.101070 kubelet[2838]: E0213 19:24:32.100923 2838 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51\": not found" containerID="2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51"
Feb 13 19:24:32.101070 kubelet[2838]: I0213 19:24:32.100953 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51"} err="failed to get container status \"2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b26b4e48666e9b47d77a967cdf47ae8ebecd4b692b61576a187fb0f169aac51\": not found"
Feb 13 19:24:32.101887 kubelet[2838]: I0213 19:24:32.100975 2838 scope.go:117] "RemoveContainer" containerID="3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d"
Feb 13 19:24:32.104610 containerd[1536]: time="2025-02-13T19:24:32.104518759Z" level=info msg="RemoveContainer for \"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d\""
Feb 13 19:24:32.109494 containerd[1536]: time="2025-02-13T19:24:32.109306566Z" level=info msg="RemoveContainer for \"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d\" returns successfully"
Feb 13 19:24:32.109585 kubelet[2838]: I0213 19:24:32.109494 2838 scope.go:117] "RemoveContainer" containerID="3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d"
Feb 13 19:24:32.109933 containerd[1536]: time="2025-02-13T19:24:32.109887134Z" level=error msg="ContainerStatus for \"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d\": not found"
Feb 13 19:24:32.110197 kubelet[2838]: E0213 19:24:32.110166 2838 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d\": not found" containerID="3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d"
Feb 13 19:24:32.110263 kubelet[2838]: I0213 19:24:32.110202 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d"} err="failed to get container status \"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d\": rpc error: code = NotFound desc = an error occurred when try to find container \"3cef296b2b9488ef704217515d5bc7939ecb9ca95f89b2995914ec5da8737e8d\": not found"
Feb 13 19:24:32.428434 kubelet[2838]: I0213 19:24:32.428344 2838 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9407df4b-472a-45a6-82d6-a6c17818fd9b" path="/var/lib/kubelet/pods/9407df4b-472a-45a6-82d6-a6c17818fd9b/volumes"
Feb 13 19:24:32.429565 kubelet[2838]: I0213 19:24:32.429512 2838 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" path="/var/lib/kubelet/pods/94c62ab6-ca3c-4883-8aa8-b9c674c0c22f/volumes"
Feb 13 19:24:32.954828 sshd[4441]: Connection closed by 139.178.89.65 port 53750
Feb 13 19:24:32.955981 sshd-session[4439]: pam_unix(sshd:session): session closed for user core
Feb 13 19:24:32.962530 systemd[1]: sshd@25-10.230.68.30:22-139.178.89.65:53750.service: Deactivated successfully.
Feb 13 19:24:32.965184 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:24:32.965543 systemd[1]: session-26.scope: Consumed 1.064s CPU time, 26.6M memory peak.
Feb 13 19:24:32.966509 systemd-logind[1514]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:24:32.969227 systemd-logind[1514]: Removed session 26.
Feb 13 19:24:33.114174 systemd[1]: Started sshd@26-10.230.68.30:22-139.178.89.65:53766.service - OpenSSH per-connection server daemon (139.178.89.65:53766).
Feb 13 19:24:34.039832 sshd[4597]: Accepted publickey for core from 139.178.89.65 port 53766 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk
Feb 13 19:24:34.041786 sshd-session[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:24:34.050967 systemd-logind[1514]: New session 27 of user core.
Feb 13 19:24:34.057181 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 19:24:35.256814 kubelet[2838]: I0213 19:24:35.251582 2838 topology_manager.go:215] "Topology Admit Handler" podUID="26fbc673-1e0e-4261-b08e-4886e215f49e" podNamespace="kube-system" podName="cilium-2nh4t"
Feb 13 19:24:35.260721 kubelet[2838]: E0213 19:24:35.260675 2838 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9407df4b-472a-45a6-82d6-a6c17818fd9b" containerName="cilium-operator"
Feb 13 19:24:35.260721 kubelet[2838]: E0213 19:24:35.260718 2838 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" containerName="mount-bpf-fs"
Feb 13 19:24:35.260898 kubelet[2838]: E0213 19:24:35.260733 2838 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" containerName="apply-sysctl-overwrites"
Feb 13 19:24:35.260898 kubelet[2838]: E0213 19:24:35.260745 2838 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" containerName="clean-cilium-state"
Feb 13 19:24:35.260898 kubelet[2838]: E0213 19:24:35.260755 2838 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" containerName="cilium-agent"
Feb 13 19:24:35.260898 kubelet[2838]: E0213 19:24:35.260782 2838 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" containerName="mount-cgroup"
Feb 13 19:24:35.265852 kubelet[2838]: I0213 19:24:35.260875 2838 memory_manager.go:354] "RemoveStaleState removing state" podUID="9407df4b-472a-45a6-82d6-a6c17818fd9b" containerName="cilium-operator"
Feb 13 19:24:35.265852 kubelet[2838]: I0213 19:24:35.265848 2838 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c62ab6-ca3c-4883-8aa8-b9c674c0c22f" containerName="cilium-agent"
Feb 13 19:24:35.314811 systemd[1]: Created slice kubepods-burstable-pod26fbc673_1e0e_4261_b08e_4886e215f49e.slice - libcontainer container kubepods-burstable-pod26fbc673_1e0e_4261_b08e_4886e215f49e.slice.
Feb 13 19:24:35.366163 sshd[4599]: Connection closed by 139.178.89.65 port 53766
Feb 13 19:24:35.365136 sshd-session[4597]: pam_unix(sshd:session): session closed for user core
Feb 13 19:24:35.370521 systemd[1]: sshd@26-10.230.68.30:22-139.178.89.65:53766.service: Deactivated successfully.
Feb 13 19:24:35.374866 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:24:35.378441 systemd-logind[1514]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:24:35.380900 systemd-logind[1514]: Removed session 27.
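When kubelet admits cilium-2nh4t, systemd creates a slice whose name encodes the pod's QoS class and UID; the UID's dashes are escaped to underscores because "-" acts as a hierarchy separator in slice names. A sketch of that naming rule, matching the kubepods-burstable-pod...slice units above (illustrative, not kubelet source):

```go
// Sketch: deriving the systemd slice name for a burstable pod from its UID.
package main

import (
	"fmt"
	"strings"
)

func burstablePodSlice(podUID string) string {
	// "-" separates hierarchy levels in slice names, so the pod UID's
	// dashes are rewritten to underscores to keep the name one level deep.
	return fmt.Sprintf("kubepods-burstable-pod%s.slice", strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(burstablePodSlice("26fbc673-1e0e-4261-b08e-4886e215f49e"))
	// Output: kubepods-burstable-pod26fbc673_1e0e_4261_b08e_4886e215f49e.slice
}
```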
Feb 13 19:24:35.452073 kubelet[2838]: I0213 19:24:35.451419 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/26fbc673-1e0e-4261-b08e-4886e215f49e-hubble-tls\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.452073 kubelet[2838]: I0213 19:24:35.451503 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dks4q\" (UniqueName: \"kubernetes.io/projected/26fbc673-1e0e-4261-b08e-4886e215f49e-kube-api-access-dks4q\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.452073 kubelet[2838]: I0213 19:24:35.451549 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/26fbc673-1e0e-4261-b08e-4886e215f49e-cilium-run\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.452073 kubelet[2838]: I0213 19:24:35.451615 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26fbc673-1e0e-4261-b08e-4886e215f49e-etc-cni-netd\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.452073 kubelet[2838]: I0213 19:24:35.451649 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26fbc673-1e0e-4261-b08e-4886e215f49e-lib-modules\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.452073 kubelet[2838]: I0213 19:24:35.451700 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/26fbc673-1e0e-4261-b08e-4886e215f49e-host-proc-sys-net\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.452641 kubelet[2838]: I0213 19:24:35.451730 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26fbc673-1e0e-4261-b08e-4886e215f49e-cilium-config-path\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.452641 kubelet[2838]: I0213 19:24:35.451755 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/26fbc673-1e0e-4261-b08e-4886e215f49e-host-proc-sys-kernel\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.452641 kubelet[2838]: I0213 19:24:35.451812 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/26fbc673-1e0e-4261-b08e-4886e215f49e-cilium-cgroup\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.452641 kubelet[2838]: I0213 19:24:35.451840 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26fbc673-1e0e-4261-b08e-4886e215f49e-xtables-lock\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.452641 kubelet[2838]: I0213 19:24:35.451864 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/26fbc673-1e0e-4261-b08e-4886e215f49e-hostproc\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.452641 kubelet[2838]: I0213 19:24:35.451913 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/26fbc673-1e0e-4261-b08e-4886e215f49e-cni-path\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.452967 kubelet[2838]: I0213 19:24:35.451943 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/26fbc673-1e0e-4261-b08e-4886e215f49e-clustermesh-secrets\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.452967 kubelet[2838]: I0213 19:24:35.451979 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/26fbc673-1e0e-4261-b08e-4886e215f49e-bpf-maps\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.452967 kubelet[2838]: I0213 19:24:35.452007 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/26fbc673-1e0e-4261-b08e-4886e215f49e-cilium-ipsec-secrets\") pod \"cilium-2nh4t\" (UID: \"26fbc673-1e0e-4261-b08e-4886e215f49e\") " pod="kube-system/cilium-2nh4t"
Feb 13 19:24:35.521458 systemd[1]: Started sshd@27-10.230.68.30:22-139.178.89.65:57974.service - OpenSSH per-connection server daemon (139.178.89.65:57974).
Feb 13 19:24:35.623210 containerd[1536]: time="2025-02-13T19:24:35.623086966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2nh4t,Uid:26fbc673-1e0e-4261-b08e-4886e215f49e,Namespace:kube-system,Attempt:0,}"
Feb 13 19:24:35.673484 containerd[1536]: time="2025-02-13T19:24:35.673149370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:24:35.673484 containerd[1536]: time="2025-02-13T19:24:35.673408228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:24:35.674092 containerd[1536]: time="2025-02-13T19:24:35.673490860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:24:35.674092 containerd[1536]: time="2025-02-13T19:24:35.673716977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:24:35.704050 systemd[1]: Started cri-containerd-ca2ae2c0a5fbe8b1cdfb085b70971572c14bf4afc91b20ab6acdcbb6e7354f09.scope - libcontainer container ca2ae2c0a5fbe8b1cdfb085b70971572c14bf4afc91b20ab6acdcbb6e7354f09.
Feb 13 19:24:35.750587 containerd[1536]: time="2025-02-13T19:24:35.750514682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2nh4t,Uid:26fbc673-1e0e-4261-b08e-4886e215f49e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca2ae2c0a5fbe8b1cdfb085b70971572c14bf4afc91b20ab6acdcbb6e7354f09\""
Feb 13 19:24:35.758359 containerd[1536]: time="2025-02-13T19:24:35.758303367Z" level=info msg="CreateContainer within sandbox \"ca2ae2c0a5fbe8b1cdfb085b70971572c14bf4afc91b20ab6acdcbb6e7354f09\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:24:35.773804 containerd[1536]: time="2025-02-13T19:24:35.773315569Z" level=info msg="CreateContainer within sandbox \"ca2ae2c0a5fbe8b1cdfb085b70971572c14bf4afc91b20ab6acdcbb6e7354f09\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"12a04d89b4682858dc359c814be40fd21c3b5ea3bfe66e0278495fa6a743d4b6\""
Feb 13 19:24:35.775799 containerd[1536]: time="2025-02-13T19:24:35.775009345Z" level=info msg="StartContainer for \"12a04d89b4682858dc359c814be40fd21c3b5ea3bfe66e0278495fa6a743d4b6\""
Feb 13 19:24:35.820160 systemd[1]: Started cri-containerd-12a04d89b4682858dc359c814be40fd21c3b5ea3bfe66e0278495fa6a743d4b6.scope - libcontainer container 12a04d89b4682858dc359c814be40fd21c3b5ea3bfe66e0278495fa6a743d4b6.
Feb 13 19:24:35.868919 containerd[1536]: time="2025-02-13T19:24:35.868849847Z" level=info msg="StartContainer for \"12a04d89b4682858dc359c814be40fd21c3b5ea3bfe66e0278495fa6a743d4b6\" returns successfully"
Feb 13 19:24:35.889063 systemd[1]: cri-containerd-12a04d89b4682858dc359c814be40fd21c3b5ea3bfe66e0278495fa6a743d4b6.scope: Deactivated successfully.
Feb 13 19:24:35.954000 containerd[1536]: time="2025-02-13T19:24:35.953821174Z" level=info msg="shim disconnected" id=12a04d89b4682858dc359c814be40fd21c3b5ea3bfe66e0278495fa6a743d4b6 namespace=k8s.io
Feb 13 19:24:35.954000 containerd[1536]: time="2025-02-13T19:24:35.953982035Z" level=warning msg="cleaning up after shim disconnected" id=12a04d89b4682858dc359c814be40fd21c3b5ea3bfe66e0278495fa6a743d4b6 namespace=k8s.io
Feb 13 19:24:35.954000 containerd[1536]: time="2025-02-13T19:24:35.954009175Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:24:36.073094 containerd[1536]: time="2025-02-13T19:24:36.072622097Z" level=info msg="CreateContainer within sandbox \"ca2ae2c0a5fbe8b1cdfb085b70971572c14bf4afc91b20ab6acdcbb6e7354f09\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:24:36.089163 containerd[1536]: time="2025-02-13T19:24:36.089073212Z" level=info msg="CreateContainer within sandbox \"ca2ae2c0a5fbe8b1cdfb085b70971572c14bf4afc91b20ab6acdcbb6e7354f09\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"552e56b279fd10d356093cd83bd28a904e216cb216c5bd3c554f6c83a13c80cb\""
Feb 13 19:24:36.090262 containerd[1536]: time="2025-02-13T19:24:36.090189806Z" level=info msg="StartContainer for \"552e56b279fd10d356093cd83bd28a904e216cb216c5bd3c554f6c83a13c80cb\""
Feb 13 19:24:36.147009 systemd[1]: Started cri-containerd-552e56b279fd10d356093cd83bd28a904e216cb216c5bd3c554f6c83a13c80cb.scope - libcontainer container 552e56b279fd10d356093cd83bd28a904e216cb216c5bd3c554f6c83a13c80cb.
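Each Cilium init container (mount-cgroup, apply-sysctl-overwrites, and so on) follows the same create/start pair against the sandbox; the short-lived scope then deactivates and containerd reaps the shim, producing the "shim disconnected" trio. A hedged sketch of the pair, assuming the same CRI client as above and a hypothetical helper:

```go
// Sketch: the CreateContainer/StartContainer pair seen for each init container.
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func createAndStart(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	sandboxID string, cfg *runtimeapi.ContainerConfig, sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sandboxID, // e.g. ca2ae2c0a5fb... from the log
		Config:        cfg,       // names the container, e.g. mount-cgroup
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return "", err
	}
	// For a short-lived init container the scope deactivates almost
	// immediately after start, which is what the log shows.
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
	return created.ContainerId, err
}
```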
Feb 13 19:24:36.191549 containerd[1536]: time="2025-02-13T19:24:36.191348178Z" level=info msg="StartContainer for \"552e56b279fd10d356093cd83bd28a904e216cb216c5bd3c554f6c83a13c80cb\" returns successfully"
Feb 13 19:24:36.205648 systemd[1]: cri-containerd-552e56b279fd10d356093cd83bd28a904e216cb216c5bd3c554f6c83a13c80cb.scope: Deactivated successfully.
Feb 13 19:24:36.248220 containerd[1536]: time="2025-02-13T19:24:36.247807619Z" level=info msg="shim disconnected" id=552e56b279fd10d356093cd83bd28a904e216cb216c5bd3c554f6c83a13c80cb namespace=k8s.io
Feb 13 19:24:36.248220 containerd[1536]: time="2025-02-13T19:24:36.247880865Z" level=warning msg="cleaning up after shim disconnected" id=552e56b279fd10d356093cd83bd28a904e216cb216c5bd3c554f6c83a13c80cb namespace=k8s.io
Feb 13 19:24:36.248220 containerd[1536]: time="2025-02-13T19:24:36.247896285Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:24:36.275221 containerd[1536]: time="2025-02-13T19:24:36.275145023Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:24:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:24:36.431076 sshd[4611]: Accepted publickey for core from 139.178.89.65 port 57974 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk
Feb 13 19:24:36.434267 sshd-session[4611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:24:36.438959 containerd[1536]: time="2025-02-13T19:24:36.438604186Z" level=info msg="StopPodSandbox for \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\""
Feb 13 19:24:36.439394 containerd[1536]: time="2025-02-13T19:24:36.439289451Z" level=info msg="TearDown network for sandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" successfully"
Feb 13 19:24:36.439682 containerd[1536]: time="2025-02-13T19:24:36.439529975Z" level=info msg="StopPodSandbox for \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" returns successfully"
Feb 13 19:24:36.440951 containerd[1536]: time="2025-02-13T19:24:36.440726421Z" level=info msg="RemovePodSandbox for \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\""
Feb 13 19:24:36.440951 containerd[1536]: time="2025-02-13T19:24:36.440812301Z" level=info msg="Forcibly stopping sandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\""
Feb 13 19:24:36.441272 containerd[1536]: time="2025-02-13T19:24:36.440909935Z" level=info msg="TearDown network for sandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" successfully"
Feb 13 19:24:36.445616 systemd-logind[1514]: New session 28 of user core.
Feb 13 19:24:36.451224 containerd[1536]: time="2025-02-13T19:24:36.451154603Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
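The da67a2fe... sandbox teardown above shows the stop, network teardown, then a forcible remove that shrugs off "not found". A sketch of that stop-then-remove flow under the same NotFound tolerance, with a hypothetical helper and the log's sandbox ID as the example argument:

```go
// Sketch: stop-then-remove for a pod sandbox, tolerating "already gone".
package crisketch

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func stopAndRemoveSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	// StopPodSandbox tears down the sandbox's network first ("TearDown network").
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil &&
		status.Code(err) != codes.NotFound {
		return err
	}
	// RemovePodSandbox then deletes it; NotFound here means "already removed",
	// which the log above downgrades to a warning.
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil &&
		status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}
```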
Feb 13 19:24:36.451349 containerd[1536]: time="2025-02-13T19:24:36.451251355Z" level=info msg="RemovePodSandbox \"da67a2fed07ea66407313403a631d5ff749f7193932a7e883ff9643d904b1535\" returns successfully"
Feb 13 19:24:36.452051 containerd[1536]: time="2025-02-13T19:24:36.451906690Z" level=info msg="StopPodSandbox for \"1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec\""
Feb 13 19:24:36.452416 containerd[1536]: time="2025-02-13T19:24:36.452187619Z" level=info msg="TearDown network for sandbox \"1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec\" successfully"
Feb 13 19:24:36.452416 containerd[1536]: time="2025-02-13T19:24:36.452323810Z" level=info msg="StopPodSandbox for \"1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec\" returns successfully"
Feb 13 19:24:36.452740 containerd[1536]: time="2025-02-13T19:24:36.452675871Z" level=info msg="RemovePodSandbox for \"1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec\""
Feb 13 19:24:36.452740 containerd[1536]: time="2025-02-13T19:24:36.452724329Z" level=info msg="Forcibly stopping sandbox \"1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec\""
Feb 13 19:24:36.452921 containerd[1536]: time="2025-02-13T19:24:36.452841911Z" level=info msg="TearDown network for sandbox \"1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec\" successfully"
Feb 13 19:24:36.453187 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 19:24:36.458403 containerd[1536]: time="2025-02-13T19:24:36.458204834Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:24:36.458403 containerd[1536]: time="2025-02-13T19:24:36.458275542Z" level=info msg="RemovePodSandbox \"1a4740e861530de76b9626b19979ce66862e2df819469c9e26e865e7cd5d48ec\" returns successfully"
Feb 13 19:24:36.721314 kubelet[2838]: E0213 19:24:36.721096 2838 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:24:37.049905 sshd[4786]: Connection closed by 139.178.89.65 port 57974
Feb 13 19:24:37.051292 sshd-session[4611]: pam_unix(sshd:session): session closed for user core
Feb 13 19:24:37.056703 systemd[1]: sshd@27-10.230.68.30:22-139.178.89.65:57974.service: Deactivated successfully.
Feb 13 19:24:37.060322 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:24:37.061912 systemd-logind[1514]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:24:37.063831 systemd-logind[1514]: Removed session 28.
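The "Container runtime network not ready" error above repeats until Cilium's CNI plugin initializes; kubelet learns the state from the runtime's Status RPC, whose NetworkReady condition is false with reason NetworkPluginNotReady. An illustrative probe of that condition, not kubelet's actual code path:

```go
// Sketch: reading the NetworkReady runtime condition via the CRI Status RPC.
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func networkReady(ctx context.Context, rt runtimeapi.RuntimeServiceClient) (bool, string, error) {
	resp, err := rt.Status(ctx, &runtimeapi.StatusRequest{})
	if err != nil {
		return false, "", err
	}
	for _, cond := range resp.Status.Conditions {
		if cond.Type == runtimeapi.NetworkReady {
			// While the CNI plugin is uninitialized this reports
			// Status=false, Reason=NetworkPluginNotReady, as logged above.
			return cond.Status, cond.Reason, nil
		}
	}
	return false, "condition missing", nil
}
```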
Feb 13 19:24:37.073654 containerd[1536]: time="2025-02-13T19:24:37.073572884Z" level=info msg="CreateContainer within sandbox \"ca2ae2c0a5fbe8b1cdfb085b70971572c14bf4afc91b20ab6acdcbb6e7354f09\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:24:37.102823 containerd[1536]: time="2025-02-13T19:24:37.102378733Z" level=info msg="CreateContainer within sandbox \"ca2ae2c0a5fbe8b1cdfb085b70971572c14bf4afc91b20ab6acdcbb6e7354f09\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"041afa72ad6cb365d0c913ea06746975de46e46ec780538fdad4a89e0bae8a79\""
Feb 13 19:24:37.104450 containerd[1536]: time="2025-02-13T19:24:37.104300437Z" level=info msg="StartContainer for \"041afa72ad6cb365d0c913ea06746975de46e46ec780538fdad4a89e0bae8a79\""
Feb 13 19:24:37.158080 systemd[1]: Started cri-containerd-041afa72ad6cb365d0c913ea06746975de46e46ec780538fdad4a89e0bae8a79.scope - libcontainer container 041afa72ad6cb365d0c913ea06746975de46e46ec780538fdad4a89e0bae8a79.
Feb 13 19:24:37.210951 containerd[1536]: time="2025-02-13T19:24:37.210879873Z" level=info msg="StartContainer for \"041afa72ad6cb365d0c913ea06746975de46e46ec780538fdad4a89e0bae8a79\" returns successfully"
Feb 13 19:24:37.211226 systemd[1]: Started sshd@28-10.230.68.30:22-139.178.89.65:57984.service - OpenSSH per-connection server daemon (139.178.89.65:57984).
Feb 13 19:24:37.221328 systemd[1]: cri-containerd-041afa72ad6cb365d0c913ea06746975de46e46ec780538fdad4a89e0bae8a79.scope: Deactivated successfully.
Feb 13 19:24:37.282579 containerd[1536]: time="2025-02-13T19:24:37.282464532Z" level=info msg="shim disconnected" id=041afa72ad6cb365d0c913ea06746975de46e46ec780538fdad4a89e0bae8a79 namespace=k8s.io
Feb 13 19:24:37.282579 containerd[1536]: time="2025-02-13T19:24:37.282570377Z" level=warning msg="cleaning up after shim disconnected" id=041afa72ad6cb365d0c913ea06746975de46e46ec780538fdad4a89e0bae8a79 namespace=k8s.io
Feb 13 19:24:37.282579 containerd[1536]: time="2025-02-13T19:24:37.282587745Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:24:37.566318 systemd[1]: run-containerd-runc-k8s.io-041afa72ad6cb365d0c913ea06746975de46e46ec780538fdad4a89e0bae8a79-runc.ySAzev.mount: Deactivated successfully.
Feb 13 19:24:37.566547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-041afa72ad6cb365d0c913ea06746975de46e46ec780538fdad4a89e0bae8a79-rootfs.mount: Deactivated successfully.
Feb 13 19:24:38.079387 containerd[1536]: time="2025-02-13T19:24:38.079319734Z" level=info msg="CreateContainer within sandbox \"ca2ae2c0a5fbe8b1cdfb085b70971572c14bf4afc91b20ab6acdcbb6e7354f09\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:24:38.111609 containerd[1536]: time="2025-02-13T19:24:38.111478682Z" level=info msg="CreateContainer within sandbox \"ca2ae2c0a5fbe8b1cdfb085b70971572c14bf4afc91b20ab6acdcbb6e7354f09\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c99e2442243a00026bfb1c8700300f5239a2bba98e662afec6bf7aeec31609eb\""
Feb 13 19:24:38.112999 containerd[1536]: time="2025-02-13T19:24:38.112056090Z" level=info msg="StartContainer for \"c99e2442243a00026bfb1c8700300f5239a2bba98e662afec6bf7aeec31609eb\""
Feb 13 19:24:38.137356 sshd[4822]: Accepted publickey for core from 139.178.89.65 port 57984 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk
Feb 13 19:24:38.139949 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:24:38.150821 systemd-logind[1514]: New session 29 of user core.
Feb 13 19:24:38.155978 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 19:24:38.168971 systemd[1]: Started cri-containerd-c99e2442243a00026bfb1c8700300f5239a2bba98e662afec6bf7aeec31609eb.scope - libcontainer container c99e2442243a00026bfb1c8700300f5239a2bba98e662afec6bf7aeec31609eb.
Feb 13 19:24:38.215138 systemd[1]: cri-containerd-c99e2442243a00026bfb1c8700300f5239a2bba98e662afec6bf7aeec31609eb.scope: Deactivated successfully.
Feb 13 19:24:38.217783 containerd[1536]: time="2025-02-13T19:24:38.217681536Z" level=info msg="StartContainer for \"c99e2442243a00026bfb1c8700300f5239a2bba98e662afec6bf7aeec31609eb\" returns successfully"
Feb 13 19:24:38.246357 containerd[1536]: time="2025-02-13T19:24:38.246156498Z" level=info msg="shim disconnected" id=c99e2442243a00026bfb1c8700300f5239a2bba98e662afec6bf7aeec31609eb namespace=k8s.io
Feb 13 19:24:38.246357 containerd[1536]: time="2025-02-13T19:24:38.246293840Z" level=warning msg="cleaning up after shim disconnected" id=c99e2442243a00026bfb1c8700300f5239a2bba98e662afec6bf7aeec31609eb namespace=k8s.io
Feb 13 19:24:38.246357 containerd[1536]: time="2025-02-13T19:24:38.246314858Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:24:38.565846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c99e2442243a00026bfb1c8700300f5239a2bba98e662afec6bf7aeec31609eb-rootfs.mount: Deactivated successfully.
Feb 13 19:24:39.098810 containerd[1536]: time="2025-02-13T19:24:39.097321617Z" level=info msg="CreateContainer within sandbox \"ca2ae2c0a5fbe8b1cdfb085b70971572c14bf4afc91b20ab6acdcbb6e7354f09\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:24:39.136699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2887302069.mount: Deactivated successfully.
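The transient unit var-lib-containerd-tmpmounts-containerd\x2dmount2887302069.mount above shows systemd's path escaping: "/" becomes "-" and a literal "-" in a path component becomes "\x2d". A minimal unescape for exactly this case (the authoritative rules live in systemd-escape; this sketch handles only the escapes seen here):

```go
// Sketch: recovering the mount path from a systemd transient .mount unit name.
package main

import (
	"fmt"
	"strings"
)

func unescapeUnitPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	name = strings.ReplaceAll(name, `\x2d`, "\x00") // protect escaped dashes
	name = strings.ReplaceAll(name, "-", "/")       // "-" separates path components
	return "/" + strings.ReplaceAll(name, "\x00", "-")
}

func main() {
	fmt.Println(unescapeUnitPath(`var-lib-containerd-tmpmounts-containerd\x2dmount2887302069.mount`))
	// Output: /var/lib/containerd/tmpmounts/containerd-mount2887302069
}
```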
Feb 13 19:24:39.139673 containerd[1536]: time="2025-02-13T19:24:39.139392238Z" level=info msg="CreateContainer within sandbox \"ca2ae2c0a5fbe8b1cdfb085b70971572c14bf4afc91b20ab6acdcbb6e7354f09\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"066f185d122494bb5b65fa7544e91a61843c6729bb5944eba7a0ce630f391c45\""
Feb 13 19:24:39.142032 containerd[1536]: time="2025-02-13T19:24:39.141564375Z" level=info msg="StartContainer for \"066f185d122494bb5b65fa7544e91a61843c6729bb5944eba7a0ce630f391c45\""
Feb 13 19:24:39.206249 systemd[1]: Started cri-containerd-066f185d122494bb5b65fa7544e91a61843c6729bb5944eba7a0ce630f391c45.scope - libcontainer container 066f185d122494bb5b65fa7544e91a61843c6729bb5944eba7a0ce630f391c45.
Feb 13 19:24:39.268019 containerd[1536]: time="2025-02-13T19:24:39.267941268Z" level=info msg="StartContainer for \"066f185d122494bb5b65fa7544e91a61843c6729bb5944eba7a0ce630f391c45\" returns successfully"
Feb 13 19:24:39.566135 systemd[1]: run-containerd-runc-k8s.io-066f185d122494bb5b65fa7544e91a61843c6729bb5944eba7a0ce630f391c45-runc.hihhNg.mount: Deactivated successfully.
Feb 13 19:24:40.058818 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 19:24:40.272548 kubelet[2838]: I0213 19:24:40.271509 2838 setters.go:580] "Node became not ready" node="srv-g6z5b.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:24:40Z","lastTransitionTime":"2025-02-13T19:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:24:43.219560 systemd[1]: run-containerd-runc-k8s.io-066f185d122494bb5b65fa7544e91a61843c6729bb5944eba7a0ce630f391c45-runc.MVCbBp.mount: Deactivated successfully.
Feb 13 19:24:43.806185 systemd-networkd[1458]: lxc_health: Link UP
Feb 13 19:24:43.811391 systemd-networkd[1458]: lxc_health: Gained carrier
Feb 13 19:24:45.646996 systemd-networkd[1458]: lxc_health: Gained IPv6LL
Feb 13 19:24:45.673866 kubelet[2838]: I0213 19:24:45.673755 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2nh4t" podStartSLOduration=10.673702023 podStartE2EDuration="10.673702023s" podCreationTimestamp="2025-02-13 19:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:24:40.121950827 +0000 UTC m=+183.895049489" watchObservedRunningTime="2025-02-13 19:24:45.673702023 +0000 UTC m=+189.446800656"
Feb 13 19:24:50.281613 sshd[4869]: Connection closed by 139.178.89.65 port 57984
Feb 13 19:24:50.283088 sshd-session[4822]: pam_unix(sshd:session): session closed for user core
Feb 13 19:24:50.288462 systemd[1]: sshd@28-10.230.68.30:22-139.178.89.65:57984.service: Deactivated successfully.
Feb 13 19:24:50.291980 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 19:24:50.293859 systemd-logind[1514]: Session 29 logged out. Waiting for processes to exit.
Feb 13 19:24:50.295275 systemd-logind[1514]: Removed session 29.
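The pod_startup_latency_tracker line above is internally consistent: podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp. A quick check of the arithmetic using the log's own timestamps:

```go
// Sketch: verifying podStartSLOduration from the two timestamps in the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-02-13T19:24:35Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2025-02-13T19:24:45.673702023Z")
	fmt.Println(observed.Sub(created)) // 10.673702023s, matching podStartSLOduration
}
```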