Jan 15 14:02:35.027549 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 15 14:02:35.027588 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 15 14:02:35.027601 kernel: BIOS-provided physical RAM map:
Jan 15 14:02:35.027618 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 15 14:02:35.027627 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 15 14:02:35.027637 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 15 14:02:35.027648 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 15 14:02:35.027658 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 15 14:02:35.027667 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 15 14:02:35.027677 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 15 14:02:35.027687 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 15 14:02:35.027697 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 15 14:02:35.027720 kernel: NX (Execute Disable) protection: active
Jan 15 14:02:35.027731 kernel: APIC: Static calls initialized
Jan 15 14:02:35.027743 kernel: SMBIOS 2.8 present.
Jan 15 14:02:35.027773 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 15 14:02:35.027797 kernel: Hypervisor detected: KVM
Jan 15 14:02:35.027815 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 15 14:02:35.027826 kernel: kvm-clock: using sched offset of 4952112419 cycles
Jan 15 14:02:35.027838 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 15 14:02:35.027849 kernel: tsc: Detected 2799.998 MHz processor
Jan 15 14:02:35.027860 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 15 14:02:35.027871 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 15 14:02:35.027881 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 15 14:02:35.027892 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 15 14:02:35.027903 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 15 14:02:35.027919 kernel: Using GB pages for direct mapping
Jan 15 14:02:35.027930 kernel: ACPI: Early table checksum verification disabled
Jan 15 14:02:35.027941 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 15 14:02:35.027952 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 15 14:02:35.027963 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 15 14:02:35.027974 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 15 14:02:35.027984 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 15 14:02:35.027995 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 15 14:02:35.028006 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 15 14:02:35.028022 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 15 14:02:35.028033 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 15 14:02:35.028044 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 15 14:02:35.028055 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 15 14:02:35.028066 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 15 14:02:35.028083 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 15 14:02:35.028095 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 15 14:02:35.028111 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 15 14:02:35.028122 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 15 14:02:35.028134 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 15 14:02:35.028152 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 15 14:02:35.028164 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 15 14:02:35.028175 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 15 14:02:35.028186 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 15 14:02:35.028203 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 15 14:02:35.028214 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 15 14:02:35.028226 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 15 14:02:35.028237 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 15 14:02:35.028248 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 15 14:02:35.028259 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 15 14:02:35.028270 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 15 14:02:35.028281 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 15 14:02:35.028292 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 15 14:02:35.028308 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 15 14:02:35.028326 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 15 14:02:35.028338 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 15 14:02:35.028349 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 15 14:02:35.028361 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 15 14:02:35.028372 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 15 14:02:35.028384 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 15 14:02:35.028395 kernel: Zone ranges:
Jan 15 14:02:35.028407 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 15 14:02:35.028418 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 15 14:02:35.028436 kernel: Normal empty
Jan 15 14:02:35.028448 kernel: Movable zone start for each node
Jan 15 14:02:35.028459 kernel: Early memory node ranges
Jan 15 14:02:35.028470 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 15 14:02:35.028482 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 15 14:02:35.028493 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 15 14:02:35.028504 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 15 14:02:35.028515 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 15 14:02:35.028532 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 15 14:02:35.028545 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 15 14:02:35.028562 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 15 14:02:35.028574 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 15 14:02:35.028585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 15 14:02:35.028596 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 15 14:02:35.028608 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 15 14:02:35.028619 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 15 14:02:35.028631 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 15 14:02:35.028642 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 15 14:02:35.028654 kernel: TSC deadline timer available
Jan 15 14:02:35.028670 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 15 14:02:35.028682 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 15 14:02:35.028693 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 15 14:02:35.028704 kernel: Booting paravirtualized kernel on KVM
Jan 15 14:02:35.028716 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 15 14:02:35.028727 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 15 14:02:35.028739 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 15 14:02:35.028750 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 15 14:02:35.028800 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 15 14:02:35.028820 kernel: kvm-guest: PV spinlocks enabled
Jan 15 14:02:35.028831 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 15 14:02:35.028844 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 15 14:02:35.028856 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 15 14:02:35.028867 kernel: random: crng init done
Jan 15 14:02:35.028878 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 15 14:02:35.028890 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 15 14:02:35.028901 kernel: Fallback order for Node 0: 0
Jan 15 14:02:35.028918 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 15 14:02:35.028936 kernel: Policy zone: DMA32
Jan 15 14:02:35.028948 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 15 14:02:35.028960 kernel: software IO TLB: area num 16.
Jan 15 14:02:35.028972 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 194828K reserved, 0K cma-reserved)
Jan 15 14:02:35.028984 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 15 14:02:35.028995 kernel: Kernel/User page tables isolation: enabled
Jan 15 14:02:35.029006 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 15 14:02:35.029024 kernel: ftrace: allocated 149 pages with 4 groups
Jan 15 14:02:35.029036 kernel: Dynamic Preempt: voluntary
Jan 15 14:02:35.029047 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 15 14:02:35.029060 kernel: rcu: RCU event tracing is enabled.
Jan 15 14:02:35.029071 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 15 14:02:35.029083 kernel: Trampoline variant of Tasks RCU enabled.
Jan 15 14:02:35.029131 kernel: Rude variant of Tasks RCU enabled.
Jan 15 14:02:35.029157 kernel: Tracing variant of Tasks RCU enabled.
Jan 15 14:02:35.029169 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 15 14:02:35.029181 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 15 14:02:35.029193 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 15 14:02:35.029205 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 15 14:02:35.029231 kernel: Console: colour VGA+ 80x25
Jan 15 14:02:35.029244 kernel: printk: console [tty0] enabled
Jan 15 14:02:35.029256 kernel: printk: console [ttyS0] enabled
Jan 15 14:02:35.029268 kernel: ACPI: Core revision 20230628
Jan 15 14:02:35.029280 kernel: APIC: Switch to symmetric I/O mode setup
Jan 15 14:02:35.029305 kernel: x2apic enabled
Jan 15 14:02:35.029318 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 15 14:02:35.029335 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 15 14:02:35.029348 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 15 14:02:35.029361 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 15 14:02:35.029373 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 15 14:02:35.029385 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 15 14:02:35.029397 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 15 14:02:35.029408 kernel: Spectre V2 : Mitigation: Retpolines
Jan 15 14:02:35.029420 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 15 14:02:35.029446 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 15 14:02:35.029468 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 15 14:02:35.029479 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 15 14:02:35.029491 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 15 14:02:35.029503 kernel: MDS: Mitigation: Clear CPU buffers
Jan 15 14:02:35.029515 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 15 14:02:35.029527 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 15 14:02:35.029539 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 15 14:02:35.029551 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 15 14:02:35.029563 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 15 14:02:35.029574 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 15 14:02:35.029601 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 15 14:02:35.029613 kernel: Freeing SMP alternatives memory: 32K
Jan 15 14:02:35.029630 kernel: pid_max: default: 32768 minimum: 301
Jan 15 14:02:35.029644 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 15 14:02:35.029656 kernel: landlock: Up and running.
Jan 15 14:02:35.029668 kernel: SELinux: Initializing.
Jan 15 14:02:35.029679 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 15 14:02:35.029691 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 15 14:02:35.029708 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 15 14:02:35.029721 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 15 14:02:35.029733 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 15 14:02:35.029878 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 15 14:02:35.029894 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 15 14:02:35.029906 kernel: signal: max sigframe size: 1776
Jan 15 14:02:35.029919 kernel: rcu: Hierarchical SRCU implementation.
Jan 15 14:02:35.029931 kernel: rcu: Max phase no-delay instances is 400.
Jan 15 14:02:35.029943 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 15 14:02:35.029955 kernel: smp: Bringing up secondary CPUs ...
Jan 15 14:02:35.029967 kernel: smpboot: x86: Booting SMP configuration:
Jan 15 14:02:35.029979 kernel: .... node #0, CPUs: #1
Jan 15 14:02:35.030027 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 15 14:02:35.030041 kernel: smp: Brought up 1 node, 2 CPUs
Jan 15 14:02:35.030053 kernel: smpboot: Max logical packages: 16
Jan 15 14:02:35.030065 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jan 15 14:02:35.030077 kernel: devtmpfs: initialized
Jan 15 14:02:35.030089 kernel: x86/mm: Memory block size: 128MB
Jan 15 14:02:35.030101 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 15 14:02:35.030113 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 15 14:02:35.030125 kernel: pinctrl core: initialized pinctrl subsystem
Jan 15 14:02:35.030154 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 15 14:02:35.030167 kernel: audit: initializing netlink subsys (disabled)
Jan 15 14:02:35.030179 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 15 14:02:35.030191 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 15 14:02:35.030203 kernel: audit: type=2000 audit(1736949752.809:1): state=initialized audit_enabled=0 res=1
Jan 15 14:02:35.030215 kernel: cpuidle: using governor menu
Jan 15 14:02:35.030227 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 15 14:02:35.030239 kernel: dca service started, version 1.12.1
Jan 15 14:02:35.030251 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 15 14:02:35.030278 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 15 14:02:35.030298 kernel: PCI: Using configuration type 1 for base access
Jan 15 14:02:35.030311 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 15 14:02:35.030323 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 15 14:02:35.030335 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 15 14:02:35.030351 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 15 14:02:35.030363 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 15 14:02:35.030376 kernel: ACPI: Added _OSI(Module Device)
Jan 15 14:02:35.030388 kernel: ACPI: Added _OSI(Processor Device)
Jan 15 14:02:35.030414 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 15 14:02:35.030427 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 15 14:02:35.030439 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 15 14:02:35.030451 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 15 14:02:35.030463 kernel: ACPI: Interpreter enabled
Jan 15 14:02:35.030475 kernel: ACPI: PM: (supports S0 S5)
Jan 15 14:02:35.030487 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 15 14:02:35.030499 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 15 14:02:35.030511 kernel: PCI: Using E820 reservations for host bridge windows
Jan 15 14:02:35.030538 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 15 14:02:35.030551 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 15 14:02:35.030898 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 15 14:02:35.031099 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 15 14:02:35.031274 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 15 14:02:35.031293 kernel: PCI host bridge to bus 0000:00
Jan 15 14:02:35.031470 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 15 14:02:35.031650 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 15 14:02:35.031837 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 15 14:02:35.031991 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 15 14:02:35.032147 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 15 14:02:35.032301 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 15 14:02:35.032458 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 15 14:02:35.032732 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 15 14:02:35.032951 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 15 14:02:35.033124 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 15 14:02:35.033295 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 15 14:02:35.033503 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 15 14:02:35.033675 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 15 14:02:35.033944 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 15 14:02:35.034143 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 15 14:02:35.034353 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 15 14:02:35.034575 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 15 14:02:35.038847 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 15 14:02:35.039064 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 15 14:02:35.039255 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 15 14:02:35.039484 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 15 14:02:35.039680 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 15 14:02:35.039930 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 15 14:02:35.040140 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 15 14:02:35.040362 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 15 14:02:35.040557 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 15 14:02:35.040754 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 15 14:02:35.042025 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 15 14:02:35.042209 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 15 14:02:35.042405 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 15 14:02:35.042595 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 15 14:02:35.043826 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 15 14:02:35.044020 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 15 14:02:35.044225 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 15 14:02:35.044471 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 15 14:02:35.044646 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 15 14:02:35.044848 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 15 14:02:35.045020 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 15 14:02:35.045220 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 15 14:02:35.045404 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 15 14:02:35.045655 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 15 14:02:35.047198 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 15 14:02:35.047377 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 15 14:02:35.047563 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 15 14:02:35.047734 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 15 14:02:35.048997 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 15 14:02:35.049204 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 15 14:02:35.049378 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 15 14:02:35.049547 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 15 14:02:35.049715 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 15 14:02:35.051971 kernel: pci_bus 0000:02: extended config space not accessible
Jan 15 14:02:35.052189 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 15 14:02:35.052418 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 15 14:02:35.052603 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 15 14:02:35.052804 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 15 14:02:35.053045 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 15 14:02:35.053224 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 15 14:02:35.053397 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 15 14:02:35.053566 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 15 14:02:35.054809 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 15 14:02:35.055049 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 15 14:02:35.055236 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 15 14:02:35.055424 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 15 14:02:35.055600 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 15 14:02:35.055808 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 15 14:02:35.055984 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 15 14:02:35.056156 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 15 14:02:35.056349 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 15 14:02:35.056518 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 15 14:02:35.056684 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 15 14:02:35.058930 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 15 14:02:35.059114 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 15 14:02:35.059286 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 15 14:02:35.059453 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 15 14:02:35.059622 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 15 14:02:35.061834 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 15 14:02:35.062043 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 15 14:02:35.062222 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 15 14:02:35.062390 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 15 14:02:35.062556 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 15 14:02:35.062575 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 15 14:02:35.062589 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 15 14:02:35.062601 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 15 14:02:35.062639 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 15 14:02:35.062652 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 15 14:02:35.062665 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 15 14:02:35.062677 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 15 14:02:35.062690 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 15 14:02:35.062703 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 15 14:02:35.062715 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 15 14:02:35.062727 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 15 14:02:35.062740 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 15 14:02:35.062812 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 15 14:02:35.062826 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 15 14:02:35.062839 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 15 14:02:35.062851 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 15 14:02:35.062863 kernel: iommu: Default domain type: Translated
Jan 15 14:02:35.062885 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 15 14:02:35.062898 kernel: PCI: Using ACPI for IRQ routing
Jan 15 14:02:35.062910 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 15 14:02:35.062922 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 15 14:02:35.062953 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 15 14:02:35.063126 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 15 14:02:35.063293 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 15 14:02:35.063459 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 15 14:02:35.063478 kernel: vgaarb: loaded
Jan 15 14:02:35.063491 kernel: clocksource: Switched to clocksource kvm-clock
Jan 15 14:02:35.063503 kernel: VFS: Disk quotas dquot_6.6.0
Jan 15 14:02:35.063516 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 15 14:02:35.063548 kernel: pnp: PnP ACPI init
Jan 15 14:02:35.065814 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 15 14:02:35.065838 kernel: pnp: PnP ACPI: found 5 devices
Jan 15 14:02:35.065852 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 15 14:02:35.065864 kernel: NET: Registered PF_INET protocol family
Jan 15 14:02:35.065877 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 15 14:02:35.065890 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 15 14:02:35.065902 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 15 14:02:35.065935 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 15 14:02:35.065949 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 15 14:02:35.065961 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 15 14:02:35.065973 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 15 14:02:35.065985 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 15 14:02:35.065998 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 15 14:02:35.066010 kernel: NET: Registered PF_XDP protocol family
Jan 15 14:02:35.066186 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 15 14:02:35.066361 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 15 14:02:35.066555 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 15 14:02:35.066730 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 15 14:02:35.066952 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 15 14:02:35.067136 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 15 14:02:35.067305 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 15 14:02:35.067499 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 15 14:02:35.067690 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 15 14:02:35.069922 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 15 14:02:35.070094 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 15 14:02:35.070260 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 15 14:02:35.070429 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 15 14:02:35.070635 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 15 14:02:35.070863 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 15 14:02:35.071056 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 15 14:02:35.071312 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 15 14:02:35.071506 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 15 14:02:35.071674 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 15 14:02:35.071869 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 15 14:02:35.072037 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 15 14:02:35.072205 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 15 14:02:35.072383 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 15 14:02:35.072552 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 15 14:02:35.072744 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 15 14:02:35.074952 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 15 14:02:35.075121 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 15 14:02:35.075289 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 15 14:02:35.075455 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 15 14:02:35.075644 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 15 14:02:35.075840 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 15 14:02:35.076012 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 15 14:02:35.076182 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 15 14:02:35.076372 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 15 14:02:35.076559 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 15 14:02:35.076729 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 15 14:02:35.078953 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 15 14:02:35.079124 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 15 14:02:35.079321 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 15 14:02:35.079490 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 15 14:02:35.079656 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 15 14:02:35.080886 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 15 14:02:35.081058 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 15 14:02:35.081248 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 15 14:02:35.081415 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 15 14:02:35.081581 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 15 14:02:35.082837 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 15 14:02:35.083016 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 15 14:02:35.083184 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 15 14:02:35.083439 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 15 14:02:35.083682 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 15 14:02:35.084947 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 15 14:02:35.085136 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 15 14:02:35.085288 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 15 14:02:35.085443 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 15 14:02:35.085596 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 15 14:02:35.086825 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 15 14:02:35.086992 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 15 14:02:35.087151 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 15 14:02:35.087344 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 15 14:02:35.087515 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 15 14:02:35.087690 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 15 14:02:35.087889 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 15 14:02:35.088069 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 15 14:02:35.088228 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 15 14:02:35.088386 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 15 14:02:35.088624 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 15 14:02:35.088824 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 15 14:02:35.089015 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 15 14:02:35.089223 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 15 14:02:35.089385 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 15 14:02:35.089550 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 15 14:02:35.089768 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 15 14:02:35.089963 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 15 14:02:35.090122 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 15 14:02:35.090319 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 15 14:02:35.090483 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 15 14:02:35.090658 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 15 14:02:35.090885 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 15 14:02:35.091083 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 15 14:02:35.091280 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 15 14:02:35.091302 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 15 14:02:35.091316 kernel: PCI: CLS 0 bytes, default 64
Jan 15 14:02:35.091329 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan
15 14:02:35.091343 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 15 14:02:35.091355 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 15 14:02:35.091368 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 15 14:02:35.091390 kernel: Initialise system trusted keyrings Jan 15 14:02:35.091425 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 15 14:02:35.091438 kernel: Key type asymmetric registered Jan 15 14:02:35.091451 kernel: Asymmetric key parser 'x509' registered Jan 15 14:02:35.091464 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 15 14:02:35.091477 kernel: io scheduler mq-deadline registered Jan 15 14:02:35.091489 kernel: io scheduler kyber registered Jan 15 14:02:35.091512 kernel: io scheduler bfq registered Jan 15 14:02:35.091702 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 15 14:02:35.091958 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 15 14:02:35.092152 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:02:35.092322 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 15 14:02:35.092489 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 15 14:02:35.092654 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:02:35.092882 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 15 14:02:35.093051 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 15 14:02:35.093247 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:02:35.093419 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 15 
14:02:35.093663 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 15 14:02:35.093866 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:02:35.094036 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 15 14:02:35.094201 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 15 14:02:35.094391 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:02:35.094560 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 15 14:02:35.094745 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 15 14:02:35.094948 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:02:35.095118 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 15 14:02:35.095284 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 15 14:02:35.095472 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:02:35.095648 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 15 14:02:35.095854 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 15 14:02:35.096021 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 15 14:02:35.096043 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 15 14:02:35.096057 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 15 14:02:35.096090 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 15 14:02:35.096104 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 15 14:02:35.096117 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 15 14:02:35.096130 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 15 14:02:35.096142 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 15 14:02:35.096155 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 15 14:02:35.096332 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 15 14:02:35.096354 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 15 14:02:35.096527 kernel: rtc_cmos 00:03: registered as rtc0 Jan 15 14:02:35.096695 kernel: rtc_cmos 00:03: setting system clock to 2025-01-15T14:02:34 UTC (1736949754) Jan 15 14:02:35.096904 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 15 14:02:35.096925 kernel: intel_pstate: CPU model not supported Jan 15 14:02:35.096938 kernel: NET: Registered PF_INET6 protocol family Jan 15 14:02:35.096952 kernel: Segment Routing with IPv6 Jan 15 14:02:35.096965 kernel: In-situ OAM (IOAM) with IPv6 Jan 15 14:02:35.096977 kernel: NET: Registered PF_PACKET protocol family Jan 15 14:02:35.096990 kernel: Key type dns_resolver registered Jan 15 14:02:35.097023 kernel: IPI shorthand broadcast: enabled Jan 15 14:02:35.097037 kernel: sched_clock: Marking stable (1658004028, 227369027)->(2029094338, -143721283) Jan 15 14:02:35.097049 kernel: registered taskstats version 1 Jan 15 14:02:35.097062 kernel: Loading compiled-in X.509 certificates Jan 15 14:02:35.097075 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 15 14:02:35.097088 kernel: Key type .fscrypt registered Jan 15 14:02:35.097100 kernel: Key type fscrypt-provisioning registered Jan 15 14:02:35.097113 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 15 14:02:35.097140 kernel: ima: Allocated hash algorithm: sha1 Jan 15 14:02:35.097154 kernel: ima: No architecture policies found Jan 15 14:02:35.097166 kernel: clk: Disabling unused clocks Jan 15 14:02:35.097179 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 15 14:02:35.097192 kernel: Write protecting the kernel read-only data: 36864k Jan 15 14:02:35.097205 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 15 14:02:35.097218 kernel: Run /init as init process Jan 15 14:02:35.097231 kernel: with arguments: Jan 15 14:02:35.097244 kernel: /init Jan 15 14:02:35.097256 kernel: with environment: Jan 15 14:02:35.097283 kernel: HOME=/ Jan 15 14:02:35.097296 kernel: TERM=linux Jan 15 14:02:35.097309 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 15 14:02:35.097325 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 15 14:02:35.097341 systemd[1]: Detected virtualization kvm. Jan 15 14:02:35.097355 systemd[1]: Detected architecture x86-64. Jan 15 14:02:35.097368 systemd[1]: Running in initrd. Jan 15 14:02:35.097396 systemd[1]: No hostname configured, using default hostname. Jan 15 14:02:35.097410 systemd[1]: Hostname set to . Jan 15 14:02:35.097424 systemd[1]: Initializing machine ID from VM UUID. Jan 15 14:02:35.097438 systemd[1]: Queued start job for default target initrd.target. Jan 15 14:02:35.097451 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 14:02:35.097465 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 15 14:02:35.097479 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 15 14:02:35.097493 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 14:02:35.097524 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 15 14:02:35.097540 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 15 14:02:35.097555 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 15 14:02:35.097569 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 15 14:02:35.097583 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 14:02:35.097597 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 14:02:35.097610 systemd[1]: Reached target paths.target - Path Units. Jan 15 14:02:35.097648 systemd[1]: Reached target slices.target - Slice Units. Jan 15 14:02:35.097663 systemd[1]: Reached target swap.target - Swaps. Jan 15 14:02:35.097677 systemd[1]: Reached target timers.target - Timer Units. Jan 15 14:02:35.097691 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 14:02:35.097705 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 14:02:35.097718 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 14:02:35.097733 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 15 14:02:35.097746 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 14:02:35.097808 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 14:02:35.097843 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 15 14:02:35.097857 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 14:02:35.097871 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 15 14:02:35.097885 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 14:02:35.097899 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 15 14:02:35.097912 systemd[1]: Starting systemd-fsck-usr.service... Jan 15 14:02:35.097926 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 14:02:35.097940 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 14:02:35.097969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 14:02:35.098024 systemd-journald[201]: Collecting audit messages is disabled. Jan 15 14:02:35.098055 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 15 14:02:35.098070 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 14:02:35.098101 systemd[1]: Finished systemd-fsck-usr.service. Jan 15 14:02:35.098117 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 14:02:35.098131 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 14:02:35.098145 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 15 14:02:35.098171 kernel: Bridge firewalling registered Jan 15 14:02:35.098186 systemd-journald[201]: Journal started Jan 15 14:02:35.098211 systemd-journald[201]: Runtime Journal (/run/log/journal/316d49fb899f434ab5672937d9d962b8) is 4.7M, max 38.0M, 33.2M free. Jan 15 14:02:35.038017 systemd-modules-load[202]: Inserted module 'overlay' Jan 15 14:02:35.143909 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 15 14:02:35.086492 systemd-modules-load[202]: Inserted module 'br_netfilter' Jan 15 14:02:35.146170 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 14:02:35.147193 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 14:02:35.162068 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 14:02:35.171128 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 14:02:35.174300 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 14:02:35.183710 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 14:02:35.188545 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 14:02:35.204534 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 14:02:35.205889 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 14:02:35.209326 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 14:02:35.216993 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 15 14:02:35.219969 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 15 14:02:35.234443 dracut-cmdline[237]: dracut-dracut-053 Jan 15 14:02:35.242103 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 15 14:02:35.270945 systemd-resolved[238]: Positive Trust Anchors: Jan 15 14:02:35.270967 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 14:02:35.271009 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 14:02:35.274552 systemd-resolved[238]: Defaulting to hostname 'linux'. Jan 15 14:02:35.276371 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 14:02:35.281445 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 14:02:35.359835 kernel: SCSI subsystem initialized Jan 15 14:02:35.371837 kernel: Loading iSCSI transport class v2.0-870. 
Jan 15 14:02:35.384798 kernel: iscsi: registered transport (tcp) Jan 15 14:02:35.411107 kernel: iscsi: registered transport (qla4xxx) Jan 15 14:02:35.411196 kernel: QLogic iSCSI HBA Driver Jan 15 14:02:35.468116 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 15 14:02:35.482032 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 15 14:02:35.512051 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 15 14:02:35.512185 kernel: device-mapper: uevent: version 1.0.3 Jan 15 14:02:35.512208 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 15 14:02:35.560822 kernel: raid6: sse2x4 gen() 14034 MB/s Jan 15 14:02:35.578796 kernel: raid6: sse2x2 gen() 10038 MB/s Jan 15 14:02:35.597320 kernel: raid6: sse2x1 gen() 10324 MB/s Jan 15 14:02:35.597377 kernel: raid6: using algorithm sse2x4 gen() 14034 MB/s Jan 15 14:02:35.616313 kernel: raid6: .... xor() 7962 MB/s, rmw enabled Jan 15 14:02:35.616385 kernel: raid6: using ssse3x2 recovery algorithm Jan 15 14:02:35.641822 kernel: xor: automatically using best checksumming function avx Jan 15 14:02:35.825819 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 15 14:02:35.841444 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 15 14:02:35.851000 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 14:02:35.883306 systemd-udevd[421]: Using default interface naming scheme 'v255'. Jan 15 14:02:35.891197 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 14:02:35.906239 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 15 14:02:35.938552 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation Jan 15 14:02:35.980737 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 15 14:02:35.988070 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 14:02:36.118180 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 14:02:36.126999 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 15 14:02:36.157870 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 15 14:02:36.159483 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 14:02:36.161056 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 14:02:36.163377 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 14:02:36.176211 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 15 14:02:36.204937 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 15 14:02:36.276858 kernel: cryptd: max_cpu_qlen set to 1000 Jan 15 14:02:36.288232 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 15 14:02:36.329345 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 15 14:02:36.329594 kernel: ACPI: bus type USB registered Jan 15 14:02:36.329616 kernel: usbcore: registered new interface driver usbfs Jan 15 14:02:36.329642 kernel: usbcore: registered new interface driver hub Jan 15 14:02:36.329660 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 15 14:02:36.329677 kernel: GPT:17805311 != 125829119 Jan 15 14:02:36.329693 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 15 14:02:36.329744 kernel: GPT:17805311 != 125829119 Jan 15 14:02:36.329798 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 15 14:02:36.329818 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 14:02:36.329834 kernel: usbcore: registered new device driver usb Jan 15 14:02:36.329851 kernel: AVX version of gcm_enc/dec engaged. 
Jan 15 14:02:36.329868 kernel: AES CTR mode by8 optimization enabled Jan 15 14:02:36.315277 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 14:02:36.315462 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 14:02:36.334249 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 14:02:36.335152 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 14:02:36.335430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 14:02:36.337927 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 14:02:36.349101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 14:02:36.413788 kernel: libata version 3.00 loaded. Jan 15 14:02:36.427799 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 15 14:02:36.459942 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 15 14:02:36.460173 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 15 14:02:36.460422 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (469) Jan 15 14:02:36.460443 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (481) Jan 15 14:02:36.460488 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 15 14:02:36.460689 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 15 14:02:36.460939 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 15 14:02:36.461162 kernel: hub 1-0:1.0: USB hub found Jan 15 14:02:36.461445 kernel: hub 1-0:1.0: 4 ports detected Jan 15 14:02:36.461644 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 15 14:02:36.463923 kernel: hub 2-0:1.0: USB hub found Jan 15 14:02:36.464190 kernel: hub 2-0:1.0: 4 ports detected Jan 15 14:02:36.457155 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 15 14:02:36.514648 kernel: ahci 0000:00:1f.2: version 3.0 Jan 15 14:02:36.533744 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 15 14:02:36.533851 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 15 14:02:36.534086 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 15 14:02:36.534314 kernel: scsi host0: ahci Jan 15 14:02:36.534533 kernel: scsi host1: ahci Jan 15 14:02:36.535591 kernel: scsi host2: ahci Jan 15 14:02:36.535869 kernel: scsi host3: ahci Jan 15 14:02:36.536068 kernel: scsi host4: ahci Jan 15 14:02:36.536337 kernel: scsi host5: ahci Jan 15 14:02:36.536607 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jan 15 14:02:36.536631 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jan 15 14:02:36.536649 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jan 15 14:02:36.536666 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jan 15 14:02:36.536684 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jan 15 14:02:36.536701 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jan 15 14:02:36.520155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 14:02:36.541902 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 15 14:02:36.549216 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 14:02:36.554813 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jan 15 14:02:36.555651 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 15 14:02:36.563993 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 15 14:02:36.566147 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 14:02:36.580794 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 14:02:36.582840 disk-uuid[566]: Primary Header is updated. Jan 15 14:02:36.582840 disk-uuid[566]: Secondary Entries is updated. Jan 15 14:02:36.582840 disk-uuid[566]: Secondary Header is updated. Jan 15 14:02:36.595391 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 14:02:36.711894 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 15 14:02:36.845895 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 15 14:02:36.847782 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 15 14:02:36.849914 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 15 14:02:36.849951 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 15 14:02:36.852682 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 15 14:02:36.854783 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 15 14:02:36.856791 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 15 14:02:36.867590 kernel: usbcore: registered new interface driver usbhid Jan 15 14:02:36.867636 kernel: usbhid: USB HID core driver Jan 15 14:02:36.874525 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 15 14:02:36.874613 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 15 14:02:37.610848 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 14:02:37.612247 disk-uuid[572]: The operation has completed successfully. 
Jan 15 14:02:37.660867 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 15 14:02:37.661078 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 14:02:37.684035 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 15 14:02:37.691286 sh[589]: Success Jan 15 14:02:37.711255 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 15 14:02:37.775742 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 15 14:02:37.787291 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 15 14:02:37.789258 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 15 14:02:37.823968 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 15 14:02:37.824038 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 15 14:02:37.824095 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 15 14:02:37.826364 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 14:02:37.829431 kernel: BTRFS info (device dm-0): using free space tree Jan 15 14:02:37.840190 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 15 14:02:37.841791 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 14:02:37.850126 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 15 14:02:37.853977 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 15 14:02:37.875296 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 14:02:37.875390 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 14:02:37.875411 kernel: BTRFS info (device vda6): using free space tree Jan 15 14:02:37.882805 kernel: BTRFS info (device vda6): auto enabling async discard Jan 15 14:02:37.899798 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 15 14:02:37.899822 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 15 14:02:37.908109 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 14:02:37.914009 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 15 14:02:38.073306 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 14:02:38.099484 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 14:02:38.133626 systemd-networkd[772]: lo: Link UP Jan 15 14:02:38.133641 systemd-networkd[772]: lo: Gained carrier Jan 15 14:02:38.140316 systemd-networkd[772]: Enumeration completed Jan 15 14:02:38.140507 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 14:02:38.141426 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 14:02:38.141432 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 14:02:38.141436 systemd[1]: Reached target network.target - Network. Jan 15 14:02:38.144305 systemd-networkd[772]: eth0: Link UP Jan 15 14:02:38.144311 systemd-networkd[772]: eth0: Gained carrier Jan 15 14:02:38.144322 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 15 14:02:38.149861 ignition[690]: Ignition 2.19.0 Jan 15 14:02:38.149876 ignition[690]: Stage: fetch-offline Jan 15 14:02:38.149945 ignition[690]: no configs at "/usr/lib/ignition/base.d" Jan 15 14:02:38.149965 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 14:02:38.150106 ignition[690]: parsed url from cmdline: "" Jan 15 14:02:38.153539 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 14:02:38.150113 ignition[690]: no config URL provided Jan 15 14:02:38.150134 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 14:02:38.150150 ignition[690]: no config at "/usr/lib/ignition/user.ign" Jan 15 14:02:38.150159 ignition[690]: failed to fetch config: resource requires networking Jan 15 14:02:38.151316 ignition[690]: Ignition finished successfully Jan 15 14:02:38.170249 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 15 14:02:38.172463 systemd-networkd[772]: eth0: DHCPv4 address 10.230.66.178/30, gateway 10.230.66.177 acquired from 10.230.66.177 Jan 15 14:02:38.191858 ignition[779]: Ignition 2.19.0 Jan 15 14:02:38.191886 ignition[779]: Stage: fetch Jan 15 14:02:38.192305 ignition[779]: no configs at "/usr/lib/ignition/base.d" Jan 15 14:02:38.192328 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 15 14:02:38.192587 ignition[779]: parsed url from cmdline: "" Jan 15 14:02:38.192598 ignition[779]: no config URL provided Jan 15 14:02:38.192609 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 14:02:38.192627 ignition[779]: no config at "/usr/lib/ignition/user.ign" Jan 15 14:02:38.196467 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 15 14:02:38.196577 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 15 14:02:38.196788 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... 
Jan 15 14:02:38.214175 ignition[779]: GET result: OK
Jan 15 14:02:38.214824 ignition[779]: parsing config with SHA512: f459077dc4827e0a33f3cd9ea7b96c688985248ab58a3def7fc88d75937fda377cd374fafa1e0cbf44082e9a62b8ccea5ebb4809849ea23b8d754cd8575e2061
Jan 15 14:02:38.223267 unknown[779]: fetched base config from "system"
Jan 15 14:02:38.223292 unknown[779]: fetched base config from "system"
Jan 15 14:02:38.224173 ignition[779]: fetch: fetch complete
Jan 15 14:02:38.223302 unknown[779]: fetched user config from "openstack"
Jan 15 14:02:38.224181 ignition[779]: fetch: fetch passed
Jan 15 14:02:38.224271 ignition[779]: Ignition finished successfully
Jan 15 14:02:38.227387 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 15 14:02:38.246045 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 15 14:02:38.284167 ignition[786]: Ignition 2.19.0
Jan 15 14:02:38.284191 ignition[786]: Stage: kargs
Jan 15 14:02:38.287393 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 15 14:02:38.284496 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 15 14:02:38.284518 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 15 14:02:38.285752 ignition[786]: kargs: kargs passed
Jan 15 14:02:38.285849 ignition[786]: Ignition finished successfully
Jan 15 14:02:38.299216 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 15 14:02:38.314845 ignition[792]: Ignition 2.19.0
Jan 15 14:02:38.314869 ignition[792]: Stage: disks
Jan 15 14:02:38.315129 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jan 15 14:02:38.315157 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 15 14:02:38.317522 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 15 14:02:38.316250 ignition[792]: disks: disks passed
Jan 15 14:02:38.319705 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 15 14:02:38.316328 ignition[792]: Ignition finished successfully
Jan 15 14:02:38.320484 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 15 14:02:38.321769 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 15 14:02:38.323248 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 15 14:02:38.324569 systemd[1]: Reached target basic.target - Basic System.
Jan 15 14:02:38.335033 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 15 14:02:38.353987 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 15 14:02:38.356622 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 15 14:02:38.363984 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 15 14:02:38.490792 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 15 14:02:38.492307 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 15 14:02:38.493662 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 15 14:02:38.499918 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 14:02:38.506858 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 15 14:02:38.507963 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 15 14:02:38.508900 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 15 14:02:38.512075 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 15 14:02:38.512120 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 14:02:38.521326 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 15 14:02:38.526781 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810)
Jan 15 14:02:38.530453 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 15 14:02:38.530495 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 15 14:02:38.531985 kernel: BTRFS info (device vda6): using free space tree
Jan 15 14:02:38.535977 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 15 14:02:38.540944 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 15 14:02:38.543822 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 14:02:38.612474 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Jan 15 14:02:38.620112 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Jan 15 14:02:38.628043 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Jan 15 14:02:38.634497 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 15 14:02:38.741486 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 15 14:02:38.746919 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 15 14:02:38.748970 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 15 14:02:38.764794 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 15 14:02:38.796199 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 15 14:02:38.800794 ignition[931]: INFO : Ignition 2.19.0
Jan 15 14:02:38.802439 ignition[931]: INFO : Stage: mount
Jan 15 14:02:38.802439 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 14:02:38.802439 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 15 14:02:38.805072 ignition[931]: INFO : mount: mount passed
Jan 15 14:02:38.805072 ignition[931]: INFO : Ignition finished successfully
Jan 15 14:02:38.804926 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 15 14:02:38.821341 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 15 14:02:39.846041 systemd-networkd[772]: eth0: Gained IPv6LL
Jan 15 14:02:41.352518 systemd-networkd[772]: eth0: Ignoring DHCPv6 address 2a02:1348:179:90ac:24:19ff:fee6:42b2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:90ac:24:19ff:fee6:42b2/64 assigned by NDisc.
Jan 15 14:02:41.352536 systemd-networkd[772]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 15 14:02:45.695631 coreos-metadata[812]: Jan 15 14:02:45.695 WARN failed to locate config-drive, using the metadata service API instead
Jan 15 14:02:45.718741 coreos-metadata[812]: Jan 15 14:02:45.718 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 15 14:02:45.735016 coreos-metadata[812]: Jan 15 14:02:45.734 INFO Fetch successful
Jan 15 14:02:45.735957 coreos-metadata[812]: Jan 15 14:02:45.735 INFO wrote hostname srv-6ftsm.gb1.brightbox.com to /sysroot/etc/hostname
Jan 15 14:02:45.737745 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 15 14:02:45.737931 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 15 14:02:45.745881 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 15 14:02:45.763012 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 14:02:45.779377 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (948)
Jan 15 14:02:45.787782 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 15 14:02:45.787844 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 15 14:02:45.789548 kernel: BTRFS info (device vda6): using free space tree
Jan 15 14:02:45.796799 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 15 14:02:45.802582 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 14:02:45.862320 ignition[966]: INFO : Ignition 2.19.0
Jan 15 14:02:45.864700 ignition[966]: INFO : Stage: files
Jan 15 14:02:45.864700 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 14:02:45.864700 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 15 14:02:45.868446 ignition[966]: DEBUG : files: compiled without relabeling support, skipping
Jan 15 14:02:45.880015 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 15 14:02:45.880015 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 15 14:02:45.884332 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 15 14:02:45.885534 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 15 14:02:45.886553 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 15 14:02:45.886250 unknown[966]: wrote ssh authorized keys file for user: core
Jan 15 14:02:45.888809 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 15 14:02:45.888809 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 15 14:02:46.011089 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 15 14:02:46.373081 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 15 14:02:46.373081 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 15 14:02:46.375693 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 15 14:02:47.079265 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 15 14:02:47.648799 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 15 14:02:47.650157 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 15 14:02:47.650157 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 15 14:02:47.650157 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 14:02:47.650157 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 14:02:47.650157 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 14:02:47.650157 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 14:02:47.650157 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 14:02:47.650157 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 14:02:47.658742 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 14:02:47.658742 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 14:02:47.658742 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 15 14:02:47.658742 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 15 14:02:47.658742 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 15 14:02:47.658742 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 15 14:02:48.268192 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 15 14:02:50.532273 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 15 14:02:50.532273 ignition[966]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 15 14:02:50.537710 ignition[966]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 14:02:50.537710 ignition[966]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 14:02:50.537710 ignition[966]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 15 14:02:50.537710 ignition[966]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 15 14:02:50.537710 ignition[966]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 15 14:02:50.537710 ignition[966]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 14:02:50.545419 ignition[966]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 14:02:50.545419 ignition[966]: INFO : files: files passed
Jan 15 14:02:50.545419 ignition[966]: INFO : Ignition finished successfully
Jan 15 14:02:50.544933 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 15 14:02:50.559257 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 15 14:02:50.563101 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 15 14:02:50.602199 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 15 14:02:50.602494 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 15 14:02:50.613484 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 14:02:50.615141 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 14:02:50.616279 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 14:02:50.618423 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 15 14:02:50.619623 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 15 14:02:50.626009 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 15 14:02:50.675198 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 15 14:02:50.675363 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 15 14:02:50.677583 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 15 14:02:50.678657 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 15 14:02:50.680324 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 15 14:02:50.684969 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 15 14:02:50.705117 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 15 14:02:50.713028 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 15 14:02:50.727538 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 15 14:02:50.728461 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 14:02:50.730216 systemd[1]: Stopped target timers.target - Timer Units.
Jan 15 14:02:50.731654 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 15 14:02:50.731826 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 15 14:02:50.733636 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 15 14:02:50.734599 systemd[1]: Stopped target basic.target - Basic System.
Jan 15 14:02:50.736019 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 15 14:02:50.737274 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 14:02:50.738717 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 15 14:02:50.740301 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 15 14:02:50.741946 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 14:02:50.743550 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 15 14:02:50.745059 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 15 14:02:50.746674 systemd[1]: Stopped target swap.target - Swaps.
Jan 15 14:02:50.748104 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 15 14:02:50.748299 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 14:02:50.750017 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 15 14:02:50.750973 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 14:02:50.752394 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 15 14:02:50.752634 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 14:02:50.754095 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 15 14:02:50.754301 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 15 14:02:50.756354 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 15 14:02:50.756561 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 15 14:02:50.758293 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 15 14:02:50.758482 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 15 14:02:50.767136 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 15 14:02:50.767935 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 15 14:02:50.768234 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 14:02:50.774003 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 15 14:02:50.774710 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 15 14:02:50.774964 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 14:02:50.784036 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 15 14:02:50.784284 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 14:02:50.797806 ignition[1019]: INFO : Ignition 2.19.0
Jan 15 14:02:50.797806 ignition[1019]: INFO : Stage: umount
Jan 15 14:02:50.797806 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 14:02:50.797806 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 15 14:02:50.797806 ignition[1019]: INFO : umount: umount passed
Jan 15 14:02:50.797806 ignition[1019]: INFO : Ignition finished successfully
Jan 15 14:02:50.796065 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 15 14:02:50.796253 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 15 14:02:50.802436 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 15 14:02:50.802588 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 15 14:02:50.806315 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 15 14:02:50.806428 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 15 14:02:50.807591 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 15 14:02:50.807686 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 15 14:02:50.809991 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 15 14:02:50.810070 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 15 14:02:50.813200 systemd[1]: Stopped target network.target - Network.
Jan 15 14:02:50.814549 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 15 14:02:50.814623 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 15 14:02:50.817087 systemd[1]: Stopped target paths.target - Path Units.
Jan 15 14:02:50.818363 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 15 14:02:50.821827 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 14:02:50.823108 systemd[1]: Stopped target slices.target - Slice Units.
Jan 15 14:02:50.825357 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 15 14:02:50.826104 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 15 14:02:50.826199 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 15 14:02:50.829166 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 15 14:02:50.829259 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 15 14:02:50.830703 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 15 14:02:50.830836 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 15 14:02:50.832339 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 15 14:02:50.832410 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 15 14:02:50.834024 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 15 14:02:50.836563 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 15 14:02:50.839952 systemd-networkd[772]: eth0: DHCPv6 lease lost
Jan 15 14:02:50.843976 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 15 14:02:50.845016 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 15 14:02:50.845142 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 15 14:02:50.849196 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 15 14:02:50.850113 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 15 14:02:50.852443 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 15 14:02:50.853910 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 15 14:02:50.860548 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 15 14:02:50.860668 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 14:02:50.862237 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 15 14:02:50.862362 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 15 14:02:50.868932 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 15 14:02:50.869648 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 15 14:02:50.869754 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 15 14:02:50.870551 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 15 14:02:50.870619 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 15 14:02:50.872344 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 15 14:02:50.872414 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 15 14:02:50.874090 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 15 14:02:50.874158 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 14:02:50.878955 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 14:02:50.894304 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 15 14:02:50.894966 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 14:02:50.896889 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 15 14:02:50.897018 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 15 14:02:50.899283 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 15 14:02:50.899383 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 15 14:02:50.901018 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 15 14:02:50.901088 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 14:02:50.902616 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 15 14:02:50.902710 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 15 14:02:50.904829 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 15 14:02:50.904903 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 15 14:02:50.906268 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 15 14:02:50.906365 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 14:02:50.917011 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 15 14:02:50.919130 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 15 14:02:50.919210 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 14:02:50.920824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 14:02:50.920948 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 14:02:50.928583 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 15 14:02:50.928732 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 15 14:02:50.931269 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 15 14:02:50.937985 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 15 14:02:50.948632 systemd[1]: Switching root.
Jan 15 14:02:50.994887 systemd-journald[201]: Journal stopped
Jan 15 14:02:52.476335 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Jan 15 14:02:52.476476 kernel: SELinux: policy capability network_peer_controls=1
Jan 15 14:02:52.476544 kernel: SELinux: policy capability open_perms=1
Jan 15 14:02:52.476574 kernel: SELinux: policy capability extended_socket_class=1
Jan 15 14:02:52.476594 kernel: SELinux: policy capability always_check_network=0
Jan 15 14:02:52.476630 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 15 14:02:52.476651 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 15 14:02:52.476675 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 15 14:02:52.476699 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 15 14:02:52.476717 kernel: audit: type=1403 audit(1736949771.270:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 15 14:02:52.476737 systemd[1]: Successfully loaded SELinux policy in 54.770ms.
Jan 15 14:02:52.476779 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.438ms.
Jan 15 14:02:52.476806 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 15 14:02:52.476826 systemd[1]: Detected virtualization kvm.
Jan 15 14:02:52.476890 systemd[1]: Detected architecture x86-64.
Jan 15 14:02:52.476912 systemd[1]: Detected first boot.
Jan 15 14:02:52.476931 systemd[1]: Hostname set to .
Jan 15 14:02:52.476951 systemd[1]: Initializing machine ID from VM UUID.
Jan 15 14:02:52.476970 zram_generator::config[1061]: No configuration found.
Jan 15 14:02:52.477012 systemd[1]: Populated /etc with preset unit settings.
Jan 15 14:02:52.477035 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 15 14:02:52.477054 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 15 14:02:52.477090 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 15 14:02:52.477112 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 15 14:02:52.477132 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 15 14:02:52.477151 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 15 14:02:52.477171 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 15 14:02:52.477190 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 15 14:02:52.477209 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 15 14:02:52.477228 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 15 14:02:52.477262 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 15 14:02:52.477283 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 14:02:52.477303 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 14:02:52.477323 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 15 14:02:52.477342 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 15 14:02:52.477361 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 15 14:02:52.477380 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 15 14:02:52.477399 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 15 14:02:52.477419 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 14:02:52.477452 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 15 14:02:52.477475 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 15 14:02:52.477504 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 15 14:02:52.477525 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 15 14:02:52.477544 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 14:02:52.477564 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 15 14:02:52.477599 systemd[1]: Reached target slices.target - Slice Units.
Jan 15 14:02:52.477621 systemd[1]: Reached target swap.target - Swaps.
Jan 15 14:02:52.477640 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 15 14:02:52.477659 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 15 14:02:52.477679 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 14:02:52.479815 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 15 14:02:52.479901 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 14:02:52.479961 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 15 14:02:52.480004 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 15 14:02:52.480040 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 15 14:02:52.480060 systemd[1]: Mounting media.mount - External Media Directory...
Jan 15 14:02:52.480086 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 15 14:02:52.480108 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 15 14:02:52.480127 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 15 14:02:52.480146 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 15 14:02:52.480196 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 15 14:02:52.480220 systemd[1]: Reached target machines.target - Containers.
Jan 15 14:02:52.480240 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 15 14:02:52.480259 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 14:02:52.480278 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 15 14:02:52.480297 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 15 14:02:52.480317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 15 14:02:52.480343 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 15 14:02:52.480388 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 15 14:02:52.480432 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 15 14:02:52.480454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 15 14:02:52.480475 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 15 14:02:52.480507 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 15 14:02:52.480528 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 15 14:02:52.480557 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 15 14:02:52.480577 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 15 14:02:52.480597 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 15 14:02:52.480642 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 15 14:02:52.480665 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 15 14:02:52.480685 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 15 14:02:52.480705 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 15 14:02:52.480725 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 15 14:02:52.480743 systemd[1]: Stopped verity-setup.service.
Jan 15 14:02:52.480778 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 15 14:02:52.482791 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 15 14:02:52.482827 kernel: fuse: init (API version 7.39)
Jan 15 14:02:52.482885 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 15 14:02:52.482949 systemd-journald[1150]: Collecting audit messages is disabled.
Jan 15 14:02:52.482984 systemd[1]: Mounted media.mount - External Media Directory.
Jan 15 14:02:52.483036 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 15 14:02:52.483060 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 15 14:02:52.483093 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 15 14:02:52.483112 systemd-journald[1150]: Journal started
Jan 15 14:02:52.483156 systemd-journald[1150]: Runtime Journal (/run/log/journal/316d49fb899f434ab5672937d9d962b8) is 4.7M, max 38.0M, 33.2M free.
Jan 15 14:02:52.084260 systemd[1]: Queued start job for default target multi-user.target.
Jan 15 14:02:52.103935 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 15 14:02:52.104640 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 15 14:02:52.487984 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 15 14:02:52.491896 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 15 14:02:52.494222 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 14:02:52.495728 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 15 14:02:52.503479 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 15 14:02:52.504742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 15 14:02:52.505026 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 15 14:02:52.506313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 15 14:02:52.506521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 15 14:02:52.507607 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 15 14:02:52.508696 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 15 14:02:52.511719 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 15 14:02:52.512052 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 15 14:02:52.532030 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 15 14:02:52.544278 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 15 14:02:52.565171 kernel: loop: module loaded
Jan 15 14:02:52.572791 kernel: ACPI: bus type drm_connector registered
Jan 15 14:02:52.568711 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 15 14:02:52.577852 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 15 14:02:52.579883 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 15 14:02:52.579950 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 15 14:02:52.583420 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 15 14:02:52.595609 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 15 14:02:52.600914 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 15 14:02:52.602734 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 14:02:52.606930 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 15 14:02:52.614557 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 15 14:02:52.615414 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 15 14:02:52.618958 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 15 14:02:52.622012 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 14:02:52.629938 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 15 14:02:52.636989 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 15 14:02:52.642397 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 15 14:02:52.643851 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 15 14:02:52.645165 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 15 14:02:52.646861 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 15 14:02:52.647976 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 15 14:02:52.650290 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 15 14:02:52.652373 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 15 14:02:52.662283 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 15 14:02:52.691414 systemd-journald[1150]: Time spent on flushing to /var/log/journal/316d49fb899f434ab5672937d9d962b8 is 236.335ms for 1141 entries.
Jan 15 14:02:52.691414 systemd-journald[1150]: System Journal (/var/log/journal/316d49fb899f434ab5672937d9d962b8) is 8.0M, max 584.8M, 576.8M free.
Jan 15 14:02:52.960396 systemd-journald[1150]: Received client request to flush runtime journal.
Jan 15 14:02:52.960502 kernel: loop0: detected capacity change from 0 to 211296
Jan 15 14:02:52.960608 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 15 14:02:52.799231 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 15 14:02:52.801100 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 15 14:02:52.811002 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 15 14:02:52.965135 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 15 14:02:52.972262 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 15 14:02:52.974457 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 14:02:52.976529 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 15 14:02:52.996926 kernel: loop1: detected capacity change from 0 to 142488
Jan 15 14:02:53.032110 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 14:02:53.046183 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 15 14:02:53.108142 kernel: loop2: detected capacity change from 0 to 140768
Jan 15 14:02:53.140344 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 15 14:02:53.146976 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 15 14:02:53.155140 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 15 14:02:53.171329 kernel: loop3: detected capacity change from 0 to 8
Jan 15 14:02:53.210830 kernel: loop4: detected capacity change from 0 to 211296
Jan 15 14:02:53.236878 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Jan 15 14:02:53.236910 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Jan 15 14:02:53.260810 kernel: loop5: detected capacity change from 0 to 142488
Jan 15 14:02:53.268432 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 14:02:53.304788 kernel: loop6: detected capacity change from 0 to 140768
Jan 15 14:02:53.352506 kernel: loop7: detected capacity change from 0 to 8
Jan 15 14:02:53.368488 (sd-merge)[1218]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 15 14:02:53.369588 (sd-merge)[1218]: Merged extensions into '/usr'.
Jan 15 14:02:53.378253 systemd[1]: Reloading requested from client PID 1192 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 15 14:02:53.378343 systemd[1]: Reloading...
Jan 15 14:02:53.643815 zram_generator::config[1246]: No configuration found.
Jan 15 14:02:53.728559 ldconfig[1187]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 15 14:02:53.891330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 15 14:02:53.958048 systemd[1]: Reloading finished in 578 ms.
Jan 15 14:02:53.997791 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 15 14:02:54.002807 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 15 14:02:54.013154 systemd[1]: Starting ensure-sysext.service...
Jan 15 14:02:54.020994 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 15 14:02:54.033077 systemd[1]: Reloading requested from client PID 1302 ('systemctl') (unit ensure-sysext.service)...
Jan 15 14:02:54.033100 systemd[1]: Reloading...
Jan 15 14:02:54.179803 zram_generator::config[1336]: No configuration found.
Jan 15 14:02:54.201590 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 15 14:02:54.202237 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 15 14:02:54.206139 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 15 14:02:54.206552 systemd-tmpfiles[1303]: ACLs are not supported, ignoring.
Jan 15 14:02:54.206684 systemd-tmpfiles[1303]: ACLs are not supported, ignoring.
Jan 15 14:02:54.219878 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot.
Jan 15 14:02:54.219899 systemd-tmpfiles[1303]: Skipping /boot
Jan 15 14:02:54.252558 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot.
Jan 15 14:02:54.252580 systemd-tmpfiles[1303]: Skipping /boot
Jan 15 14:02:54.395660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 15 14:02:54.460069 systemd[1]: Reloading finished in 426 ms.
Jan 15 14:02:54.485569 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 15 14:02:54.492482 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 14:02:54.507030 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 15 14:02:54.512040 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 15 14:02:54.516164 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 15 14:02:54.523014 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 15 14:02:54.530170 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 14:02:54.536005 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 15 14:02:54.546231 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 15 14:02:54.546601 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 14:02:54.555254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 15 14:02:54.559130 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 15 14:02:54.572632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 15 14:02:54.574469 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 14:02:54.574660 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 15 14:02:54.581287 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 15 14:02:54.581581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 14:02:54.582897 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 14:02:54.594238 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 15 14:02:54.601930 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 15 14:02:54.610428 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 15 14:02:54.611633 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 15 14:02:54.637236 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 15 14:02:54.638405 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 15 14:02:54.638628 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 15 14:02:54.643700 systemd[1]: Finished ensure-sysext.service.
Jan 15 14:02:54.649366 systemd-udevd[1393]: Using default interface naming scheme 'v255'.
Jan 15 14:02:54.665055 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 15 14:02:54.685375 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 15 14:02:54.687303 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 15 14:02:54.690992 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 15 14:02:54.691352 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 15 14:02:54.692753 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 15 14:02:54.694544 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 15 14:02:54.695864 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 15 14:02:54.696175 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 15 14:02:54.706218 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 15 14:02:54.710000 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 15 14:02:54.710261 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 15 14:02:54.712314 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 15 14:02:54.712491 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 15 14:02:54.723105 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 15 14:02:54.724404 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 15 14:02:54.724929 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 14:02:54.736577 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 15 14:02:54.786675 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 15 14:02:54.802698 augenrules[1438]: No rules
Jan 15 14:02:54.805937 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 15 14:02:54.813809 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 15 14:02:54.978378 systemd-networkd[1423]: lo: Link UP
Jan 15 14:02:54.978981 systemd-networkd[1423]: lo: Gained carrier
Jan 15 14:02:54.980475 systemd-networkd[1423]: Enumeration completed
Jan 15 14:02:54.985058 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 15 14:02:54.991743 systemd-resolved[1392]: Positive Trust Anchors:
Jan 15 14:02:54.991789 systemd-resolved[1392]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 15 14:02:54.991833 systemd-resolved[1392]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 15 14:02:54.993054 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 15 14:02:54.993975 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 15 14:02:54.994939 systemd[1]: Reached target time-set.target - System Time Set.
Jan 15 14:02:55.002407 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 15 14:02:55.007526 systemd-resolved[1392]: Using system hostname 'srv-6ftsm.gb1.brightbox.com'.
Jan 15 14:02:55.011400 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 15 14:02:55.012399 systemd[1]: Reached target network.target - Network.
Jan 15 14:02:55.013058 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 15 14:02:55.054823 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1420)
Jan 15 14:02:55.149840 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 14:02:55.149855 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 15 14:02:55.155154 systemd-networkd[1423]: eth0: Link UP
Jan 15 14:02:55.155430 systemd-networkd[1423]: eth0: Gained carrier
Jan 15 14:02:55.155576 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 14:02:55.182788 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 15 14:02:55.183926 systemd-networkd[1423]: eth0: DHCPv4 address 10.230.66.178/30, gateway 10.230.66.177 acquired from 10.230.66.177
Jan 15 14:02:55.186149 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection.
Jan 15 14:02:55.196253 kernel: ACPI: button: Power Button [PWRF]
Jan 15 14:02:55.201647 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 15 14:02:55.211464 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 15 14:02:55.214811 kernel: mousedev: PS/2 mouse device common for all mice
Jan 15 14:02:55.254561 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 15 14:02:55.310380 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 15 14:02:55.320018 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 15 14:02:55.320074 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 15 14:02:55.320497 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 15 14:02:55.364918 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 14:02:55.484068 systemd-timesyncd[1406]: Contacted time server 217.114.59.3:123 (0.flatcar.pool.ntp.org).
Jan 15 14:02:55.498488 systemd-timesyncd[1406]: Initial clock synchronization to Wed 2025-01-15 14:02:55.476320 UTC.
Jan 15 14:02:55.612266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 14:02:55.615438 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 15 14:02:55.624098 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 15 14:02:55.656455 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 15 14:02:55.698628 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 15 14:02:55.700065 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 15 14:02:55.701216 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 15 14:02:55.702198 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 15 14:02:55.703050 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 15 14:02:55.704605 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 15 14:02:55.705511 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 15 14:02:55.706373 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 15 14:02:55.707155 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 15 14:02:55.707206 systemd[1]: Reached target paths.target - Path Units.
Jan 15 14:02:55.707869 systemd[1]: Reached target timers.target - Timer Units.
Jan 15 14:02:55.710355 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 15 14:02:55.713611 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 15 14:02:55.719346 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 15 14:02:55.722225 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 15 14:02:55.723834 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 15 14:02:55.724677 systemd[1]: Reached target sockets.target - Socket Units.
Jan 15 14:02:55.725352 systemd[1]: Reached target basic.target - Basic System.
Jan 15 14:02:55.726081 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 15 14:02:55.726130 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 15 14:02:55.734999 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 15 14:02:55.741001 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 15 14:02:55.743660 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 15 14:02:55.744038 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 15 14:02:55.747914 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 15 14:02:55.758085 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 15 14:02:55.758928 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 15 14:02:55.763970 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 15 14:02:55.773860 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 15 14:02:55.777263 jq[1483]: false
Jan 15 14:02:55.785052 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 15 14:02:55.794139 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 15 14:02:55.810311 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 15 14:02:55.814663 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 15 14:02:55.815567 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 15 14:02:55.825077 systemd[1]: Starting update-engine.service - Update Engine...
Jan 15 14:02:55.830927 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 15 14:02:55.835397 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 15 14:02:55.842477 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 15 14:02:55.843632 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 15 14:02:55.850393 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 15 14:02:55.850669 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 15 14:02:55.905179 jq[1492]: true
Jan 15 14:02:55.951367 update_engine[1490]: I20250115 14:02:55.944957 1490 main.cc:92] Flatcar Update Engine starting
Jan 15 14:02:55.958069 extend-filesystems[1484]: Found loop4
Jan 15 14:02:55.958069 extend-filesystems[1484]: Found loop5
Jan 15 14:02:55.958069 extend-filesystems[1484]: Found loop6
Jan 15 14:02:55.958069 extend-filesystems[1484]: Found loop7
Jan 15 14:02:55.958069 extend-filesystems[1484]: Found vda
Jan 15 14:02:55.965965 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 15 14:02:55.963734 dbus-daemon[1482]: [system] SELinux support is enabled
Jan 15 14:02:55.989880 extend-filesystems[1484]: Found vda1
Jan 15 14:02:55.989880 extend-filesystems[1484]: Found vda2
Jan 15 14:02:55.989880 extend-filesystems[1484]: Found vda3
Jan 15 14:02:55.989880 extend-filesystems[1484]: Found usr
Jan 15 14:02:55.989880 extend-filesystems[1484]: Found vda4
Jan 15 14:02:55.989880 extend-filesystems[1484]: Found vda6
Jan 15 14:02:55.989880 extend-filesystems[1484]: Found vda7
Jan 15 14:02:55.989880 extend-filesystems[1484]: Found vda9
Jan 15 14:02:55.989880 extend-filesystems[1484]: Checking size of /dev/vda9
Jan 15 14:02:55.966999 (ntainerd)[1507]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 15 14:02:55.985157 dbus-daemon[1482]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1423 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 15 14:02:56.093058 tar[1496]: linux-amd64/helm
Jan 15 14:02:56.093483 update_engine[1490]: I20250115 14:02:56.004473 1490 update_check_scheduler.cc:74] Next update check in 6m49s
Jan 15 14:02:56.093567 extend-filesystems[1484]: Resized partition /dev/vda9
Jan 15 14:02:56.094392 jq[1508]: true
Jan 15 14:02:55.980651 systemd[1]: motdgen.service: Deactivated successfully.
Jan 15 14:02:55.990865 dbus-daemon[1482]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 15 14:02:56.106441 extend-filesystems[1522]: resize2fs 1.47.1 (20-May-2024)
Jan 15 14:02:55.982000 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 15 14:02:55.984383 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 15 14:02:55.984444 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 15 14:02:55.985339 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 15 14:02:55.985367 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 15 14:02:56.002347 systemd[1]: Started update-engine.service - Update Engine.
Jan 15 14:02:56.037039 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 15 14:02:56.100100 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 15 14:02:56.195514 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1421)
Jan 15 14:02:56.195822 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 15 14:02:56.251239 systemd-logind[1489]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 15 14:02:56.252050 systemd-logind[1489]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 15 14:02:56.267168 systemd-logind[1489]: New seat seat0.
Jan 15 14:02:56.270051 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 15 14:02:56.396279 bash[1540]: Updated "/home/core/.ssh/authorized_keys"
Jan 15 14:02:56.414993 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 15 14:02:56.435206 systemd[1]: Starting sshkeys.service...
Jan 15 14:02:56.469393 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 15 14:02:56.477234 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 15 14:02:56.698549 locksmithd[1524]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 14:02:56.699986 dbus-daemon[1482]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 15 14:02:56.700229 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 15 14:02:56.703289 dbus-daemon[1482]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1523 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 15 14:02:56.715251 systemd[1]: Starting polkit.service - Authorization Manager... Jan 15 14:02:56.736736 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 15 14:02:56.775705 polkitd[1550]: Started polkitd version 121 Jan 15 14:02:56.780391 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 15 14:02:56.780391 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 15 14:02:56.780391 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 15 14:02:56.792399 extend-filesystems[1484]: Resized filesystem in /dev/vda9 Jan 15 14:02:56.784744 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 14:02:56.787275 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 14:02:56.799476 polkitd[1550]: Loading rules from directory /etc/polkit-1/rules.d Jan 15 14:02:56.799602 polkitd[1550]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 15 14:02:56.805160 polkitd[1550]: Finished loading, compiling and executing 2 rules Jan 15 14:02:56.817542 dbus-daemon[1482]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 15 14:02:56.817826 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 15 14:02:56.818655 polkitd[1550]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 15 14:02:56.863140 systemd-hostnamed[1523]: Hostname set to (static) Jan 15 14:02:57.067101 systemd-networkd[1423]: eth0: Gained IPv6LL Jan 15 14:02:57.085973 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 15 14:02:57.089838 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 14:02:57.101385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 14:02:57.104259 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 15 14:02:57.178964 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 15 14:02:57.190636 containerd[1507]: time="2025-01-15T14:02:57.188177977Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 15 14:02:57.295637 containerd[1507]: time="2025-01-15T14:02:57.295537676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 15 14:02:57.308209 containerd[1507]: time="2025-01-15T14:02:57.306696936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 15 14:02:57.308209 containerd[1507]: time="2025-01-15T14:02:57.306890150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 15 14:02:57.308209 containerd[1507]: time="2025-01-15T14:02:57.306926287Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 15 14:02:57.308209 containerd[1507]: time="2025-01-15T14:02:57.307686144Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 15 14:02:57.308209 containerd[1507]: time="2025-01-15T14:02:57.307748590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 15 14:02:57.308209 containerd[1507]: time="2025-01-15T14:02:57.307987491Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 14:02:57.308209 containerd[1507]: time="2025-01-15T14:02:57.308030747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 15 14:02:57.309120 containerd[1507]: time="2025-01-15T14:02:57.309087080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 14:02:57.309868 containerd[1507]: time="2025-01-15T14:02:57.309841667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 15 14:02:57.311821 containerd[1507]: time="2025-01-15T14:02:57.309972085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 14:02:57.311821 containerd[1507]: time="2025-01-15T14:02:57.310020488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 15 14:02:57.311821 containerd[1507]: time="2025-01-15T14:02:57.310182297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 15 14:02:57.311821 containerd[1507]: time="2025-01-15T14:02:57.310753136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 15 14:02:57.312190 containerd[1507]: time="2025-01-15T14:02:57.312160117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 14:02:57.312279 containerd[1507]: time="2025-01-15T14:02:57.312257555Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 15 14:02:57.312538 containerd[1507]: time="2025-01-15T14:02:57.312512729Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 15 14:02:57.313056 containerd[1507]: time="2025-01-15T14:02:57.313029027Z" level=info msg="metadata content store policy set" policy=shared Jan 15 14:02:57.320734 containerd[1507]: time="2025-01-15T14:02:57.320701182Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 15 14:02:57.320950 containerd[1507]: time="2025-01-15T14:02:57.320924730Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 15 14:02:57.321105 containerd[1507]: time="2025-01-15T14:02:57.321080665Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 15 14:02:57.323906 containerd[1507]: time="2025-01-15T14:02:57.322809594Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 15 14:02:57.323906 containerd[1507]: time="2025-01-15T14:02:57.322856817Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 15 14:02:57.323906 containerd[1507]: time="2025-01-15T14:02:57.323149660Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 15 14:02:57.323906 containerd[1507]: time="2025-01-15T14:02:57.323610335Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 15 14:02:57.324777 containerd[1507]: time="2025-01-15T14:02:57.324487826Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 15 14:02:57.324777 containerd[1507]: time="2025-01-15T14:02:57.324520126Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 15 14:02:57.324777 containerd[1507]: time="2025-01-15T14:02:57.324543759Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 15 14:02:57.324777 containerd[1507]: time="2025-01-15T14:02:57.324565308Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 15 14:02:57.324777 containerd[1507]: time="2025-01-15T14:02:57.324586156Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 15 14:02:57.324777 containerd[1507]: time="2025-01-15T14:02:57.324612738Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 15 14:02:57.324777 containerd[1507]: time="2025-01-15T14:02:57.324633171Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 15 14:02:57.324777 containerd[1507]: time="2025-01-15T14:02:57.324654729Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 15 14:02:57.324777 containerd[1507]: time="2025-01-15T14:02:57.324674456Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 15 14:02:57.324777 containerd[1507]: time="2025-01-15T14:02:57.324692952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 15 14:02:57.324777 containerd[1507]: time="2025-01-15T14:02:57.324738578Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 15 14:02:57.327036 containerd[1507]: time="2025-01-15T14:02:57.325433686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.327036 containerd[1507]: time="2025-01-15T14:02:57.325471454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.327036 containerd[1507]: time="2025-01-15T14:02:57.325492873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.327036 containerd[1507]: time="2025-01-15T14:02:57.325513344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.327036 containerd[1507]: time="2025-01-15T14:02:57.325532578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.327036 containerd[1507]: time="2025-01-15T14:02:57.325551591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.327036 containerd[1507]: time="2025-01-15T14:02:57.325570979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.327036 containerd[1507]: time="2025-01-15T14:02:57.325590495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.327036 containerd[1507]: time="2025-01-15T14:02:57.325609710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 15 14:02:57.327036 containerd[1507]: time="2025-01-15T14:02:57.325630722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.327036 containerd[1507]: time="2025-01-15T14:02:57.325648320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.327036 containerd[1507]: time="2025-01-15T14:02:57.325667263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.327036 containerd[1507]: time="2025-01-15T14:02:57.325687590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.329173 containerd[1507]: time="2025-01-15T14:02:57.325736832Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 15 14:02:57.329173 containerd[1507]: time="2025-01-15T14:02:57.327858684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.332053 containerd[1507]: time="2025-01-15T14:02:57.332016979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.332120 containerd[1507]: time="2025-01-15T14:02:57.332075257Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 15 14:02:57.332219 containerd[1507]: time="2025-01-15T14:02:57.332193906Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 15 14:02:57.332382 containerd[1507]: time="2025-01-15T14:02:57.332342634Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 15 14:02:57.332382 containerd[1507]: time="2025-01-15T14:02:57.332376654Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 15 14:02:57.332486 containerd[1507]: time="2025-01-15T14:02:57.332399176Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 15 14:02:57.332486 containerd[1507]: time="2025-01-15T14:02:57.332415438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 15 14:02:57.332486 containerd[1507]: time="2025-01-15T14:02:57.332434801Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 15 14:02:57.332486 containerd[1507]: time="2025-01-15T14:02:57.332459554Z" level=info msg="NRI interface is disabled by configuration." Jan 15 14:02:57.332486 containerd[1507]: time="2025-01-15T14:02:57.332478368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 15 14:02:57.335498 containerd[1507]: time="2025-01-15T14:02:57.334992987Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 15 14:02:57.335498 containerd[1507]: time="2025-01-15T14:02:57.335086897Z" level=info msg="Connect containerd service" Jan 15 14:02:57.335498 containerd[1507]: time="2025-01-15T14:02:57.335145690Z" level=info msg="using legacy CRI server" Jan 15 14:02:57.335498 containerd[1507]: time="2025-01-15T14:02:57.335164302Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 14:02:57.335498 containerd[1507]: time="2025-01-15T14:02:57.335338958Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 15 14:02:57.336974 containerd[1507]: time="2025-01-15T14:02:57.336913631Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 14:02:57.340485 containerd[1507]: time="2025-01-15T14:02:57.339904794Z" level=info msg="Start subscribing containerd event" Jan 15 14:02:57.340485 containerd[1507]: time="2025-01-15T14:02:57.339984815Z" level=info msg="Start recovering state" Jan 15 14:02:57.340485 containerd[1507]: time="2025-01-15T14:02:57.340096992Z" level=info msg="Start event monitor" Jan 15 14:02:57.340485 containerd[1507]: time="2025-01-15T14:02:57.340131357Z" level=info msg="Start 
snapshots syncer" Jan 15 14:02:57.340485 containerd[1507]: time="2025-01-15T14:02:57.340148645Z" level=info msg="Start cni network conf syncer for default" Jan 15 14:02:57.340485 containerd[1507]: time="2025-01-15T14:02:57.340161807Z" level=info msg="Start streaming server" Jan 15 14:02:57.342437 containerd[1507]: time="2025-01-15T14:02:57.342408631Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 14:02:57.342627 containerd[1507]: time="2025-01-15T14:02:57.342602995Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 15 14:02:57.349007 containerd[1507]: time="2025-01-15T14:02:57.347691352Z" level=info msg="containerd successfully booted in 0.166601s" Jan 15 14:02:57.347974 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 14:02:57.391786 sshd_keygen[1504]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 14:02:57.444938 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 15 14:02:57.572465 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 14:02:57.593088 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 14:02:57.593411 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 14:02:57.603879 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 15 14:02:57.634479 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 14:02:57.654735 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 14:02:57.666127 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 15 14:02:57.667344 systemd[1]: Reached target getty.target - Login Prompts. Jan 15 14:02:57.765705 tar[1496]: linux-amd64/LICENSE Jan 15 14:02:57.768798 tar[1496]: linux-amd64/README.md Jan 15 14:02:57.786899 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 15 14:02:58.614918 systemd-networkd[1423]: eth0: Ignoring DHCPv6 address 2a02:1348:179:90ac:24:19ff:fee6:42b2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:90ac:24:19ff:fee6:42b2/64 assigned by NDisc. Jan 15 14:02:58.614946 systemd-networkd[1423]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 15 14:02:58.799123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 14:02:58.817583 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 14:02:59.732393 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 14:02:59.752859 systemd[1]: Started sshd@0-10.230.66.178:22-147.75.109.163:33542.service - OpenSSH per-connection server daemon (147.75.109.163:33542). Jan 15 14:02:59.820306 kubelet[1607]: E0115 14:02:59.819987 1607 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 14:02:59.823490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 14:02:59.823859 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 14:02:59.824608 systemd[1]: kubelet.service: Consumed 1.642s CPU time. Jan 15 14:03:00.675965 sshd[1616]: Accepted publickey for core from 147.75.109.163 port 33542 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 14:03:00.678989 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:03:00.700599 systemd-logind[1489]: New session 1 of user core. Jan 15 14:03:00.703660 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 15 14:03:00.721460 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 14:03:00.746197 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 14:03:00.762441 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 14:03:00.769226 (systemd)[1622]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 14:03:00.923690 systemd[1622]: Queued start job for default target default.target. Jan 15 14:03:00.930880 systemd[1622]: Created slice app.slice - User Application Slice. Jan 15 14:03:00.930924 systemd[1622]: Reached target paths.target - Paths. Jan 15 14:03:00.930948 systemd[1622]: Reached target timers.target - Timers. Jan 15 14:03:00.933680 systemd[1622]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 14:03:00.962469 systemd[1622]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 14:03:00.962668 systemd[1622]: Reached target sockets.target - Sockets. Jan 15 14:03:00.962695 systemd[1622]: Reached target basic.target - Basic System. Jan 15 14:03:00.962793 systemd[1622]: Reached target default.target - Main User Target. Jan 15 14:03:00.962865 systemd[1622]: Startup finished in 180ms. Jan 15 14:03:00.963172 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 14:03:00.976115 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 14:03:01.610305 systemd[1]: Started sshd@1-10.230.66.178:22-147.75.109.163:33544.service - OpenSSH per-connection server daemon (147.75.109.163:33544). Jan 15 14:03:02.505488 sshd[1633]: Accepted publickey for core from 147.75.109.163 port 33544 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 14:03:02.507675 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:03:02.515472 systemd-logind[1489]: New session 2 of user core. 
Jan 15 14:03:02.523058 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 15 14:03:02.735539 login[1597]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 15 14:03:02.738466 login[1596]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 15 14:03:02.743148 systemd-logind[1489]: New session 3 of user core. Jan 15 14:03:02.753155 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 15 14:03:02.756728 systemd-logind[1489]: New session 4 of user core. Jan 15 14:03:02.764048 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 14:03:03.124169 sshd[1633]: pam_unix(sshd:session): session closed for user core Jan 15 14:03:03.130033 systemd-logind[1489]: Session 2 logged out. Waiting for processes to exit. Jan 15 14:03:03.131194 systemd[1]: sshd@1-10.230.66.178:22-147.75.109.163:33544.service: Deactivated successfully. Jan 15 14:03:03.134387 systemd[1]: session-2.scope: Deactivated successfully. Jan 15 14:03:03.136231 systemd-logind[1489]: Removed session 2. 
Jan 15 14:03:03.181705 coreos-metadata[1481]: Jan 15 14:03:03.181 WARN failed to locate config-drive, using the metadata service API instead Jan 15 14:03:03.207666 coreos-metadata[1481]: Jan 15 14:03:03.207 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 15 14:03:03.213412 coreos-metadata[1481]: Jan 15 14:03:03.213 INFO Fetch failed with 404: resource not found Jan 15 14:03:03.213412 coreos-metadata[1481]: Jan 15 14:03:03.213 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 15 14:03:03.214573 coreos-metadata[1481]: Jan 15 14:03:03.214 INFO Fetch successful Jan 15 14:03:03.214677 coreos-metadata[1481]: Jan 15 14:03:03.214 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 15 14:03:03.226366 coreos-metadata[1481]: Jan 15 14:03:03.226 INFO Fetch successful Jan 15 14:03:03.226366 coreos-metadata[1481]: Jan 15 14:03:03.226 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 15 14:03:03.240524 coreos-metadata[1481]: Jan 15 14:03:03.240 INFO Fetch successful Jan 15 14:03:03.240524 coreos-metadata[1481]: Jan 15 14:03:03.240 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 15 14:03:03.258870 coreos-metadata[1481]: Jan 15 14:03:03.258 INFO Fetch successful Jan 15 14:03:03.258870 coreos-metadata[1481]: Jan 15 14:03:03.258 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 15 14:03:03.276377 coreos-metadata[1481]: Jan 15 14:03:03.276 INFO Fetch successful Jan 15 14:03:03.287270 systemd[1]: Started sshd@2-10.230.66.178:22-147.75.109.163:33556.service - OpenSSH per-connection server daemon (147.75.109.163:33556). Jan 15 14:03:03.303023 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 15 14:03:03.303911 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 15 14:03:03.686300 coreos-metadata[1543]: Jan 15 14:03:03.686 WARN failed to locate config-drive, using the metadata service API instead Jan 15 14:03:03.708290 coreos-metadata[1543]: Jan 15 14:03:03.708 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 15 14:03:03.736813 coreos-metadata[1543]: Jan 15 14:03:03.736 INFO Fetch successful Jan 15 14:03:03.737025 coreos-metadata[1543]: Jan 15 14:03:03.736 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 15 14:03:03.780470 coreos-metadata[1543]: Jan 15 14:03:03.780 INFO Fetch successful Jan 15 14:03:03.783317 unknown[1543]: wrote ssh authorized keys file for user: core Jan 15 14:03:03.821810 update-ssh-keys[1677]: Updated "/home/core/.ssh/authorized_keys" Jan 15 14:03:03.822957 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 15 14:03:03.826635 systemd[1]: Finished sshkeys.service. Jan 15 14:03:03.829798 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 14:03:03.831908 systemd[1]: Startup finished in 1.829s (kernel) + 16.514s (initrd) + 12.615s (userspace) = 30.959s. Jan 15 14:03:04.187358 sshd[1670]: Accepted publickey for core from 147.75.109.163 port 33556 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 14:03:04.190696 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:03:04.200330 systemd-logind[1489]: New session 5 of user core. Jan 15 14:03:04.211072 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 14:03:04.812006 sshd[1670]: pam_unix(sshd:session): session closed for user core Jan 15 14:03:04.816309 systemd-logind[1489]: Session 5 logged out. Waiting for processes to exit. Jan 15 14:03:04.816708 systemd[1]: sshd@2-10.230.66.178:22-147.75.109.163:33556.service: Deactivated successfully. Jan 15 14:03:04.819302 systemd[1]: session-5.scope: Deactivated successfully. 
Jan 15 14:03:04.821366 systemd-logind[1489]: Removed session 5. Jan 15 14:03:10.007273 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 14:03:10.015111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 14:03:10.365886 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 14:03:10.379311 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 14:03:10.488298 kubelet[1693]: E0115 14:03:10.488192 1693 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 14:03:10.493959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 14:03:10.494264 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 14:03:14.965896 systemd[1]: Started sshd@3-10.230.66.178:22-147.75.109.163:52566.service - OpenSSH per-connection server daemon (147.75.109.163:52566). Jan 15 14:03:15.865313 sshd[1702]: Accepted publickey for core from 147.75.109.163 port 52566 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 14:03:15.867451 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:03:15.873879 systemd-logind[1489]: New session 6 of user core. Jan 15 14:03:15.879973 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 15 14:03:16.486642 sshd[1702]: pam_unix(sshd:session): session closed for user core Jan 15 14:03:16.492252 systemd[1]: sshd@3-10.230.66.178:22-147.75.109.163:52566.service: Deactivated successfully. Jan 15 14:03:16.494873 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 15 14:03:16.496140 systemd-logind[1489]: Session 6 logged out. Waiting for processes to exit. Jan 15 14:03:16.497376 systemd-logind[1489]: Removed session 6. Jan 15 14:03:16.655666 systemd[1]: Started sshd@4-10.230.66.178:22-147.75.109.163:52578.service - OpenSSH per-connection server daemon (147.75.109.163:52578). Jan 15 14:03:17.538697 sshd[1709]: Accepted publickey for core from 147.75.109.163 port 52578 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 14:03:17.541122 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:03:17.551302 systemd-logind[1489]: New session 7 of user core. Jan 15 14:03:17.562061 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 14:03:18.197740 sshd[1709]: pam_unix(sshd:session): session closed for user core Jan 15 14:03:18.201909 systemd-logind[1489]: Session 7 logged out. Waiting for processes to exit. Jan 15 14:03:18.202460 systemd[1]: sshd@4-10.230.66.178:22-147.75.109.163:52578.service: Deactivated successfully. Jan 15 14:03:18.204564 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 14:03:18.206496 systemd-logind[1489]: Removed session 7. Jan 15 14:03:18.365203 systemd[1]: Started sshd@5-10.230.66.178:22-147.75.109.163:60090.service - OpenSSH per-connection server daemon (147.75.109.163:60090). Jan 15 14:03:19.250848 sshd[1716]: Accepted publickey for core from 147.75.109.163 port 60090 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 14:03:19.253052 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:03:19.261419 systemd-logind[1489]: New session 8 of user core. Jan 15 14:03:19.268107 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 15 14:03:19.874947 sshd[1716]: pam_unix(sshd:session): session closed for user core Jan 15 14:03:19.879370 systemd[1]: sshd@5-10.230.66.178:22-147.75.109.163:60090.service: Deactivated successfully. 
Jan 15 14:03:19.881686 systemd[1]: session-8.scope: Deactivated successfully.
Jan 15 14:03:19.884007 systemd-logind[1489]: Session 8 logged out. Waiting for processes to exit.
Jan 15 14:03:19.885463 systemd-logind[1489]: Removed session 8.
Jan 15 14:03:20.038275 systemd[1]: Started sshd@6-10.230.66.178:22-147.75.109.163:60106.service - OpenSSH per-connection server daemon (147.75.109.163:60106).
Jan 15 14:03:20.506893 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 15 14:03:20.524267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 14:03:20.832139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 14:03:20.832686 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 14:03:20.917985 sshd[1723]: Accepted publickey for core from 147.75.109.163 port 60106 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:03:20.920684 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:03:20.928925 systemd-logind[1489]: New session 9 of user core.
Jan 15 14:03:20.931115 kubelet[1733]: E0115 14:03:20.931049 1733 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 14:03:20.936025 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 15 14:03:20.937500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 14:03:20.937790 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 14:03:21.408678 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 15 14:03:21.409230 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 15 14:03:21.426155 sudo[1742]: pam_unix(sudo:session): session closed for user root
Jan 15 14:03:21.571823 sshd[1723]: pam_unix(sshd:session): session closed for user core
Jan 15 14:03:21.579207 systemd[1]: sshd@6-10.230.66.178:22-147.75.109.163:60106.service: Deactivated successfully.
Jan 15 14:03:21.583510 systemd[1]: session-9.scope: Deactivated successfully.
Jan 15 14:03:21.585825 systemd-logind[1489]: Session 9 logged out. Waiting for processes to exit.
Jan 15 14:03:21.588342 systemd-logind[1489]: Removed session 9.
Jan 15 14:03:21.730262 systemd[1]: Started sshd@7-10.230.66.178:22-147.75.109.163:60122.service - OpenSSH per-connection server daemon (147.75.109.163:60122).
Jan 15 14:03:22.633023 sshd[1747]: Accepted publickey for core from 147.75.109.163 port 60122 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:03:22.635423 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:03:22.643231 systemd-logind[1489]: New session 10 of user core.
Jan 15 14:03:22.650035 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 15 14:03:23.112780 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 15 14:03:23.113269 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 15 14:03:23.120178 sudo[1751]: pam_unix(sudo:session): session closed for user root
Jan 15 14:03:23.128748 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 15 14:03:23.129226 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 15 14:03:23.151361 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 15 14:03:23.155564 auditctl[1754]: No rules
Jan 15 14:03:23.156260 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 15 14:03:23.156628 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 15 14:03:23.172428 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 15 14:03:23.213480 augenrules[1772]: No rules
Jan 15 14:03:23.214382 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 15 14:03:23.216395 sudo[1750]: pam_unix(sudo:session): session closed for user root
Jan 15 14:03:23.360832 sshd[1747]: pam_unix(sshd:session): session closed for user core
Jan 15 14:03:23.366403 systemd[1]: sshd@7-10.230.66.178:22-147.75.109.163:60122.service: Deactivated successfully.
Jan 15 14:03:23.368698 systemd[1]: session-10.scope: Deactivated successfully.
Jan 15 14:03:23.369699 systemd-logind[1489]: Session 10 logged out. Waiting for processes to exit.
Jan 15 14:03:23.371256 systemd-logind[1489]: Removed session 10.
Jan 15 14:03:23.512951 systemd[1]: Started sshd@8-10.230.66.178:22-147.75.109.163:60126.service - OpenSSH per-connection server daemon (147.75.109.163:60126).
Jan 15 14:03:24.409449 sshd[1780]: Accepted publickey for core from 147.75.109.163 port 60126 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:03:24.411702 sshd[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:03:24.418512 systemd-logind[1489]: New session 11 of user core.
Jan 15 14:03:24.425078 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 15 14:03:24.887151 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 15 14:03:24.887670 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 15 14:03:25.661665 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 15 14:03:25.673629 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 15 14:03:26.436008 dockerd[1800]: time="2025-01-15T14:03:26.435823265Z" level=info msg="Starting up"
Jan 15 14:03:26.595946 systemd[1]: var-lib-docker-metacopy\x2dcheck2373466959-merged.mount: Deactivated successfully.
Jan 15 14:03:26.619795 dockerd[1800]: time="2025-01-15T14:03:26.619655857Z" level=info msg="Loading containers: start."
Jan 15 14:03:26.789798 kernel: Initializing XFRM netlink socket
Jan 15 14:03:26.896813 systemd-networkd[1423]: docker0: Link UP
Jan 15 14:03:26.919776 dockerd[1800]: time="2025-01-15T14:03:26.919617304Z" level=info msg="Loading containers: done."
Jan 15 14:03:26.945338 dockerd[1800]: time="2025-01-15T14:03:26.945276983Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 15 14:03:26.945649 dockerd[1800]: time="2025-01-15T14:03:26.945477235Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 15 14:03:26.945649 dockerd[1800]: time="2025-01-15T14:03:26.945622878Z" level=info msg="Daemon has completed initialization"
Jan 15 14:03:26.982732 dockerd[1800]: time="2025-01-15T14:03:26.981816808Z" level=info msg="API listen on /run/docker.sock"
Jan 15 14:03:26.982552 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 15 14:03:28.473085 containerd[1507]: time="2025-01-15T14:03:28.472957434Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Jan 15 14:03:28.650925 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 15 14:03:29.286909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2414013809.mount: Deactivated successfully.
Jan 15 14:03:31.007296 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 15 14:03:31.032128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 14:03:31.362844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 14:03:31.375485 (kubelet)[2012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 14:03:31.561801 kubelet[2012]: E0115 14:03:31.560283 2012 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 14:03:31.567576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 14:03:31.567947 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 14:03:32.343325 containerd[1507]: time="2025-01-15T14:03:32.342970495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:32.345691 containerd[1507]: time="2025-01-15T14:03:32.345269423Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262"
Jan 15 14:03:32.346445 containerd[1507]: time="2025-01-15T14:03:32.346339974Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:32.350922 containerd[1507]: time="2025-01-15T14:03:32.350805363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:32.353223 containerd[1507]: time="2025-01-15T14:03:32.352663872Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.879367825s"
Jan 15 14:03:32.353223 containerd[1507]: time="2025-01-15T14:03:32.352742646Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Jan 15 14:03:32.401500 containerd[1507]: time="2025-01-15T14:03:32.401439524Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Jan 15 14:03:35.811618 containerd[1507]: time="2025-01-15T14:03:35.810332525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:35.811618 containerd[1507]: time="2025-01-15T14:03:35.811120924Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740"
Jan 15 14:03:35.815633 containerd[1507]: time="2025-01-15T14:03:35.815553458Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:35.821435 containerd[1507]: time="2025-01-15T14:03:35.821345206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:35.824531 containerd[1507]: time="2025-01-15T14:03:35.823355607Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 3.421840866s"
Jan 15 14:03:35.824531 containerd[1507]: time="2025-01-15T14:03:35.823531019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Jan 15 14:03:35.862416 containerd[1507]: time="2025-01-15T14:03:35.862316430Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Jan 15 14:03:37.650896 containerd[1507]: time="2025-01-15T14:03:37.650238637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:37.652632 containerd[1507]: time="2025-01-15T14:03:37.652563169Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830"
Jan 15 14:03:37.654187 containerd[1507]: time="2025-01-15T14:03:37.654112135Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:37.660163 containerd[1507]: time="2025-01-15T14:03:37.660076023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:37.662261 containerd[1507]: time="2025-01-15T14:03:37.661976247Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.799524186s"
Jan 15 14:03:37.662261 containerd[1507]: time="2025-01-15T14:03:37.662041836Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Jan 15 14:03:37.717220 containerd[1507]: time="2025-01-15T14:03:37.717135578Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Jan 15 14:03:39.478074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2916779053.mount: Deactivated successfully.
Jan 15 14:03:40.255315 containerd[1507]: time="2025-01-15T14:03:40.255214116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:40.256436 containerd[1507]: time="2025-01-15T14:03:40.256375862Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966"
Jan 15 14:03:40.257573 containerd[1507]: time="2025-01-15T14:03:40.257511800Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:40.263413 containerd[1507]: time="2025-01-15T14:03:40.261299494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:40.263413 containerd[1507]: time="2025-01-15T14:03:40.262609067Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.545401355s"
Jan 15 14:03:40.263413 containerd[1507]: time="2025-01-15T14:03:40.262646935Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Jan 15 14:03:40.310507 containerd[1507]: time="2025-01-15T14:03:40.310398345Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 15 14:03:40.875036 update_engine[1490]: I20250115 14:03:40.874696 1490 update_attempter.cc:509] Updating boot flags...
Jan 15 14:03:40.974123 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2062)
Jan 15 14:03:41.030486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430326743.mount: Deactivated successfully.
Jan 15 14:03:41.102243 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2064)
Jan 15 14:03:41.757621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 15 14:03:41.768111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 14:03:42.408415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 14:03:42.418268 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 14:03:42.561261 kubelet[2121]: E0115 14:03:42.561166 2121 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 14:03:42.566434 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 14:03:42.566735 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 14:03:42.620447 containerd[1507]: time="2025-01-15T14:03:42.619544577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:42.621513 containerd[1507]: time="2025-01-15T14:03:42.621352233Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jan 15 14:03:42.622344 containerd[1507]: time="2025-01-15T14:03:42.622292864Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:42.628224 containerd[1507]: time="2025-01-15T14:03:42.628157734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:42.630783 containerd[1507]: time="2025-01-15T14:03:42.630093912Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.319617305s"
Jan 15 14:03:42.630783 containerd[1507]: time="2025-01-15T14:03:42.630195129Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 15 14:03:42.661421 containerd[1507]: time="2025-01-15T14:03:42.661246333Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 15 14:03:43.423580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1389971425.mount: Deactivated successfully.
Jan 15 14:03:43.430286 containerd[1507]: time="2025-01-15T14:03:43.430127020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:43.431936 containerd[1507]: time="2025-01-15T14:03:43.431869501Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jan 15 14:03:43.432576 containerd[1507]: time="2025-01-15T14:03:43.432273478Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:43.435828 containerd[1507]: time="2025-01-15T14:03:43.435750715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:43.437927 containerd[1507]: time="2025-01-15T14:03:43.437097625Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 775.421307ms"
Jan 15 14:03:43.437927 containerd[1507]: time="2025-01-15T14:03:43.437179738Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 15 14:03:43.469571 containerd[1507]: time="2025-01-15T14:03:43.469519090Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 15 14:03:44.096863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059915407.mount: Deactivated successfully.
Jan 15 14:03:48.827290 containerd[1507]: time="2025-01-15T14:03:48.827015130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:48.830012 containerd[1507]: time="2025-01-15T14:03:48.829946194Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Jan 15 14:03:48.830949 containerd[1507]: time="2025-01-15T14:03:48.830866585Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:48.838154 containerd[1507]: time="2025-01-15T14:03:48.838036960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:03:48.840592 containerd[1507]: time="2025-01-15T14:03:48.840144829Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.37056316s"
Jan 15 14:03:48.840592 containerd[1507]: time="2025-01-15T14:03:48.840282902Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jan 15 14:03:52.757940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 15 14:03:52.772105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 14:03:53.145067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 14:03:53.159712 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 15 14:03:53.515163 kubelet[2251]: E0115 14:03:53.514981 2251 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 15 14:03:53.521312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 14:03:53.521660 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 15 14:03:54.107554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 14:03:54.119028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 14:03:54.140447 systemd[1]: Reloading requested from client PID 2266 ('systemctl') (unit session-11.scope)...
Jan 15 14:03:54.140492 systemd[1]: Reloading...
Jan 15 14:03:54.310898 zram_generator::config[2301]: No configuration found.
Jan 15 14:03:54.505416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 15 14:03:54.614390 systemd[1]: Reloading finished in 472 ms.
Jan 15 14:03:54.695486 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 14:03:54.701373 systemd[1]: kubelet.service: Deactivated successfully.
Jan 15 14:03:54.701780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 14:03:54.708176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 14:03:54.851751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 14:03:54.868366 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 15 14:03:54.992968 kubelet[2374]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 15 14:03:54.993861 kubelet[2374]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 15 14:03:54.993861 kubelet[2374]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 15 14:03:54.995173 kubelet[2374]: I0115 14:03:54.995115 2374 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 15 14:03:55.750619 kubelet[2374]: I0115 14:03:55.750544 2374 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 15 14:03:55.750619 kubelet[2374]: I0115 14:03:55.750592 2374 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 15 14:03:55.751146 kubelet[2374]: I0115 14:03:55.751112 2374 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 15 14:03:55.777403 kubelet[2374]: I0115 14:03:55.776823 2374 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 15 14:03:55.778947 kubelet[2374]: E0115 14:03:55.778914 2374 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.66.178:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.66.178:6443: connect: connection refused
Jan 15 14:03:55.802242 kubelet[2374]: I0115 14:03:55.802201 2374 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 15 14:03:55.802869 kubelet[2374]: I0115 14:03:55.802837 2374 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 15 14:03:55.804047 kubelet[2374]: I0115 14:03:55.803995 2374 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 15 14:03:55.804665 kubelet[2374]: I0115 14:03:55.804619 2374 topology_manager.go:138] "Creating topology manager with none policy"
Jan 15 14:03:55.804665 kubelet[2374]: I0115 14:03:55.804654 2374 container_manager_linux.go:301] "Creating device plugin manager"
Jan 15 14:03:55.804976 kubelet[2374]: I0115 14:03:55.804943 2374 state_mem.go:36] "Initialized new in-memory state store"
Jan 15 14:03:55.806941 kubelet[2374]: W0115 14:03:55.806793 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.66.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-6ftsm.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused
Jan 15 14:03:55.806941 kubelet[2374]: E0115 14:03:55.806908 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.66.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-6ftsm.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused
Jan 15 14:03:55.807631 kubelet[2374]: I0115 14:03:55.807591 2374 kubelet.go:396] "Attempting to sync node with API server"
Jan 15 14:03:55.807705 kubelet[2374]: I0115 14:03:55.807646 2374 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 15 14:03:55.809867 kubelet[2374]: I0115 14:03:55.809504 2374 kubelet.go:312] "Adding apiserver pod source"
Jan 15 14:03:55.809867 kubelet[2374]: I0115 14:03:55.809562 2374 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 15 14:03:55.811391 kubelet[2374]: W0115 14:03:55.811314 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.66.178:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused
Jan 15 14:03:55.811391 kubelet[2374]: E0115 14:03:55.811364 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.66.178:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused
Jan 15 14:03:55.811799 kubelet[2374]: I0115 14:03:55.811731 2374 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 15 14:03:55.817327 kubelet[2374]: I0115 14:03:55.817146 2374 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 15 14:03:55.819790 kubelet[2374]: W0115 14:03:55.818584 2374 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 15 14:03:55.820954 kubelet[2374]: I0115 14:03:55.820927 2374 server.go:1256] "Started kubelet"
Jan 15 14:03:55.821726 kubelet[2374]: I0115 14:03:55.821697 2374 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 15 14:03:55.823400 kubelet[2374]: I0115 14:03:55.823375 2374 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 15 14:03:55.824168 kubelet[2374]: I0115 14:03:55.824138 2374 server.go:461] "Adding debug handlers to kubelet server"
Jan 15 14:03:55.834005 kubelet[2374]: I0115 14:03:55.833954 2374 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 15 14:03:55.834454 kubelet[2374]: I0115 14:03:55.834433 2374 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 15 14:03:55.837127 kubelet[2374]: I0115 14:03:55.837098 2374 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 15 14:03:55.837908 kubelet[2374]: I0115 14:03:55.837875 2374 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 15 14:03:55.838031 kubelet[2374]: I0115 14:03:55.838002 2374 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 15 14:03:55.839035 kubelet[2374]: W0115 14:03:55.838973 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.66.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused
Jan 15 14:03:55.839156 kubelet[2374]: E0115 14:03:55.839047 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.66.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused
Jan 15 14:03:55.839219 kubelet[2374]: E0115 14:03:55.839183 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-6ftsm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.178:6443: connect: connection refused" interval="200ms"
Jan 15 14:03:55.842749 kubelet[2374]: E0115 14:03:55.842722 2374 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.66.178:6443/api/v1/namespaces/default/events\": dial tcp 10.230.66.178:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-6ftsm.gb1.brightbox.com.181ae2ac44c6ef5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-6ftsm.gb1.brightbox.com,UID:srv-6ftsm.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-6ftsm.gb1.brightbox.com,},FirstTimestamp:2025-01-15 14:03:55.820887903 +0000 UTC m=+0.895034270,LastTimestamp:2025-01-15 14:03:55.820887903 +0000 UTC m=+0.895034270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-6ftsm.gb1.brightbox.com,}"
Jan 15 14:03:55.854824 kubelet[2374]: I0115 14:03:55.854795 2374 factory.go:221] Registration of the containerd container factory successfully
Jan 15 14:03:55.854824 kubelet[2374]: I0115 14:03:55.854820 2374 factory.go:221] Registration of the systemd container factory successfully
Jan 15 14:03:55.854998 kubelet[2374]: I0115 14:03:55.854928 2374 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 15 14:03:55.868886 kubelet[2374]: E0115 14:03:55.865531 2374 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 15 14:03:55.877487 kubelet[2374]: I0115 14:03:55.877436 2374 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 15 14:03:55.881139 kubelet[2374]: I0115 14:03:55.881098 2374 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Jan 15 14:03:55.881441 kubelet[2374]: I0115 14:03:55.881380 2374 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 15 14:03:55.882255 kubelet[2374]: I0115 14:03:55.882234 2374 kubelet.go:2329] "Starting kubelet main sync loop" Jan 15 14:03:55.882523 kubelet[2374]: E0115 14:03:55.882493 2374 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 14:03:55.889218 kubelet[2374]: W0115 14:03:55.889168 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.66.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused Jan 15 14:03:55.889407 kubelet[2374]: E0115 14:03:55.889386 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.66.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused Jan 15 14:03:55.900877 kubelet[2374]: I0115 14:03:55.900843 2374 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 15 14:03:55.900877 kubelet[2374]: I0115 14:03:55.900875 2374 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 15 14:03:55.901118 kubelet[2374]: I0115 14:03:55.900915 2374 state_mem.go:36] "Initialized new in-memory state store" Jan 15 14:03:55.902893 kubelet[2374]: I0115 14:03:55.902846 2374 policy_none.go:49] "None policy: Start" Jan 15 14:03:55.904228 kubelet[2374]: I0115 14:03:55.904193 2374 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 15 14:03:55.904322 kubelet[2374]: I0115 14:03:55.904284 2374 state_mem.go:35] "Initializing new in-memory state store" Jan 15 14:03:55.931804 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 15 14:03:55.942141 kubelet[2374]: I0115 14:03:55.941508 2374 kubelet_node_status.go:73] "Attempting to register node" node="srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:55.942141 kubelet[2374]: E0115 14:03:55.942108 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.66.178:6443/api/v1/nodes\": dial tcp 10.230.66.178:6443: connect: connection refused" node="srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:55.947337 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 14:03:55.952933 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 15 14:03:55.966342 kubelet[2374]: I0115 14:03:55.965993 2374 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 14:03:55.966652 kubelet[2374]: I0115 14:03:55.966590 2374 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 14:03:55.973305 kubelet[2374]: E0115 14:03:55.973193 2374 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-6ftsm.gb1.brightbox.com\" not found" Jan 15 14:03:55.983201 kubelet[2374]: I0115 14:03:55.983158 2374 topology_manager.go:215] "Topology Admit Handler" podUID="e0a71ab4dc37df084239c96671e950f9" podNamespace="kube-system" podName="kube-controller-manager-srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:55.988734 kubelet[2374]: I0115 14:03:55.988429 2374 topology_manager.go:215] "Topology Admit Handler" podUID="9fecffe60c624bd9037a6813e8578d77" podNamespace="kube-system" podName="kube-scheduler-srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:55.991552 kubelet[2374]: I0115 14:03:55.991519 2374 topology_manager.go:215] "Topology Admit Handler" podUID="d00cb5a6f5756d215a10c3b277ffa4d5" podNamespace="kube-system" podName="kube-apiserver-srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:56.002517 systemd[1]: Created slice 
kubepods-burstable-pode0a71ab4dc37df084239c96671e950f9.slice - libcontainer container kubepods-burstable-pode0a71ab4dc37df084239c96671e950f9.slice. Jan 15 14:03:56.024943 systemd[1]: Created slice kubepods-burstable-pod9fecffe60c624bd9037a6813e8578d77.slice - libcontainer container kubepods-burstable-pod9fecffe60c624bd9037a6813e8578d77.slice. Jan 15 14:03:56.032200 systemd[1]: Created slice kubepods-burstable-podd00cb5a6f5756d215a10c3b277ffa4d5.slice - libcontainer container kubepods-burstable-podd00cb5a6f5756d215a10c3b277ffa4d5.slice. Jan 15 14:03:56.040499 kubelet[2374]: E0115 14:03:56.040428 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-6ftsm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.178:6443: connect: connection refused" interval="400ms" Jan 15 14:03:56.041189 kubelet[2374]: I0115 14:03:56.041151 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0a71ab4dc37df084239c96671e950f9-ca-certs\") pod \"kube-controller-manager-srv-6ftsm.gb1.brightbox.com\" (UID: \"e0a71ab4dc37df084239c96671e950f9\") " pod="kube-system/kube-controller-manager-srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:56.041426 kubelet[2374]: I0115 14:03:56.041347 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e0a71ab4dc37df084239c96671e950f9-flexvolume-dir\") pod \"kube-controller-manager-srv-6ftsm.gb1.brightbox.com\" (UID: \"e0a71ab4dc37df084239c96671e950f9\") " pod="kube-system/kube-controller-manager-srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:56.044382 kubelet[2374]: I0115 14:03:56.044336 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e0a71ab4dc37df084239c96671e950f9-k8s-certs\") pod \"kube-controller-manager-srv-6ftsm.gb1.brightbox.com\" (UID: \"e0a71ab4dc37df084239c96671e950f9\") " pod="kube-system/kube-controller-manager-srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:56.044625 kubelet[2374]: I0115 14:03:56.044571 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e0a71ab4dc37df084239c96671e950f9-kubeconfig\") pod \"kube-controller-manager-srv-6ftsm.gb1.brightbox.com\" (UID: \"e0a71ab4dc37df084239c96671e950f9\") " pod="kube-system/kube-controller-manager-srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:56.045020 kubelet[2374]: I0115 14:03:56.044643 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d00cb5a6f5756d215a10c3b277ffa4d5-ca-certs\") pod \"kube-apiserver-srv-6ftsm.gb1.brightbox.com\" (UID: \"d00cb5a6f5756d215a10c3b277ffa4d5\") " pod="kube-system/kube-apiserver-srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:56.045020 kubelet[2374]: I0115 14:03:56.044694 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d00cb5a6f5756d215a10c3b277ffa4d5-usr-share-ca-certificates\") pod \"kube-apiserver-srv-6ftsm.gb1.brightbox.com\" (UID: \"d00cb5a6f5756d215a10c3b277ffa4d5\") " pod="kube-system/kube-apiserver-srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:56.045020 kubelet[2374]: I0115 14:03:56.044790 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0a71ab4dc37df084239c96671e950f9-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-6ftsm.gb1.brightbox.com\" (UID: \"e0a71ab4dc37df084239c96671e950f9\") " 
pod="kube-system/kube-controller-manager-srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:56.045020 kubelet[2374]: I0115 14:03:56.044845 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9fecffe60c624bd9037a6813e8578d77-kubeconfig\") pod \"kube-scheduler-srv-6ftsm.gb1.brightbox.com\" (UID: \"9fecffe60c624bd9037a6813e8578d77\") " pod="kube-system/kube-scheduler-srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:56.045020 kubelet[2374]: I0115 14:03:56.044889 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d00cb5a6f5756d215a10c3b277ffa4d5-k8s-certs\") pod \"kube-apiserver-srv-6ftsm.gb1.brightbox.com\" (UID: \"d00cb5a6f5756d215a10c3b277ffa4d5\") " pod="kube-system/kube-apiserver-srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:56.146840 kubelet[2374]: I0115 14:03:56.146547 2374 kubelet_node_status.go:73] "Attempting to register node" node="srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:56.149861 kubelet[2374]: E0115 14:03:56.149823 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.66.178:6443/api/v1/nodes\": dial tcp 10.230.66.178:6443: connect: connection refused" node="srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:56.324032 containerd[1507]: time="2025-01-15T14:03:56.323788172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-6ftsm.gb1.brightbox.com,Uid:e0a71ab4dc37df084239c96671e950f9,Namespace:kube-system,Attempt:0,}" Jan 15 14:03:56.334285 containerd[1507]: time="2025-01-15T14:03:56.334184899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-6ftsm.gb1.brightbox.com,Uid:9fecffe60c624bd9037a6813e8578d77,Namespace:kube-system,Attempt:0,}" Jan 15 14:03:56.337100 containerd[1507]: time="2025-01-15T14:03:56.337058843Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-srv-6ftsm.gb1.brightbox.com,Uid:d00cb5a6f5756d215a10c3b277ffa4d5,Namespace:kube-system,Attempt:0,}" Jan 15 14:03:56.441802 kubelet[2374]: E0115 14:03:56.441704 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-6ftsm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.178:6443: connect: connection refused" interval="800ms" Jan 15 14:03:56.554527 kubelet[2374]: I0115 14:03:56.554479 2374 kubelet_node_status.go:73] "Attempting to register node" node="srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:56.554997 kubelet[2374]: E0115 14:03:56.554971 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.66.178:6443/api/v1/nodes\": dial tcp 10.230.66.178:6443: connect: connection refused" node="srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:56.859395 kubelet[2374]: W0115 14:03:56.859277 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.66.178:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused Jan 15 14:03:56.859395 kubelet[2374]: E0115 14:03:56.859354 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.66.178:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused Jan 15 14:03:56.920838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount906999166.mount: Deactivated successfully. 
Jan 15 14:03:56.934440 containerd[1507]: time="2025-01-15T14:03:56.934376118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 14:03:56.936393 containerd[1507]: time="2025-01-15T14:03:56.936213818Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 14:03:56.937522 containerd[1507]: time="2025-01-15T14:03:56.937470875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 14:03:56.939323 containerd[1507]: time="2025-01-15T14:03:56.938838363Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 14:03:56.940398 containerd[1507]: time="2025-01-15T14:03:56.940270064Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 15 14:03:56.941456 containerd[1507]: time="2025-01-15T14:03:56.941239973Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 14:03:56.941456 containerd[1507]: time="2025-01-15T14:03:56.941335901Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 14:03:56.948577 containerd[1507]: time="2025-01-15T14:03:56.948495835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 14:03:56.950425 
containerd[1507]: time="2025-01-15T14:03:56.950376438Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 626.182183ms" Jan 15 14:03:56.954495 containerd[1507]: time="2025-01-15T14:03:56.954402806Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 617.26437ms" Jan 15 14:03:56.962968 containerd[1507]: time="2025-01-15T14:03:56.962723225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 628.442192ms" Jan 15 14:03:57.067148 kubelet[2374]: W0115 14:03:57.067006 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.66.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-6ftsm.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused Jan 15 14:03:57.067854 kubelet[2374]: E0115 14:03:57.067180 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.66.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-6ftsm.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused Jan 15 14:03:57.201584 containerd[1507]: 
time="2025-01-15T14:03:57.200364273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:03:57.201584 containerd[1507]: time="2025-01-15T14:03:57.200450329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:03:57.201584 containerd[1507]: time="2025-01-15T14:03:57.200468182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:03:57.201584 containerd[1507]: time="2025-01-15T14:03:57.201398070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:03:57.208447 containerd[1507]: time="2025-01-15T14:03:57.208112232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:03:57.208447 containerd[1507]: time="2025-01-15T14:03:57.208253561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:03:57.208447 containerd[1507]: time="2025-01-15T14:03:57.208314619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:03:57.208986 containerd[1507]: time="2025-01-15T14:03:57.208483812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:03:57.219086 containerd[1507]: time="2025-01-15T14:03:57.218957749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:03:57.219267 containerd[1507]: time="2025-01-15T14:03:57.219104807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:03:57.219267 containerd[1507]: time="2025-01-15T14:03:57.219196686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:03:57.219663 containerd[1507]: time="2025-01-15T14:03:57.219402857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:03:57.243127 kubelet[2374]: E0115 14:03:57.243065 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-6ftsm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.178:6443: connect: connection refused" interval="1.6s" Jan 15 14:03:57.261040 systemd[1]: Started cri-containerd-c49abb1e68de3ccb7fe1de92ad4d8b9814a7273b6c07cb7ce5222766953fba7f.scope - libcontainer container c49abb1e68de3ccb7fe1de92ad4d8b9814a7273b6c07cb7ce5222766953fba7f. Jan 15 14:03:57.275108 systemd[1]: Started cri-containerd-64ef90dd73a9e5c9de621b86544f0cc5bbf0b48053c48271528f7701de2586ca.scope - libcontainer container 64ef90dd73a9e5c9de621b86544f0cc5bbf0b48053c48271528f7701de2586ca. Jan 15 14:03:57.279297 systemd[1]: Started cri-containerd-bbf98ccbb166e79e978b49043dd330d920dd79ad93aae9cb88f780fd978548dc.scope - libcontainer container bbf98ccbb166e79e978b49043dd330d920dd79ad93aae9cb88f780fd978548dc. 
Jan 15 14:03:57.297292 kubelet[2374]: W0115 14:03:57.297136 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.66.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused Jan 15 14:03:57.297292 kubelet[2374]: E0115 14:03:57.297237 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.66.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused Jan 15 14:03:57.369797 kubelet[2374]: I0115 14:03:57.367811 2374 kubelet_node_status.go:73] "Attempting to register node" node="srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:57.369797 kubelet[2374]: E0115 14:03:57.368371 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.66.178:6443/api/v1/nodes\": dial tcp 10.230.66.178:6443: connect: connection refused" node="srv-6ftsm.gb1.brightbox.com" Jan 15 14:03:57.397922 containerd[1507]: time="2025-01-15T14:03:57.397858094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-6ftsm.gb1.brightbox.com,Uid:9fecffe60c624bd9037a6813e8578d77,Namespace:kube-system,Attempt:0,} returns sandbox id \"c49abb1e68de3ccb7fe1de92ad4d8b9814a7273b6c07cb7ce5222766953fba7f\"" Jan 15 14:03:57.410460 containerd[1507]: time="2025-01-15T14:03:57.410282546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-6ftsm.gb1.brightbox.com,Uid:d00cb5a6f5756d215a10c3b277ffa4d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbf98ccbb166e79e978b49043dd330d920dd79ad93aae9cb88f780fd978548dc\"" Jan 15 14:03:57.417341 containerd[1507]: time="2025-01-15T14:03:57.417300862Z" level=info msg="CreateContainer within sandbox \"c49abb1e68de3ccb7fe1de92ad4d8b9814a7273b6c07cb7ce5222766953fba7f\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 14:03:57.417544 containerd[1507]: time="2025-01-15T14:03:57.417501166Z" level=info msg="CreateContainer within sandbox \"bbf98ccbb166e79e978b49043dd330d920dd79ad93aae9cb88f780fd978548dc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 14:03:57.422506 containerd[1507]: time="2025-01-15T14:03:57.422446065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-6ftsm.gb1.brightbox.com,Uid:e0a71ab4dc37df084239c96671e950f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"64ef90dd73a9e5c9de621b86544f0cc5bbf0b48053c48271528f7701de2586ca\"" Jan 15 14:03:57.427839 containerd[1507]: time="2025-01-15T14:03:57.427802428Z" level=info msg="CreateContainer within sandbox \"64ef90dd73a9e5c9de621b86544f0cc5bbf0b48053c48271528f7701de2586ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 14:03:57.446702 containerd[1507]: time="2025-01-15T14:03:57.446633087Z" level=info msg="CreateContainer within sandbox \"c49abb1e68de3ccb7fe1de92ad4d8b9814a7273b6c07cb7ce5222766953fba7f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e2c00a724dd6535fcc80b1e7f68308869e7d5b285486926cdb89eed056b3009d\"" Jan 15 14:03:57.447585 containerd[1507]: time="2025-01-15T14:03:57.447553171Z" level=info msg="StartContainer for \"e2c00a724dd6535fcc80b1e7f68308869e7d5b285486926cdb89eed056b3009d\"" Jan 15 14:03:57.450652 containerd[1507]: time="2025-01-15T14:03:57.450616494Z" level=info msg="CreateContainer within sandbox \"64ef90dd73a9e5c9de621b86544f0cc5bbf0b48053c48271528f7701de2586ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"33e0cb72a5b45c95239065daf4d663952643c1b27be0cdeefa132eb67b661431\"" Jan 15 14:03:57.451480 containerd[1507]: time="2025-01-15T14:03:57.451383749Z" level=info msg="StartContainer for \"33e0cb72a5b45c95239065daf4d663952643c1b27be0cdeefa132eb67b661431\"" Jan 15 
14:03:57.456140 containerd[1507]: time="2025-01-15T14:03:57.455901184Z" level=info msg="CreateContainer within sandbox \"bbf98ccbb166e79e978b49043dd330d920dd79ad93aae9cb88f780fd978548dc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1f21a1f39d6f449522ea3d6d542576645098378377f31a4504dac8cb9ad5e610\"" Jan 15 14:03:57.457779 containerd[1507]: time="2025-01-15T14:03:57.457289974Z" level=info msg="StartContainer for \"1f21a1f39d6f449522ea3d6d542576645098378377f31a4504dac8cb9ad5e610\"" Jan 15 14:03:57.486431 kubelet[2374]: W0115 14:03:57.486266 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.66.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused Jan 15 14:03:57.486431 kubelet[2374]: E0115 14:03:57.486386 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.66.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.66.178:6443: connect: connection refused Jan 15 14:03:57.505175 systemd[1]: Started cri-containerd-33e0cb72a5b45c95239065daf4d663952643c1b27be0cdeefa132eb67b661431.scope - libcontainer container 33e0cb72a5b45c95239065daf4d663952643c1b27be0cdeefa132eb67b661431. Jan 15 14:03:57.518961 systemd[1]: Started cri-containerd-1f21a1f39d6f449522ea3d6d542576645098378377f31a4504dac8cb9ad5e610.scope - libcontainer container 1f21a1f39d6f449522ea3d6d542576645098378377f31a4504dac8cb9ad5e610. Jan 15 14:03:57.520753 systemd[1]: Started cri-containerd-e2c00a724dd6535fcc80b1e7f68308869e7d5b285486926cdb89eed056b3009d.scope - libcontainer container e2c00a724dd6535fcc80b1e7f68308869e7d5b285486926cdb89eed056b3009d. 
Jan 15 14:03:57.616169 kubelet[2374]: E0115 14:03:57.613638 2374 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.66.178:6443/api/v1/namespaces/default/events\": dial tcp 10.230.66.178:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-6ftsm.gb1.brightbox.com.181ae2ac44c6ef5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-6ftsm.gb1.brightbox.com,UID:srv-6ftsm.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-6ftsm.gb1.brightbox.com,},FirstTimestamp:2025-01-15 14:03:55.820887903 +0000 UTC m=+0.895034270,LastTimestamp:2025-01-15 14:03:55.820887903 +0000 UTC m=+0.895034270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-6ftsm.gb1.brightbox.com,}" Jan 15 14:03:57.644700 containerd[1507]: time="2025-01-15T14:03:57.644631441Z" level=info msg="StartContainer for \"33e0cb72a5b45c95239065daf4d663952643c1b27be0cdeefa132eb67b661431\" returns successfully" Jan 15 14:03:57.645312 containerd[1507]: time="2025-01-15T14:03:57.645178584Z" level=info msg="StartContainer for \"1f21a1f39d6f449522ea3d6d542576645098378377f31a4504dac8cb9ad5e610\" returns successfully" Jan 15 14:03:57.664229 containerd[1507]: time="2025-01-15T14:03:57.664179058Z" level=info msg="StartContainer for \"e2c00a724dd6535fcc80b1e7f68308869e7d5b285486926cdb89eed056b3009d\" returns successfully" Jan 15 14:03:57.941789 kubelet[2374]: E0115 14:03:57.941524 2374 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.66.178:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.66.178:6443: connect: connection refused Jan 15 
14:03:58.972615 kubelet[2374]: I0115 14:03:58.972002 2374 kubelet_node_status.go:73] "Attempting to register node" node="srv-6ftsm.gb1.brightbox.com" Jan 15 14:04:00.517578 kubelet[2374]: E0115 14:04:00.516448 2374 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-6ftsm.gb1.brightbox.com\" not found" node="srv-6ftsm.gb1.brightbox.com" Jan 15 14:04:00.517578 kubelet[2374]: I0115 14:04:00.516667 2374 kubelet_node_status.go:76] "Successfully registered node" node="srv-6ftsm.gb1.brightbox.com" Jan 15 14:04:00.815541 kubelet[2374]: I0115 14:04:00.814807 2374 apiserver.go:52] "Watching apiserver" Jan 15 14:04:00.838327 kubelet[2374]: I0115 14:04:00.838269 2374 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 15 14:04:02.308321 kubelet[2374]: W0115 14:04:02.308209 2374 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 14:04:03.571078 systemd[1]: Reloading requested from client PID 2645 ('systemctl') (unit session-11.scope)... Jan 15 14:04:03.571208 systemd[1]: Reloading... Jan 15 14:04:03.717823 zram_generator::config[2690]: No configuration found. Jan 15 14:04:03.903851 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 14:04:04.032706 systemd[1]: Reloading finished in 460 ms. Jan 15 14:04:04.105469 kubelet[2374]: I0115 14:04:04.105308 2374 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 14:04:04.105618 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 14:04:04.119564 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 15 14:04:04.120114 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 14:04:04.120242 systemd[1]: kubelet.service: Consumed 1.503s CPU time, 110.7M memory peak, 0B memory swap peak. Jan 15 14:04:04.128235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 14:04:04.404516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 14:04:04.416391 (kubelet)[2748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 14:04:04.530975 kubelet[2748]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 14:04:04.530975 kubelet[2748]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 15 14:04:04.530975 kubelet[2748]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 15 14:04:04.530975 kubelet[2748]: I0115 14:04:04.530904 2748 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 15 14:04:04.542608 kubelet[2748]: I0115 14:04:04.542299 2748 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 15 14:04:04.542608 kubelet[2748]: I0115 14:04:04.542397 2748 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 15 14:04:04.543339 kubelet[2748]: I0115 14:04:04.542684 2748 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 15 14:04:04.545630 kubelet[2748]: I0115 14:04:04.544971 2748 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 15 14:04:04.557106 kubelet[2748]: I0115 14:04:04.556856 2748 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 15 14:04:04.559127 sudo[2760]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 15 14:04:04.560605 sudo[2760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 15 14:04:04.576512 kubelet[2748]: I0115 14:04:04.576465 2748 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 15 14:04:04.577145 kubelet[2748]: I0115 14:04:04.576895 2748 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 15 14:04:04.577145 kubelet[2748]: I0115 14:04:04.577115 2748 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 15 14:04:04.578115 kubelet[2748]: I0115 14:04:04.577170 2748 topology_manager.go:138] "Creating topology manager with none policy"
Jan 15 14:04:04.578115 kubelet[2748]: I0115 14:04:04.577188 2748 container_manager_linux.go:301] "Creating device plugin manager"
Jan 15 14:04:04.578115 kubelet[2748]: I0115 14:04:04.577269 2748 state_mem.go:36] "Initialized new in-memory state store"
Jan 15 14:04:04.578115 kubelet[2748]: I0115 14:04:04.577533 2748 kubelet.go:396] "Attempting to sync node with API server"
Jan 15 14:04:04.578115 kubelet[2748]: I0115 14:04:04.577564 2748 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 15 14:04:04.578115 kubelet[2748]: I0115 14:04:04.577636 2748 kubelet.go:312] "Adding apiserver pod source"
Jan 15 14:04:04.578115 kubelet[2748]: I0115 14:04:04.577676 2748 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 15 14:04:04.583781 kubelet[2748]: I0115 14:04:04.583081 2748 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 15 14:04:04.584062 kubelet[2748]: I0115 14:04:04.584039 2748 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 15 14:04:04.591267 kubelet[2748]: I0115 14:04:04.591235 2748 server.go:1256] "Started kubelet"
Jan 15 14:04:04.604633 kubelet[2748]: I0115 14:04:04.604596 2748 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 15 14:04:04.614977 kubelet[2748]: I0115 14:04:04.614387 2748 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 15 14:04:04.618518 kubelet[2748]: I0115 14:04:04.618471 2748 server.go:461] "Adding debug handlers to kubelet server"
Jan 15 14:04:04.620585 kubelet[2748]: E0115 14:04:04.618807 2748 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 15 14:04:04.621104 kubelet[2748]: I0115 14:04:04.620897 2748 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 15 14:04:04.621319 kubelet[2748]: I0115 14:04:04.621298 2748 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 15 14:04:04.622628 kubelet[2748]: I0115 14:04:04.622589 2748 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 15 14:04:04.623085 kubelet[2748]: I0115 14:04:04.623065 2748 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 15 14:04:04.624267 kubelet[2748]: I0115 14:04:04.624011 2748 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 15 14:04:04.632250 kubelet[2748]: I0115 14:04:04.632223 2748 factory.go:221] Registration of the systemd container factory successfully
Jan 15 14:04:04.633837 kubelet[2748]: I0115 14:04:04.633710 2748 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 15 14:04:04.643366 kubelet[2748]: I0115 14:04:04.643084 2748 factory.go:221] Registration of the containerd container factory successfully
Jan 15 14:04:04.670153 kubelet[2748]: I0115 14:04:04.668359 2748 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 15 14:04:04.672932 kubelet[2748]: I0115 14:04:04.672894 2748 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 15 14:04:04.673024 kubelet[2748]: I0115 14:04:04.672981 2748 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 15 14:04:04.673024 kubelet[2748]: I0115 14:04:04.673022 2748 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 15 14:04:04.673148 kubelet[2748]: E0115 14:04:04.673103 2748 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 15 14:04:04.750798 kubelet[2748]: I0115 14:04:04.750157 2748 kubelet_node_status.go:73] "Attempting to register node" node="srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:04.773228 kubelet[2748]: E0115 14:04:04.773187 2748 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 15 14:04:04.774460 kubelet[2748]: I0115 14:04:04.774146 2748 kubelet_node_status.go:112] "Node was previously registered" node="srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:04.774460 kubelet[2748]: I0115 14:04:04.774259 2748 kubelet_node_status.go:76] "Successfully registered node" node="srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:04.843984 kubelet[2748]: I0115 14:04:04.843200 2748 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 15 14:04:04.843984 kubelet[2748]: I0115 14:04:04.843273 2748 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 15 14:04:04.843984 kubelet[2748]: I0115 14:04:04.843348 2748 state_mem.go:36] "Initialized new in-memory state store"
Jan 15 14:04:04.843984 kubelet[2748]: I0115 14:04:04.843666 2748 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 15 14:04:04.843984 kubelet[2748]: I0115 14:04:04.843710 2748 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 15 14:04:04.843984 kubelet[2748]: I0115 14:04:04.843739 2748 policy_none.go:49] "None policy: Start"
Jan 15 14:04:04.849787 kubelet[2748]: I0115 14:04:04.847475 2748 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 15 14:04:04.849787 kubelet[2748]: I0115 14:04:04.847567 2748 state_mem.go:35] "Initializing new in-memory state store"
Jan 15 14:04:04.850079 kubelet[2748]: I0115 14:04:04.850056 2748 state_mem.go:75] "Updated machine memory state"
Jan 15 14:04:04.870826 kubelet[2748]: I0115 14:04:04.870734 2748 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 15 14:04:04.876309 kubelet[2748]: I0115 14:04:04.876266 2748 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 15 14:04:04.974830 kubelet[2748]: I0115 14:04:04.974398 2748 topology_manager.go:215] "Topology Admit Handler" podUID="d00cb5a6f5756d215a10c3b277ffa4d5" podNamespace="kube-system" podName="kube-apiserver-srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:04.974830 kubelet[2748]: I0115 14:04:04.974589 2748 topology_manager.go:215] "Topology Admit Handler" podUID="e0a71ab4dc37df084239c96671e950f9" podNamespace="kube-system" podName="kube-controller-manager-srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:04.974830 kubelet[2748]: I0115 14:04:04.974716 2748 topology_manager.go:215] "Topology Admit Handler" podUID="9fecffe60c624bd9037a6813e8578d77" podNamespace="kube-system" podName="kube-scheduler-srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:04.988640 kubelet[2748]: W0115 14:04:04.988598 2748 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 15 14:04:04.988824 kubelet[2748]: E0115 14:04:04.988733 2748 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-6ftsm.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:04.990008 kubelet[2748]: W0115 14:04:04.989855 2748 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 15 14:04:04.991177 kubelet[2748]: W0115 14:04:04.991140 2748 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 15 14:04:05.062321 kubelet[2748]: I0115 14:04:05.062172 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e0a71ab4dc37df084239c96671e950f9-kubeconfig\") pod \"kube-controller-manager-srv-6ftsm.gb1.brightbox.com\" (UID: \"e0a71ab4dc37df084239c96671e950f9\") " pod="kube-system/kube-controller-manager-srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:05.062321 kubelet[2748]: I0115 14:04:05.062273 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0a71ab4dc37df084239c96671e950f9-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-6ftsm.gb1.brightbox.com\" (UID: \"e0a71ab4dc37df084239c96671e950f9\") " pod="kube-system/kube-controller-manager-srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:05.062321 kubelet[2748]: I0115 14:04:05.062314 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0a71ab4dc37df084239c96671e950f9-ca-certs\") pod \"kube-controller-manager-srv-6ftsm.gb1.brightbox.com\" (UID: \"e0a71ab4dc37df084239c96671e950f9\") " pod="kube-system/kube-controller-manager-srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:05.062321 kubelet[2748]: I0115 14:04:05.062351 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e0a71ab4dc37df084239c96671e950f9-flexvolume-dir\") pod \"kube-controller-manager-srv-6ftsm.gb1.brightbox.com\" (UID: \"e0a71ab4dc37df084239c96671e950f9\") " pod="kube-system/kube-controller-manager-srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:05.062812 kubelet[2748]: I0115 14:04:05.062384 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0a71ab4dc37df084239c96671e950f9-k8s-certs\") pod \"kube-controller-manager-srv-6ftsm.gb1.brightbox.com\" (UID: \"e0a71ab4dc37df084239c96671e950f9\") " pod="kube-system/kube-controller-manager-srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:05.062812 kubelet[2748]: I0115 14:04:05.062415 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9fecffe60c624bd9037a6813e8578d77-kubeconfig\") pod \"kube-scheduler-srv-6ftsm.gb1.brightbox.com\" (UID: \"9fecffe60c624bd9037a6813e8578d77\") " pod="kube-system/kube-scheduler-srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:05.062812 kubelet[2748]: I0115 14:04:05.062448 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d00cb5a6f5756d215a10c3b277ffa4d5-ca-certs\") pod \"kube-apiserver-srv-6ftsm.gb1.brightbox.com\" (UID: \"d00cb5a6f5756d215a10c3b277ffa4d5\") " pod="kube-system/kube-apiserver-srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:05.062812 kubelet[2748]: I0115 14:04:05.062487 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d00cb5a6f5756d215a10c3b277ffa4d5-k8s-certs\") pod \"kube-apiserver-srv-6ftsm.gb1.brightbox.com\" (UID: \"d00cb5a6f5756d215a10c3b277ffa4d5\") " pod="kube-system/kube-apiserver-srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:05.062812 kubelet[2748]: I0115 14:04:05.062518 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d00cb5a6f5756d215a10c3b277ffa4d5-usr-share-ca-certificates\") pod \"kube-apiserver-srv-6ftsm.gb1.brightbox.com\" (UID: \"d00cb5a6f5756d215a10c3b277ffa4d5\") " pod="kube-system/kube-apiserver-srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:05.451552 sudo[2760]: pam_unix(sudo:session): session closed for user root
Jan 15 14:04:05.581451 kubelet[2748]: I0115 14:04:05.581044 2748 apiserver.go:52] "Watching apiserver"
Jan 15 14:04:05.624069 kubelet[2748]: I0115 14:04:05.623985 2748 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 15 14:04:05.762757 kubelet[2748]: W0115 14:04:05.762724 2748 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 15 14:04:05.762981 kubelet[2748]: E0115 14:04:05.762827 2748 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-6ftsm.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-6ftsm.gb1.brightbox.com"
Jan 15 14:04:05.919337 kubelet[2748]: I0115 14:04:05.919264 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-6ftsm.gb1.brightbox.com" podStartSLOduration=3.919103216 podStartE2EDuration="3.919103216s" podCreationTimestamp="2025-01-15 14:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 14:04:05.879822598 +0000 UTC m=+1.446834920" watchObservedRunningTime="2025-01-15 14:04:05.919103216 +0000 UTC m=+1.486115525"
Jan 15 14:04:05.940021 kubelet[2748]: I0115 14:04:05.939696 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-6ftsm.gb1.brightbox.com" podStartSLOduration=1.93958893 podStartE2EDuration="1.93958893s" podCreationTimestamp="2025-01-15 14:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 14:04:05.936037275 +0000 UTC m=+1.503049595" watchObservedRunningTime="2025-01-15 14:04:05.93958893 +0000 UTC m=+1.506601240"
Jan 15 14:04:05.940021 kubelet[2748]: I0115 14:04:05.939838 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-6ftsm.gb1.brightbox.com" podStartSLOduration=1.9398126100000002 podStartE2EDuration="1.93981261s" podCreationTimestamp="2025-01-15 14:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 14:04:05.921319913 +0000 UTC m=+1.488332234" watchObservedRunningTime="2025-01-15 14:04:05.93981261 +0000 UTC m=+1.506824939"
Jan 15 14:04:07.104881 sudo[1783]: pam_unix(sudo:session): session closed for user root
Jan 15 14:04:07.253141 sshd[1780]: pam_unix(sshd:session): session closed for user core
Jan 15 14:04:07.258521 systemd[1]: sshd@8-10.230.66.178:22-147.75.109.163:60126.service: Deactivated successfully.
Jan 15 14:04:07.261731 systemd[1]: session-11.scope: Deactivated successfully.
Jan 15 14:04:07.262052 systemd[1]: session-11.scope: Consumed 8.006s CPU time, 188.3M memory peak, 0B memory swap peak.
Jan 15 14:04:07.263721 systemd-logind[1489]: Session 11 logged out. Waiting for processes to exit.
Jan 15 14:04:07.266874 systemd-logind[1489]: Removed session 11.
Jan 15 14:04:18.883355 kubelet[2748]: I0115 14:04:18.883147 2748 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 15 14:04:18.886221 containerd[1507]: time="2025-01-15T14:04:18.885323776Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 15 14:04:18.887401 kubelet[2748]: I0115 14:04:18.886475 2748 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 15 14:04:19.805122 kubelet[2748]: I0115 14:04:19.802331 2748 topology_manager.go:215] "Topology Admit Handler" podUID="fb1a3a1c-1587-485b-a714-ce8ea2754f57" podNamespace="kube-system" podName="kube-proxy-7gbc7"
Jan 15 14:04:19.810747 kubelet[2748]: I0115 14:04:19.810671 2748 topology_manager.go:215] "Topology Admit Handler" podUID="68c76c10-3a34-43b0-9a91-d479185f9266" podNamespace="kube-system" podName="cilium-dfqtz"
Jan 15 14:04:19.838082 systemd[1]: Created slice kubepods-besteffort-podfb1a3a1c_1587_485b_a714_ce8ea2754f57.slice - libcontainer container kubepods-besteffort-podfb1a3a1c_1587_485b_a714_ce8ea2754f57.slice.
Jan 15 14:04:19.859856 systemd[1]: Created slice kubepods-burstable-pod68c76c10_3a34_43b0_9a91_d479185f9266.slice - libcontainer container kubepods-burstable-pod68c76c10_3a34_43b0_9a91_d479185f9266.slice.
Jan 15 14:04:19.862220 kubelet[2748]: I0115 14:04:19.862129 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68c76c10-3a34-43b0-9a91-d479185f9266-cilium-config-path\") pod \"cilium-dfqtz\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") " pod="kube-system/cilium-dfqtz"
Jan 15 14:04:19.862220 kubelet[2748]: I0115 14:04:19.862194 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-host-proc-sys-net\") pod \"cilium-dfqtz\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") " pod="kube-system/cilium-dfqtz"
Jan 15 14:04:19.862475 kubelet[2748]: I0115 14:04:19.862378 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-host-proc-sys-kernel\") pod \"cilium-dfqtz\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") " pod="kube-system/cilium-dfqtz"
Jan 15 14:04:19.862636 kubelet[2748]: I0115 14:04:19.862537 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68c76c10-3a34-43b0-9a91-d479185f9266-hubble-tls\") pod \"cilium-dfqtz\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") " pod="kube-system/cilium-dfqtz"
Jan 15 14:04:19.862800 kubelet[2748]: I0115 14:04:19.862645 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-hostproc\") pod \"cilium-dfqtz\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") " pod="kube-system/cilium-dfqtz"
Jan 15 14:04:19.864878 kubelet[2748]: I0115 14:04:19.863049 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-lib-modules\") pod \"cilium-dfqtz\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") " pod="kube-system/cilium-dfqtz"
Jan 15 14:04:19.864878 kubelet[2748]: I0115 14:04:19.864174 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn8s4\" (UniqueName: \"kubernetes.io/projected/68c76c10-3a34-43b0-9a91-d479185f9266-kube-api-access-sn8s4\") pod \"cilium-dfqtz\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") " pod="kube-system/cilium-dfqtz"
Jan 15 14:04:19.864878 kubelet[2748]: I0115 14:04:19.864254 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-etc-cni-netd\") pod \"cilium-dfqtz\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") " pod="kube-system/cilium-dfqtz"
Jan 15 14:04:19.864878 kubelet[2748]: I0115 14:04:19.864339 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-xtables-lock\") pod \"cilium-dfqtz\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") " pod="kube-system/cilium-dfqtz"
Jan 15 14:04:19.864878 kubelet[2748]: I0115 14:04:19.864400 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-bpf-maps\") pod \"cilium-dfqtz\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") " pod="kube-system/cilium-dfqtz"
Jan 15 14:04:19.864878 kubelet[2748]: I0115 14:04:19.864437 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-cilium-cgroup\") pod \"cilium-dfqtz\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") " pod="kube-system/cilium-dfqtz"
Jan 15 14:04:19.865444 kubelet[2748]: I0115 14:04:19.864492 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb1a3a1c-1587-485b-a714-ce8ea2754f57-xtables-lock\") pod \"kube-proxy-7gbc7\" (UID: \"fb1a3a1c-1587-485b-a714-ce8ea2754f57\") " pod="kube-system/kube-proxy-7gbc7"
Jan 15 14:04:19.865444 kubelet[2748]: I0115 14:04:19.864535 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqmps\" (UniqueName: \"kubernetes.io/projected/fb1a3a1c-1587-485b-a714-ce8ea2754f57-kube-api-access-dqmps\") pod \"kube-proxy-7gbc7\" (UID: \"fb1a3a1c-1587-485b-a714-ce8ea2754f57\") " pod="kube-system/kube-proxy-7gbc7"
Jan 15 14:04:19.865444 kubelet[2748]: I0115 14:04:19.864570 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-cilium-run\") pod \"cilium-dfqtz\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") " pod="kube-system/cilium-dfqtz"
Jan 15 14:04:19.865444 kubelet[2748]: I0115 14:04:19.864622 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-cni-path\") pod \"cilium-dfqtz\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") " pod="kube-system/cilium-dfqtz"
Jan 15 14:04:19.865444 kubelet[2748]: I0115 14:04:19.864662 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fb1a3a1c-1587-485b-a714-ce8ea2754f57-kube-proxy\") pod \"kube-proxy-7gbc7\" (UID: \"fb1a3a1c-1587-485b-a714-ce8ea2754f57\") " pod="kube-system/kube-proxy-7gbc7"
Jan 15 14:04:19.865444 kubelet[2748]: I0115 14:04:19.864720 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb1a3a1c-1587-485b-a714-ce8ea2754f57-lib-modules\") pod \"kube-proxy-7gbc7\" (UID: \"fb1a3a1c-1587-485b-a714-ce8ea2754f57\") " pod="kube-system/kube-proxy-7gbc7"
Jan 15 14:04:19.865876 kubelet[2748]: I0115 14:04:19.864755 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68c76c10-3a34-43b0-9a91-d479185f9266-clustermesh-secrets\") pod \"cilium-dfqtz\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") " pod="kube-system/cilium-dfqtz"
Jan 15 14:04:20.121028 kubelet[2748]: I0115 14:04:20.118612 2748 topology_manager.go:215] "Topology Admit Handler" podUID="9eb02231-935c-4c3f-b305-68f8681090ff" podNamespace="kube-system" podName="cilium-operator-5cc964979-4nxdd"
Jan 15 14:04:20.134447 systemd[1]: Created slice kubepods-besteffort-pod9eb02231_935c_4c3f_b305_68f8681090ff.slice - libcontainer container kubepods-besteffort-pod9eb02231_935c_4c3f_b305_68f8681090ff.slice.
Jan 15 14:04:20.178928 containerd[1507]: time="2025-01-15T14:04:20.178827287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7gbc7,Uid:fb1a3a1c-1587-485b-a714-ce8ea2754f57,Namespace:kube-system,Attempt:0,}"
Jan 15 14:04:20.181670 containerd[1507]: time="2025-01-15T14:04:20.180540368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfqtz,Uid:68c76c10-3a34-43b0-9a91-d479185f9266,Namespace:kube-system,Attempt:0,}"
Jan 15 14:04:20.247071 containerd[1507]: time="2025-01-15T14:04:20.246810255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 15 14:04:20.247071 containerd[1507]: time="2025-01-15T14:04:20.246982736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 15 14:04:20.247071 containerd[1507]: time="2025-01-15T14:04:20.247024665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 15 14:04:20.249651 containerd[1507]: time="2025-01-15T14:04:20.249467691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 15 14:04:20.252926 containerd[1507]: time="2025-01-15T14:04:20.251954660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 15 14:04:20.252926 containerd[1507]: time="2025-01-15T14:04:20.252032778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 15 14:04:20.252926 containerd[1507]: time="2025-01-15T14:04:20.252056235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 15 14:04:20.252926 containerd[1507]: time="2025-01-15T14:04:20.252710098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 15 14:04:20.272229 kubelet[2748]: I0115 14:04:20.272045 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzm55\" (UniqueName: \"kubernetes.io/projected/9eb02231-935c-4c3f-b305-68f8681090ff-kube-api-access-nzm55\") pod \"cilium-operator-5cc964979-4nxdd\" (UID: \"9eb02231-935c-4c3f-b305-68f8681090ff\") " pod="kube-system/cilium-operator-5cc964979-4nxdd"
Jan 15 14:04:20.272229 kubelet[2748]: I0115 14:04:20.272118 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9eb02231-935c-4c3f-b305-68f8681090ff-cilium-config-path\") pod \"cilium-operator-5cc964979-4nxdd\" (UID: \"9eb02231-935c-4c3f-b305-68f8681090ff\") " pod="kube-system/cilium-operator-5cc964979-4nxdd"
Jan 15 14:04:20.297202 systemd[1]: Started cri-containerd-526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86.scope - libcontainer container 526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86.
Jan 15 14:04:20.301972 systemd[1]: Started cri-containerd-5afcd982a21ca0dc7d14b3b5f61446c6462b661dcd9b1018fc654fe14bfb9c67.scope - libcontainer container 5afcd982a21ca0dc7d14b3b5f61446c6462b661dcd9b1018fc654fe14bfb9c67.
Jan 15 14:04:20.356737 containerd[1507]: time="2025-01-15T14:04:20.356565236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfqtz,Uid:68c76c10-3a34-43b0-9a91-d479185f9266,Namespace:kube-system,Attempt:0,} returns sandbox id \"526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86\""
Jan 15 14:04:20.367524 containerd[1507]: time="2025-01-15T14:04:20.367025546Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 15 14:04:20.367890 containerd[1507]: time="2025-01-15T14:04:20.367847464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7gbc7,Uid:fb1a3a1c-1587-485b-a714-ce8ea2754f57,Namespace:kube-system,Attempt:0,} returns sandbox id \"5afcd982a21ca0dc7d14b3b5f61446c6462b661dcd9b1018fc654fe14bfb9c67\""
Jan 15 14:04:20.372451 containerd[1507]: time="2025-01-15T14:04:20.371621286Z" level=info msg="CreateContainer within sandbox \"5afcd982a21ca0dc7d14b3b5f61446c6462b661dcd9b1018fc654fe14bfb9c67\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 15 14:04:20.407144 containerd[1507]: time="2025-01-15T14:04:20.407008362Z" level=info msg="CreateContainer within sandbox \"5afcd982a21ca0dc7d14b3b5f61446c6462b661dcd9b1018fc654fe14bfb9c67\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fd96549cae17f381497276b336c7a560cd01686a8898cb3f42e53b65ea95da3d\""
Jan 15 14:04:20.409781 containerd[1507]: time="2025-01-15T14:04:20.407885789Z" level=info msg="StartContainer for \"fd96549cae17f381497276b336c7a560cd01686a8898cb3f42e53b65ea95da3d\""
Jan 15 14:04:20.442000 containerd[1507]: time="2025-01-15T14:04:20.441947085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-4nxdd,Uid:9eb02231-935c-4c3f-b305-68f8681090ff,Namespace:kube-system,Attempt:0,}"
Jan 15 14:04:20.444069 systemd[1]: Started cri-containerd-fd96549cae17f381497276b336c7a560cd01686a8898cb3f42e53b65ea95da3d.scope - libcontainer container fd96549cae17f381497276b336c7a560cd01686a8898cb3f42e53b65ea95da3d.
Jan 15 14:04:20.503183 containerd[1507]: time="2025-01-15T14:04:20.503130281Z" level=info msg="StartContainer for \"fd96549cae17f381497276b336c7a560cd01686a8898cb3f42e53b65ea95da3d\" returns successfully"
Jan 15 14:04:20.512923 containerd[1507]: time="2025-01-15T14:04:20.512571955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 15 14:04:20.512923 containerd[1507]: time="2025-01-15T14:04:20.512666044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 15 14:04:20.512923 containerd[1507]: time="2025-01-15T14:04:20.512690757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 15 14:04:20.512923 containerd[1507]: time="2025-01-15T14:04:20.512819402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 15 14:04:20.545946 systemd[1]: Started cri-containerd-0042d260c8990afd772dde167be4095c2e39b9e3b51708f9c2a9e81a85a179be.scope - libcontainer container 0042d260c8990afd772dde167be4095c2e39b9e3b51708f9c2a9e81a85a179be.
Jan 15 14:04:20.650579 containerd[1507]: time="2025-01-15T14:04:20.650263404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-4nxdd,Uid:9eb02231-935c-4c3f-b305-68f8681090ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"0042d260c8990afd772dde167be4095c2e39b9e3b51708f9c2a9e81a85a179be\""
Jan 15 14:04:20.828491 kubelet[2748]: I0115 14:04:20.828417 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7gbc7" podStartSLOduration=1.828310109 podStartE2EDuration="1.828310109s" podCreationTimestamp="2025-01-15 14:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 14:04:20.827825085 +0000 UTC m=+16.394837418" watchObservedRunningTime="2025-01-15 14:04:20.828310109 +0000 UTC m=+16.395322418"
Jan 15 14:04:29.388739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount371303991.mount: Deactivated successfully.
Jan 15 14:04:34.106822 containerd[1507]: time="2025-01-15T14:04:34.106599643Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:04:34.110334 containerd[1507]: time="2025-01-15T14:04:34.110233334Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735323"
Jan 15 14:04:34.119859 containerd[1507]: time="2025-01-15T14:04:34.119733402Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:04:34.122717 containerd[1507]: time="2025-01-15T14:04:34.122670937Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.755521451s"
Jan 15 14:04:34.123027 containerd[1507]: time="2025-01-15T14:04:34.122748930Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 15 14:04:34.127157 containerd[1507]: time="2025-01-15T14:04:34.126698506Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 15 14:04:34.127989 containerd[1507]: time="2025-01-15T14:04:34.127557232Z" level=info msg="CreateContainer within sandbox \"526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 15 14:04:34.210935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2872299421.mount: Deactivated successfully.
Jan 15 14:04:34.216412 containerd[1507]: time="2025-01-15T14:04:34.216361467Z" level=info msg="CreateContainer within sandbox \"526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4\""
Jan 15 14:04:34.217561 containerd[1507]: time="2025-01-15T14:04:34.217495092Z" level=info msg="StartContainer for \"05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4\""
Jan 15 14:04:34.469176 systemd[1]: Started cri-containerd-05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4.scope - libcontainer container 05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4.
Jan 15 14:04:34.525734 containerd[1507]: time="2025-01-15T14:04:34.525615380Z" level=info msg="StartContainer for \"05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4\" returns successfully"
Jan 15 14:04:34.554744 systemd[1]: cri-containerd-05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4.scope: Deactivated successfully.
Jan 15 14:04:34.781489 containerd[1507]: time="2025-01-15T14:04:34.757544790Z" level=info msg="shim disconnected" id=05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4 namespace=k8s.io
Jan 15 14:04:34.782206 containerd[1507]: time="2025-01-15T14:04:34.781913124Z" level=warning msg="cleaning up after shim disconnected" id=05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4 namespace=k8s.io
Jan 15 14:04:34.782206 containerd[1507]: time="2025-01-15T14:04:34.781960664Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 14:04:34.868842 containerd[1507]: time="2025-01-15T14:04:34.868790871Z" level=info msg="CreateContainer within sandbox \"526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 15 14:04:34.890692 containerd[1507]: time="2025-01-15T14:04:34.889986776Z" level=info msg="CreateContainer within sandbox \"526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0\""
Jan 15 14:04:34.896164 containerd[1507]: time="2025-01-15T14:04:34.894533312Z" level=info msg="StartContainer for \"ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0\""
Jan 15 14:04:34.950998 systemd[1]: Started cri-containerd-ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0.scope - libcontainer container ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0.
Jan 15 14:04:34.992846 containerd[1507]: time="2025-01-15T14:04:34.992674466Z" level=info msg="StartContainer for \"ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0\" returns successfully"
Jan 15 14:04:35.031147 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 15 14:04:35.032450 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 15 14:04:35.032631 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 15 14:04:35.043381 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 14:04:35.043797 systemd[1]: cri-containerd-ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0.scope: Deactivated successfully.
Jan 15 14:04:35.121046 containerd[1507]: time="2025-01-15T14:04:35.120681971Z" level=info msg="shim disconnected" id=ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0 namespace=k8s.io
Jan 15 14:04:35.121046 containerd[1507]: time="2025-01-15T14:04:35.120896649Z" level=warning msg="cleaning up after shim disconnected" id=ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0 namespace=k8s.io
Jan 15 14:04:35.121046 containerd[1507]: time="2025-01-15T14:04:35.120918834Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 14:04:35.147081 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 14:04:35.203450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4-rootfs.mount: Deactivated successfully.
Jan 15 14:04:35.875185 containerd[1507]: time="2025-01-15T14:04:35.875102808Z" level=info msg="CreateContainer within sandbox \"526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 15 14:04:35.937249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558011195.mount: Deactivated successfully.
Jan 15 14:04:35.942564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1821654412.mount: Deactivated successfully.
Jan 15 14:04:35.947151 containerd[1507]: time="2025-01-15T14:04:35.947065955Z" level=info msg="CreateContainer within sandbox \"526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c\""
Jan 15 14:04:35.948001 containerd[1507]: time="2025-01-15T14:04:35.947955820Z" level=info msg="StartContainer for \"332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c\""
Jan 15 14:04:35.992986 systemd[1]: Started cri-containerd-332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c.scope - libcontainer container 332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c.
Jan 15 14:04:36.035745 containerd[1507]: time="2025-01-15T14:04:36.035693079Z" level=info msg="StartContainer for \"332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c\" returns successfully"
Jan 15 14:04:36.043388 systemd[1]: cri-containerd-332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c.scope: Deactivated successfully.
Jan 15 14:04:36.075357 containerd[1507]: time="2025-01-15T14:04:36.075266721Z" level=info msg="shim disconnected" id=332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c namespace=k8s.io
Jan 15 14:04:36.075357 containerd[1507]: time="2025-01-15T14:04:36.075354328Z" level=warning msg="cleaning up after shim disconnected" id=332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c namespace=k8s.io
Jan 15 14:04:36.075810 containerd[1507]: time="2025-01-15T14:04:36.075370108Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 14:04:36.203203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c-rootfs.mount: Deactivated successfully.
Jan 15 14:04:36.882025 containerd[1507]: time="2025-01-15T14:04:36.881974299Z" level=info msg="CreateContainer within sandbox \"526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 15 14:04:36.903252 containerd[1507]: time="2025-01-15T14:04:36.901006465Z" level=info msg="CreateContainer within sandbox \"526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6\""
Jan 15 14:04:36.903624 containerd[1507]: time="2025-01-15T14:04:36.903425787Z" level=info msg="StartContainer for \"09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6\""
Jan 15 14:04:36.957093 systemd[1]: Started cri-containerd-09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6.scope - libcontainer container 09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6.
Jan 15 14:04:36.995948 systemd[1]: cri-containerd-09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6.scope: Deactivated successfully.
Jan 15 14:04:36.998870 containerd[1507]: time="2025-01-15T14:04:36.998536379Z" level=info msg="StartContainer for \"09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6\" returns successfully"
Jan 15 14:04:37.040268 containerd[1507]: time="2025-01-15T14:04:37.039960103Z" level=info msg="shim disconnected" id=09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6 namespace=k8s.io
Jan 15 14:04:37.040268 containerd[1507]: time="2025-01-15T14:04:37.040052025Z" level=warning msg="cleaning up after shim disconnected" id=09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6 namespace=k8s.io
Jan 15 14:04:37.040268 containerd[1507]: time="2025-01-15T14:04:37.040069269Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 14:04:37.203341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6-rootfs.mount: Deactivated successfully.
Jan 15 14:04:37.887291 containerd[1507]: time="2025-01-15T14:04:37.886911342Z" level=info msg="CreateContainer within sandbox \"526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 15 14:04:37.914050 containerd[1507]: time="2025-01-15T14:04:37.913891944Z" level=info msg="CreateContainer within sandbox \"526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81\""
Jan 15 14:04:37.916648 containerd[1507]: time="2025-01-15T14:04:37.915193499Z" level=info msg="StartContainer for \"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81\""
Jan 15 14:04:37.968987 systemd[1]: Started cri-containerd-4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81.scope - libcontainer container 4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81.
Jan 15 14:04:38.019388 containerd[1507]: time="2025-01-15T14:04:38.019049019Z" level=info msg="StartContainer for \"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81\" returns successfully"
Jan 15 14:04:38.236163 kubelet[2748]: I0115 14:04:38.234367 2748 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 15 14:04:38.291337 kubelet[2748]: I0115 14:04:38.291251 2748 topology_manager.go:215] "Topology Admit Handler" podUID="f2ecbb8d-481a-4707-943c-c5e8f098b8b2" podNamespace="kube-system" podName="coredns-76f75df574-q82xb"
Jan 15 14:04:38.300392 kubelet[2748]: I0115 14:04:38.299507 2748 topology_manager.go:215] "Topology Admit Handler" podUID="9603055e-38cb-4a42-b0c7-907fd5fc68fa" podNamespace="kube-system" podName="coredns-76f75df574-f5bwt"
Jan 15 14:04:38.317561 systemd[1]: Created slice kubepods-burstable-podf2ecbb8d_481a_4707_943c_c5e8f098b8b2.slice - libcontainer container kubepods-burstable-podf2ecbb8d_481a_4707_943c_c5e8f098b8b2.slice.
Jan 15 14:04:38.329300 systemd[1]: Created slice kubepods-burstable-pod9603055e_38cb_4a42_b0c7_907fd5fc68fa.slice - libcontainer container kubepods-burstable-pod9603055e_38cb_4a42_b0c7_907fd5fc68fa.slice.
Jan 15 14:04:38.446733 kubelet[2748]: I0115 14:04:38.446598 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr5xt\" (UniqueName: \"kubernetes.io/projected/f2ecbb8d-481a-4707-943c-c5e8f098b8b2-kube-api-access-sr5xt\") pod \"coredns-76f75df574-q82xb\" (UID: \"f2ecbb8d-481a-4707-943c-c5e8f098b8b2\") " pod="kube-system/coredns-76f75df574-q82xb"
Jan 15 14:04:38.446733 kubelet[2748]: I0115 14:04:38.446686 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9603055e-38cb-4a42-b0c7-907fd5fc68fa-config-volume\") pod \"coredns-76f75df574-f5bwt\" (UID: \"9603055e-38cb-4a42-b0c7-907fd5fc68fa\") " pod="kube-system/coredns-76f75df574-f5bwt"
Jan 15 14:04:38.447108 kubelet[2748]: I0115 14:04:38.446903 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwtmz\" (UniqueName: \"kubernetes.io/projected/9603055e-38cb-4a42-b0c7-907fd5fc68fa-kube-api-access-cwtmz\") pod \"coredns-76f75df574-f5bwt\" (UID: \"9603055e-38cb-4a42-b0c7-907fd5fc68fa\") " pod="kube-system/coredns-76f75df574-f5bwt"
Jan 15 14:04:38.447108 kubelet[2748]: I0115 14:04:38.446954 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2ecbb8d-481a-4707-943c-c5e8f098b8b2-config-volume\") pod \"coredns-76f75df574-q82xb\" (UID: \"f2ecbb8d-481a-4707-943c-c5e8f098b8b2\") " pod="kube-system/coredns-76f75df574-q82xb"
Jan 15 14:04:38.627880 containerd[1507]: time="2025-01-15T14:04:38.627704942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q82xb,Uid:f2ecbb8d-481a-4707-943c-c5e8f098b8b2,Namespace:kube-system,Attempt:0,}"
Jan 15 14:04:38.644744 containerd[1507]: time="2025-01-15T14:04:38.644657488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f5bwt,Uid:9603055e-38cb-4a42-b0c7-907fd5fc68fa,Namespace:kube-system,Attempt:0,}"
Jan 15 14:04:38.926057 kubelet[2748]: I0115 14:04:38.925035 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dfqtz" podStartSLOduration=6.161052501 podStartE2EDuration="19.92482249s" podCreationTimestamp="2025-01-15 14:04:19 +0000 UTC" firstStartedPulling="2025-01-15 14:04:20.360861395 +0000 UTC m=+15.927873702" lastFinishedPulling="2025-01-15 14:04:34.124631375 +0000 UTC m=+29.691643691" observedRunningTime="2025-01-15 14:04:38.923599334 +0000 UTC m=+34.490611662" watchObservedRunningTime="2025-01-15 14:04:38.92482249 +0000 UTC m=+34.491834828"
Jan 15 14:04:47.138337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174917433.mount: Deactivated successfully.
Jan 15 14:04:55.193646 containerd[1507]: time="2025-01-15T14:04:55.193136327Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:04:55.198094 containerd[1507]: time="2025-01-15T14:04:55.193678256Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907221"
Jan 15 14:04:55.200137 containerd[1507]: time="2025-01-15T14:04:55.199994654Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 14:04:55.203180 containerd[1507]: time="2025-01-15T14:04:55.203046450Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 21.076282723s"
Jan 15 14:04:55.203180 containerd[1507]: time="2025-01-15T14:04:55.203124109Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 15 14:04:55.212179 containerd[1507]: time="2025-01-15T14:04:55.212139820Z" level=info msg="CreateContainer within sandbox \"0042d260c8990afd772dde167be4095c2e39b9e3b51708f9c2a9e81a85a179be\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 15 14:04:55.244308 containerd[1507]: time="2025-01-15T14:04:55.244199307Z" level=info msg="CreateContainer within sandbox \"0042d260c8990afd772dde167be4095c2e39b9e3b51708f9c2a9e81a85a179be\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c\""
Jan 15 14:04:55.245489 containerd[1507]: time="2025-01-15T14:04:55.245435609Z" level=info msg="StartContainer for \"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c\""
Jan 15 14:04:55.307307 systemd[1]: run-containerd-runc-k8s.io-d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c-runc.lEzbfX.mount: Deactivated successfully.
Jan 15 14:04:55.319069 systemd[1]: Started cri-containerd-d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c.scope - libcontainer container d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c.
Jan 15 14:04:55.369328 containerd[1507]: time="2025-01-15T14:04:55.369260764Z" level=info msg="StartContainer for \"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c\" returns successfully" Jan 15 14:04:58.809547 systemd-networkd[1423]: cilium_host: Link UP Jan 15 14:04:58.811881 systemd-networkd[1423]: cilium_net: Link UP Jan 15 14:04:58.812297 systemd-networkd[1423]: cilium_net: Gained carrier Jan 15 14:04:58.812587 systemd-networkd[1423]: cilium_host: Gained carrier Jan 15 14:04:59.007856 systemd-networkd[1423]: cilium_vxlan: Link UP Jan 15 14:04:59.007889 systemd-networkd[1423]: cilium_vxlan: Gained carrier Jan 15 14:04:59.158115 systemd-networkd[1423]: cilium_host: Gained IPv6LL Jan 15 14:04:59.294104 systemd-networkd[1423]: cilium_net: Gained IPv6LL Jan 15 14:04:59.674086 kernel: NET: Registered PF_ALG protocol family Jan 15 14:05:00.808281 systemd-networkd[1423]: lxc_health: Link UP Jan 15 14:05:00.821170 systemd-networkd[1423]: lxc_health: Gained carrier Jan 15 14:05:00.966094 systemd-networkd[1423]: cilium_vxlan: Gained IPv6LL Jan 15 14:05:01.375601 systemd-networkd[1423]: lxc59eff5aafa47: Link UP Jan 15 14:05:01.390052 kernel: eth0: renamed from tmp45f69 Jan 15 14:05:01.472485 kernel: eth0: renamed from tmp4b576 Jan 15 14:05:01.480440 systemd-networkd[1423]: lxc59eff5aafa47: Gained carrier Jan 15 14:05:01.480742 systemd-networkd[1423]: lxcde7b7613ab65: Link UP Jan 15 14:05:01.485413 systemd-networkd[1423]: lxcde7b7613ab65: Gained carrier Jan 15 14:05:02.256976 kubelet[2748]: I0115 14:05:02.256799 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-4nxdd" podStartSLOduration=7.665741098 podStartE2EDuration="42.216352652s" podCreationTimestamp="2025-01-15 14:04:20 +0000 UTC" firstStartedPulling="2025-01-15 14:04:20.653945914 +0000 UTC m=+16.220958216" lastFinishedPulling="2025-01-15 14:04:55.204557453 +0000 UTC m=+50.771569770" observedRunningTime="2025-01-15 14:04:56.042123427 
+0000 UTC m=+51.609135743" watchObservedRunningTime="2025-01-15 14:05:02.216352652 +0000 UTC m=+57.783364961" Jan 15 14:05:02.502025 systemd-networkd[1423]: lxc_health: Gained IPv6LL Jan 15 14:05:02.951105 systemd-networkd[1423]: lxcde7b7613ab65: Gained IPv6LL Jan 15 14:05:03.462078 systemd-networkd[1423]: lxc59eff5aafa47: Gained IPv6LL Jan 15 14:05:07.296349 containerd[1507]: time="2025-01-15T14:05:07.293252015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:05:07.296349 containerd[1507]: time="2025-01-15T14:05:07.295964724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:05:07.296349 containerd[1507]: time="2025-01-15T14:05:07.296012802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:05:07.298700 containerd[1507]: time="2025-01-15T14:05:07.296235645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:05:07.361975 systemd[1]: run-containerd-runc-k8s.io-45f69a0dfd908858fe07c9a1613fa688f6695b03f5128f0ffcd4e44e22960099-runc.XAcpAe.mount: Deactivated successfully. Jan 15 14:05:07.402863 systemd[1]: Started cri-containerd-45f69a0dfd908858fe07c9a1613fa688f6695b03f5128f0ffcd4e44e22960099.scope - libcontainer container 45f69a0dfd908858fe07c9a1613fa688f6695b03f5128f0ffcd4e44e22960099. Jan 15 14:05:07.453623 containerd[1507]: time="2025-01-15T14:05:07.452296317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:05:07.453623 containerd[1507]: time="2025-01-15T14:05:07.452389672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:05:07.453623 containerd[1507]: time="2025-01-15T14:05:07.452413422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:05:07.453623 containerd[1507]: time="2025-01-15T14:05:07.452835564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:05:07.504923 systemd[1]: Started cri-containerd-4b576de67d5a92e29536fd48019b8be9c912ae13e7ef9566b2c8d7f83a7a6180.scope - libcontainer container 4b576de67d5a92e29536fd48019b8be9c912ae13e7ef9566b2c8d7f83a7a6180. Jan 15 14:05:07.589237 containerd[1507]: time="2025-01-15T14:05:07.588924303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f5bwt,Uid:9603055e-38cb-4a42-b0c7-907fd5fc68fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"45f69a0dfd908858fe07c9a1613fa688f6695b03f5128f0ffcd4e44e22960099\"" Jan 15 14:05:07.606004 containerd[1507]: time="2025-01-15T14:05:07.605670649Z" level=info msg="CreateContainer within sandbox \"45f69a0dfd908858fe07c9a1613fa688f6695b03f5128f0ffcd4e44e22960099\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 14:05:07.636332 containerd[1507]: time="2025-01-15T14:05:07.635961381Z" level=info msg="CreateContainer within sandbox \"45f69a0dfd908858fe07c9a1613fa688f6695b03f5128f0ffcd4e44e22960099\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba3fdfbffcd49da91aae7affd38af142868aad1024f1c164bc4faa682eedfdc2\"" Jan 15 14:05:07.638269 containerd[1507]: time="2025-01-15T14:05:07.637430402Z" level=info msg="StartContainer for \"ba3fdfbffcd49da91aae7affd38af142868aad1024f1c164bc4faa682eedfdc2\"" Jan 15 14:05:07.658135 containerd[1507]: time="2025-01-15T14:05:07.657966292Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-q82xb,Uid:f2ecbb8d-481a-4707-943c-c5e8f098b8b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b576de67d5a92e29536fd48019b8be9c912ae13e7ef9566b2c8d7f83a7a6180\"" Jan 15 14:05:07.662568 containerd[1507]: time="2025-01-15T14:05:07.662444300Z" level=info msg="CreateContainer within sandbox \"4b576de67d5a92e29536fd48019b8be9c912ae13e7ef9566b2c8d7f83a7a6180\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 14:05:07.680466 containerd[1507]: time="2025-01-15T14:05:07.680323979Z" level=info msg="CreateContainer within sandbox \"4b576de67d5a92e29536fd48019b8be9c912ae13e7ef9566b2c8d7f83a7a6180\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"76c6e4e2b8d0cc907189b71ad635ddba071f592e4a7b3f5cde02aeff3ac36f87\"" Jan 15 14:05:07.683087 containerd[1507]: time="2025-01-15T14:05:07.681583337Z" level=info msg="StartContainer for \"76c6e4e2b8d0cc907189b71ad635ddba071f592e4a7b3f5cde02aeff3ac36f87\"" Jan 15 14:05:07.698557 systemd[1]: Started cri-containerd-ba3fdfbffcd49da91aae7affd38af142868aad1024f1c164bc4faa682eedfdc2.scope - libcontainer container ba3fdfbffcd49da91aae7affd38af142868aad1024f1c164bc4faa682eedfdc2. Jan 15 14:05:07.746261 systemd[1]: Started cri-containerd-76c6e4e2b8d0cc907189b71ad635ddba071f592e4a7b3f5cde02aeff3ac36f87.scope - libcontainer container 76c6e4e2b8d0cc907189b71ad635ddba071f592e4a7b3f5cde02aeff3ac36f87. 
Jan 15 14:05:07.768124 containerd[1507]: time="2025-01-15T14:05:07.767954524Z" level=info msg="StartContainer for \"ba3fdfbffcd49da91aae7affd38af142868aad1024f1c164bc4faa682eedfdc2\" returns successfully"
Jan 15 14:05:07.799642 containerd[1507]: time="2025-01-15T14:05:07.799580939Z" level=info msg="StartContainer for \"76c6e4e2b8d0cc907189b71ad635ddba071f592e4a7b3f5cde02aeff3ac36f87\" returns successfully"
Jan 15 14:05:08.105401 kubelet[2748]: I0115 14:05:08.105316 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-f5bwt" podStartSLOduration=49.105122669 podStartE2EDuration="49.105122669s" podCreationTimestamp="2025-01-15 14:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 14:05:08.059048607 +0000 UTC m=+63.626060930" watchObservedRunningTime="2025-01-15 14:05:08.105122669 +0000 UTC m=+63.672134991"
Jan 15 14:05:08.107283 kubelet[2748]: I0115 14:05:08.105883 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-q82xb" podStartSLOduration=48.105856237 podStartE2EDuration="48.105856237s" podCreationTimestamp="2025-01-15 14:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 14:05:08.10146661 +0000 UTC m=+63.668478920" watchObservedRunningTime="2025-01-15 14:05:08.105856237 +0000 UTC m=+63.672868553"
Jan 15 14:05:16.849245 systemd[1]: Started sshd@9-10.230.66.178:22-147.75.109.163:42878.service - OpenSSH per-connection server daemon (147.75.109.163:42878).
Jan 15 14:05:17.787267 sshd[4128]: Accepted publickey for core from 147.75.109.163 port 42878 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:05:17.791159 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:05:17.801193 systemd-logind[1489]: New session 12 of user core.
Jan 15 14:05:17.806023 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 15 14:05:18.957736 sshd[4128]: pam_unix(sshd:session): session closed for user core
Jan 15 14:05:18.964797 systemd[1]: sshd@9-10.230.66.178:22-147.75.109.163:42878.service: Deactivated successfully.
Jan 15 14:05:18.967841 systemd[1]: session-12.scope: Deactivated successfully.
Jan 15 14:05:18.970685 systemd-logind[1489]: Session 12 logged out. Waiting for processes to exit.
Jan 15 14:05:18.972694 systemd-logind[1489]: Removed session 12.
Jan 15 14:05:24.125326 systemd[1]: Started sshd@10-10.230.66.178:22-147.75.109.163:34602.service - OpenSSH per-connection server daemon (147.75.109.163:34602).
Jan 15 14:05:25.023553 sshd[4146]: Accepted publickey for core from 147.75.109.163 port 34602 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:05:25.026350 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:05:25.035376 systemd-logind[1489]: New session 13 of user core.
Jan 15 14:05:25.042002 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 15 14:05:25.775503 sshd[4146]: pam_unix(sshd:session): session closed for user core
Jan 15 14:05:25.782006 systemd[1]: sshd@10-10.230.66.178:22-147.75.109.163:34602.service: Deactivated successfully.
Jan 15 14:05:25.786198 systemd[1]: session-13.scope: Deactivated successfully.
Jan 15 14:05:25.789291 systemd-logind[1489]: Session 13 logged out. Waiting for processes to exit.
Jan 15 14:05:25.791430 systemd-logind[1489]: Removed session 13.
Jan 15 14:05:30.940203 systemd[1]: Started sshd@11-10.230.66.178:22-147.75.109.163:34254.service - OpenSSH per-connection server daemon (147.75.109.163:34254).
Jan 15 14:05:31.844888 sshd[4160]: Accepted publickey for core from 147.75.109.163 port 34254 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:05:31.847254 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:05:31.855930 systemd-logind[1489]: New session 14 of user core.
Jan 15 14:05:31.865022 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 15 14:05:32.580738 sshd[4160]: pam_unix(sshd:session): session closed for user core
Jan 15 14:05:32.589362 systemd[1]: sshd@11-10.230.66.178:22-147.75.109.163:34254.service: Deactivated successfully.
Jan 15 14:05:32.592702 systemd[1]: session-14.scope: Deactivated successfully.
Jan 15 14:05:32.594204 systemd-logind[1489]: Session 14 logged out. Waiting for processes to exit.
Jan 15 14:05:32.596516 systemd-logind[1489]: Removed session 14.
Jan 15 14:05:37.749304 systemd[1]: Started sshd@12-10.230.66.178:22-147.75.109.163:54170.service - OpenSSH per-connection server daemon (147.75.109.163:54170).
Jan 15 14:05:38.654679 sshd[4175]: Accepted publickey for core from 147.75.109.163 port 54170 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:05:38.657417 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:05:38.667028 systemd-logind[1489]: New session 15 of user core.
Jan 15 14:05:38.675623 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 15 14:05:39.386178 sshd[4175]: pam_unix(sshd:session): session closed for user core
Jan 15 14:05:39.390035 systemd-logind[1489]: Session 15 logged out. Waiting for processes to exit.
Jan 15 14:05:39.391477 systemd[1]: sshd@12-10.230.66.178:22-147.75.109.163:54170.service: Deactivated successfully.
Jan 15 14:05:39.394564 systemd[1]: session-15.scope: Deactivated successfully.
Jan 15 14:05:39.396699 systemd-logind[1489]: Removed session 15.
Jan 15 14:05:39.544567 systemd[1]: Started sshd@13-10.230.66.178:22-147.75.109.163:54172.service - OpenSSH per-connection server daemon (147.75.109.163:54172).
Jan 15 14:05:40.445407 sshd[4189]: Accepted publickey for core from 147.75.109.163 port 54172 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:05:40.448141 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:05:40.461889 systemd-logind[1489]: New session 16 of user core.
Jan 15 14:05:40.465049 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 15 14:05:41.238166 sshd[4189]: pam_unix(sshd:session): session closed for user core
Jan 15 14:05:41.242855 systemd[1]: sshd@13-10.230.66.178:22-147.75.109.163:54172.service: Deactivated successfully.
Jan 15 14:05:41.246836 systemd[1]: session-16.scope: Deactivated successfully.
Jan 15 14:05:41.249246 systemd-logind[1489]: Session 16 logged out. Waiting for processes to exit.
Jan 15 14:05:41.250793 systemd-logind[1489]: Removed session 16.
Jan 15 14:05:41.400835 systemd[1]: Started sshd@14-10.230.66.178:22-147.75.109.163:54184.service - OpenSSH per-connection server daemon (147.75.109.163:54184).
Jan 15 14:05:42.300635 sshd[4200]: Accepted publickey for core from 147.75.109.163 port 54184 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:05:42.303085 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:05:42.311383 systemd-logind[1489]: New session 17 of user core.
Jan 15 14:05:42.319805 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 15 14:05:43.020107 sshd[4200]: pam_unix(sshd:session): session closed for user core
Jan 15 14:05:43.027500 systemd[1]: sshd@14-10.230.66.178:22-147.75.109.163:54184.service: Deactivated successfully.
Jan 15 14:05:43.030535 systemd[1]: session-17.scope: Deactivated successfully.
Jan 15 14:05:43.031605 systemd-logind[1489]: Session 17 logged out. Waiting for processes to exit.
Jan 15 14:05:43.033825 systemd-logind[1489]: Removed session 17.
Jan 15 14:05:48.180325 systemd[1]: Started sshd@15-10.230.66.178:22-147.75.109.163:36568.service - OpenSSH per-connection server daemon (147.75.109.163:36568).
Jan 15 14:05:49.075571 sshd[4214]: Accepted publickey for core from 147.75.109.163 port 36568 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:05:49.077863 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:05:49.085702 systemd-logind[1489]: New session 18 of user core.
Jan 15 14:05:49.095978 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 15 14:05:49.789877 sshd[4214]: pam_unix(sshd:session): session closed for user core
Jan 15 14:05:49.795358 systemd[1]: sshd@15-10.230.66.178:22-147.75.109.163:36568.service: Deactivated successfully.
Jan 15 14:05:49.796207 systemd-logind[1489]: Session 18 logged out. Waiting for processes to exit.
Jan 15 14:05:49.800979 systemd[1]: session-18.scope: Deactivated successfully.
Jan 15 14:05:49.803376 systemd-logind[1489]: Removed session 18.
Jan 15 14:05:54.950199 systemd[1]: Started sshd@16-10.230.66.178:22-147.75.109.163:36578.service - OpenSSH per-connection server daemon (147.75.109.163:36578).
Jan 15 14:05:55.845313 sshd[4229]: Accepted publickey for core from 147.75.109.163 port 36578 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:05:55.847580 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:05:55.857614 systemd-logind[1489]: New session 19 of user core.
Jan 15 14:05:55.867096 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 15 14:05:56.566949 sshd[4229]: pam_unix(sshd:session): session closed for user core
Jan 15 14:05:56.571690 systemd-logind[1489]: Session 19 logged out. Waiting for processes to exit.
Jan 15 14:05:56.573710 systemd[1]: sshd@16-10.230.66.178:22-147.75.109.163:36578.service: Deactivated successfully.
Jan 15 14:05:56.576066 systemd[1]: session-19.scope: Deactivated successfully.
Jan 15 14:05:56.577693 systemd-logind[1489]: Removed session 19.
Jan 15 14:06:01.730411 systemd[1]: Started sshd@17-10.230.66.178:22-147.75.109.163:50384.service - OpenSSH per-connection server daemon (147.75.109.163:50384).
Jan 15 14:06:02.634548 sshd[4243]: Accepted publickey for core from 147.75.109.163 port 50384 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:06:02.637039 sshd[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:06:02.647263 systemd-logind[1489]: New session 20 of user core.
Jan 15 14:06:02.657138 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 15 14:06:03.349282 sshd[4243]: pam_unix(sshd:session): session closed for user core
Jan 15 14:06:03.355051 systemd[1]: sshd@17-10.230.66.178:22-147.75.109.163:50384.service: Deactivated successfully.
Jan 15 14:06:03.358979 systemd[1]: session-20.scope: Deactivated successfully.
Jan 15 14:06:03.361129 systemd-logind[1489]: Session 20 logged out. Waiting for processes to exit.
Jan 15 14:06:03.363072 systemd-logind[1489]: Removed session 20.
Jan 15 14:06:03.507336 systemd[1]: Started sshd@18-10.230.66.178:22-147.75.109.163:50398.service - OpenSSH per-connection server daemon (147.75.109.163:50398).
Jan 15 14:06:04.397277 sshd[4257]: Accepted publickey for core from 147.75.109.163 port 50398 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:06:04.401251 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:06:04.413003 systemd-logind[1489]: New session 21 of user core.
Jan 15 14:06:04.421073 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 15 14:06:05.486550 sshd[4257]: pam_unix(sshd:session): session closed for user core
Jan 15 14:06:05.494963 systemd-logind[1489]: Session 21 logged out. Waiting for processes to exit.
Jan 15 14:06:05.495649 systemd[1]: sshd@18-10.230.66.178:22-147.75.109.163:50398.service: Deactivated successfully.
Jan 15 14:06:05.499740 systemd[1]: session-21.scope: Deactivated successfully.
Jan 15 14:06:05.503160 systemd-logind[1489]: Removed session 21.
Jan 15 14:06:05.648183 systemd[1]: Started sshd@19-10.230.66.178:22-147.75.109.163:50410.service - OpenSSH per-connection server daemon (147.75.109.163:50410).
Jan 15 14:06:06.550574 sshd[4271]: Accepted publickey for core from 147.75.109.163 port 50410 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:06:06.555947 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:06:06.569668 systemd-logind[1489]: New session 22 of user core.
Jan 15 14:06:06.576020 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 15 14:06:09.535929 sshd[4271]: pam_unix(sshd:session): session closed for user core
Jan 15 14:06:09.542131 systemd[1]: sshd@19-10.230.66.178:22-147.75.109.163:50410.service: Deactivated successfully.
Jan 15 14:06:09.547138 systemd[1]: session-22.scope: Deactivated successfully.
Jan 15 14:06:09.551141 systemd-logind[1489]: Session 22 logged out. Waiting for processes to exit.
Jan 15 14:06:09.553923 systemd-logind[1489]: Removed session 22.
Jan 15 14:06:09.702467 systemd[1]: Started sshd@20-10.230.66.178:22-147.75.109.163:41304.service - OpenSSH per-connection server daemon (147.75.109.163:41304).
Jan 15 14:06:10.596898 sshd[4289]: Accepted publickey for core from 147.75.109.163 port 41304 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:06:10.600062 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:06:10.608011 systemd-logind[1489]: New session 23 of user core.
Jan 15 14:06:10.617112 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 15 14:06:11.575132 sshd[4289]: pam_unix(sshd:session): session closed for user core
Jan 15 14:06:11.582741 systemd[1]: sshd@20-10.230.66.178:22-147.75.109.163:41304.service: Deactivated successfully.
Jan 15 14:06:11.586226 systemd[1]: session-23.scope: Deactivated successfully.
Jan 15 14:06:11.590032 systemd-logind[1489]: Session 23 logged out. Waiting for processes to exit.
Jan 15 14:06:11.592908 systemd-logind[1489]: Removed session 23.
Jan 15 14:06:11.736175 systemd[1]: Started sshd@21-10.230.66.178:22-147.75.109.163:41316.service - OpenSSH per-connection server daemon (147.75.109.163:41316).
Jan 15 14:06:12.626579 sshd[4300]: Accepted publickey for core from 147.75.109.163 port 41316 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:06:12.629526 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:06:12.638670 systemd-logind[1489]: New session 24 of user core.
Jan 15 14:06:12.647104 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 15 14:06:13.331097 sshd[4300]: pam_unix(sshd:session): session closed for user core
Jan 15 14:06:13.336456 systemd[1]: sshd@21-10.230.66.178:22-147.75.109.163:41316.service: Deactivated successfully.
Jan 15 14:06:13.339125 systemd[1]: session-24.scope: Deactivated successfully.
Jan 15 14:06:13.342003 systemd-logind[1489]: Session 24 logged out. Waiting for processes to exit.
Jan 15 14:06:13.343319 systemd-logind[1489]: Removed session 24.
Jan 15 14:06:18.495746 systemd[1]: Started sshd@22-10.230.66.178:22-147.75.109.163:41494.service - OpenSSH per-connection server daemon (147.75.109.163:41494).
Jan 15 14:06:19.390352 sshd[4316]: Accepted publickey for core from 147.75.109.163 port 41494 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:06:19.393618 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:06:19.402028 systemd-logind[1489]: New session 25 of user core.
Jan 15 14:06:19.413029 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 15 14:06:20.114271 sshd[4316]: pam_unix(sshd:session): session closed for user core
Jan 15 14:06:20.120111 systemd[1]: sshd@22-10.230.66.178:22-147.75.109.163:41494.service: Deactivated successfully.
Jan 15 14:06:20.124254 systemd[1]: session-25.scope: Deactivated successfully.
Jan 15 14:06:20.125839 systemd-logind[1489]: Session 25 logged out. Waiting for processes to exit.
Jan 15 14:06:20.127234 systemd-logind[1489]: Removed session 25.
Jan 15 14:06:25.281207 systemd[1]: Started sshd@23-10.230.66.178:22-147.75.109.163:41498.service - OpenSSH per-connection server daemon (147.75.109.163:41498).
Jan 15 14:06:26.181690 sshd[4331]: Accepted publickey for core from 147.75.109.163 port 41498 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:06:26.184719 sshd[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:06:26.194897 systemd-logind[1489]: New session 26 of user core.
Jan 15 14:06:26.200035 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 15 14:06:26.892302 sshd[4331]: pam_unix(sshd:session): session closed for user core
Jan 15 14:06:26.900071 systemd[1]: sshd@23-10.230.66.178:22-147.75.109.163:41498.service: Deactivated successfully.
Jan 15 14:06:26.903726 systemd[1]: session-26.scope: Deactivated successfully.
Jan 15 14:06:26.905846 systemd-logind[1489]: Session 26 logged out. Waiting for processes to exit.
Jan 15 14:06:26.907598 systemd-logind[1489]: Removed session 26.
Jan 15 14:06:32.059318 systemd[1]: Started sshd@24-10.230.66.178:22-147.75.109.163:45842.service - OpenSSH per-connection server daemon (147.75.109.163:45842).
Jan 15 14:06:32.958709 sshd[4344]: Accepted publickey for core from 147.75.109.163 port 45842 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:06:32.961076 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:06:32.968525 systemd-logind[1489]: New session 27 of user core.
Jan 15 14:06:32.975980 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 15 14:06:33.661553 sshd[4344]: pam_unix(sshd:session): session closed for user core
Jan 15 14:06:33.668800 systemd[1]: sshd@24-10.230.66.178:22-147.75.109.163:45842.service: Deactivated successfully.
Jan 15 14:06:33.673007 systemd[1]: session-27.scope: Deactivated successfully.
Jan 15 14:06:33.676219 systemd-logind[1489]: Session 27 logged out. Waiting for processes to exit.
Jan 15 14:06:33.678541 systemd-logind[1489]: Removed session 27.
Jan 15 14:06:33.827221 systemd[1]: Started sshd@25-10.230.66.178:22-147.75.109.163:45844.service - OpenSSH per-connection server daemon (147.75.109.163:45844).
Jan 15 14:06:34.715082 sshd[4357]: Accepted publickey for core from 147.75.109.163 port 45844 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:06:34.717579 sshd[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:06:34.724689 systemd-logind[1489]: New session 28 of user core.
Jan 15 14:06:34.736010 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 15 14:06:36.720906 containerd[1507]: time="2025-01-15T14:06:36.719966867Z" level=info msg="StopContainer for \"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c\" with timeout 30 (s)"
Jan 15 14:06:36.726789 containerd[1507]: time="2025-01-15T14:06:36.724726522Z" level=info msg="Stop container \"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c\" with signal terminated"
Jan 15 14:06:36.789030 systemd[1]: cri-containerd-d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c.scope: Deactivated successfully.
Jan 15 14:06:36.861645 containerd[1507]: time="2025-01-15T14:06:36.860118635Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 15 14:06:36.860478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c-rootfs.mount: Deactivated successfully.
Jan 15 14:06:36.865343 containerd[1507]: time="2025-01-15T14:06:36.865054140Z" level=info msg="StopContainer for \"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81\" with timeout 2 (s)"
Jan 15 14:06:36.866111 containerd[1507]: time="2025-01-15T14:06:36.866064894Z" level=info msg="Stop container \"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81\" with signal terminated"
Jan 15 14:06:36.874412 containerd[1507]: time="2025-01-15T14:06:36.874149844Z" level=info msg="shim disconnected" id=d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c namespace=k8s.io
Jan 15 14:06:36.875130 containerd[1507]: time="2025-01-15T14:06:36.874895567Z" level=warning msg="cleaning up after shim disconnected" id=d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c namespace=k8s.io
Jan 15 14:06:36.875130 containerd[1507]: time="2025-01-15T14:06:36.874933130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 14:06:36.891119 systemd-networkd[1423]: lxc_health: Link DOWN
Jan 15 14:06:36.891145 systemd-networkd[1423]: lxc_health: Lost carrier
Jan 15 14:06:36.921462 systemd[1]: cri-containerd-4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81.scope: Deactivated successfully.
Jan 15 14:06:36.922050 systemd[1]: cri-containerd-4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81.scope: Consumed 10.805s CPU time.
Jan 15 14:06:36.955948 containerd[1507]: time="2025-01-15T14:06:36.955489615Z" level=info msg="StopContainer for \"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c\" returns successfully"
Jan 15 14:06:36.958557 containerd[1507]: time="2025-01-15T14:06:36.958505005Z" level=info msg="StopPodSandbox for \"0042d260c8990afd772dde167be4095c2e39b9e3b51708f9c2a9e81a85a179be\""
Jan 15 14:06:36.958861 containerd[1507]: time="2025-01-15T14:06:36.958747040Z" level=info msg="Container to stop \"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 15 14:06:36.963480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0042d260c8990afd772dde167be4095c2e39b9e3b51708f9c2a9e81a85a179be-shm.mount: Deactivated successfully.
Jan 15 14:06:36.980270 systemd[1]: cri-containerd-0042d260c8990afd772dde167be4095c2e39b9e3b51708f9c2a9e81a85a179be.scope: Deactivated successfully.
Jan 15 14:06:36.989467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81-rootfs.mount: Deactivated successfully.
Jan 15 14:06:37.002851 containerd[1507]: time="2025-01-15T14:06:37.002436070Z" level=info msg="shim disconnected" id=4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81 namespace=k8s.io
Jan 15 14:06:37.002851 containerd[1507]: time="2025-01-15T14:06:37.002534195Z" level=warning msg="cleaning up after shim disconnected" id=4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81 namespace=k8s.io
Jan 15 14:06:37.002851 containerd[1507]: time="2025-01-15T14:06:37.002588514Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 14:06:37.031336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0042d260c8990afd772dde167be4095c2e39b9e3b51708f9c2a9e81a85a179be-rootfs.mount: Deactivated successfully.
Jan 15 14:06:37.037657 containerd[1507]: time="2025-01-15T14:06:37.037305998Z" level=info msg="shim disconnected" id=0042d260c8990afd772dde167be4095c2e39b9e3b51708f9c2a9e81a85a179be namespace=k8s.io
Jan 15 14:06:37.037657 containerd[1507]: time="2025-01-15T14:06:37.037398514Z" level=warning msg="cleaning up after shim disconnected" id=0042d260c8990afd772dde167be4095c2e39b9e3b51708f9c2a9e81a85a179be namespace=k8s.io
Jan 15 14:06:37.037657 containerd[1507]: time="2025-01-15T14:06:37.037415451Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 14:06:37.052564 containerd[1507]: time="2025-01-15T14:06:37.052064235Z" level=info msg="StopContainer for \"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81\" returns successfully"
Jan 15 14:06:37.054340 containerd[1507]: time="2025-01-15T14:06:37.053942585Z" level=info msg="StopPodSandbox for \"526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86\""
Jan 15 14:06:37.054340 containerd[1507]: time="2025-01-15T14:06:37.054016664Z" level=info msg="Container to stop \"ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 15 14:06:37.054340 containerd[1507]: time="2025-01-15T14:06:37.054053511Z" level=info msg="Container to stop \"332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 15 14:06:37.054340 containerd[1507]: time="2025-01-15T14:06:37.054080058Z" level=info msg="Container to stop \"09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 15 14:06:37.054340 containerd[1507]: time="2025-01-15T14:06:37.054097168Z" level=info msg="Container to stop \"05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 15 14:06:37.054340 containerd[1507]: time="2025-01-15T14:06:37.054113304Z" level=info msg="Container to stop \"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 15 14:06:37.067720 systemd[1]: cri-containerd-526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86.scope: Deactivated successfully.
Jan 15 14:06:37.094200 containerd[1507]: time="2025-01-15T14:06:37.093845577Z" level=info msg="TearDown network for sandbox \"0042d260c8990afd772dde167be4095c2e39b9e3b51708f9c2a9e81a85a179be\" successfully"
Jan 15 14:06:37.094200 containerd[1507]: time="2025-01-15T14:06:37.093911277Z" level=info msg="StopPodSandbox for \"0042d260c8990afd772dde167be4095c2e39b9e3b51708f9c2a9e81a85a179be\" returns successfully"
Jan 15 14:06:37.118184 containerd[1507]: time="2025-01-15T14:06:37.117823200Z" level=info msg="shim disconnected" id=526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86 namespace=k8s.io
Jan 15 14:06:37.118184 containerd[1507]: time="2025-01-15T14:06:37.118025580Z" level=warning msg="cleaning up after shim disconnected" id=526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86 namespace=k8s.io
Jan 15 14:06:37.118184 containerd[1507]: time="2025-01-15T14:06:37.118044273Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 14:06:37.151412 containerd[1507]: time="2025-01-15T14:06:37.151252468Z" level=info msg="TearDown network for sandbox \"526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86\" successfully"
Jan 15 14:06:37.151412 containerd[1507]: time="2025-01-15T14:06:37.151326717Z" level=info msg="StopPodSandbox for \"526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86\" returns successfully"
Jan 15 14:06:37.246965 kubelet[2748]: I0115 14:06:37.243947 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9eb02231-935c-4c3f-b305-68f8681090ff-cilium-config-path\") pod \"9eb02231-935c-4c3f-b305-68f8681090ff\" (UID: \"9eb02231-935c-4c3f-b305-68f8681090ff\") "
Jan 15 14:06:37.246965 kubelet[2748]: I0115 14:06:37.245848 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzm55\" (UniqueName: \"kubernetes.io/projected/9eb02231-935c-4c3f-b305-68f8681090ff-kube-api-access-nzm55\") pod \"9eb02231-935c-4c3f-b305-68f8681090ff\" (UID: \"9eb02231-935c-4c3f-b305-68f8681090ff\") "
Jan 15 14:06:37.256409 kubelet[2748]: I0115 14:06:37.256249 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb02231-935c-4c3f-b305-68f8681090ff-kube-api-access-nzm55" (OuterVolumeSpecName: "kube-api-access-nzm55") pod "9eb02231-935c-4c3f-b305-68f8681090ff" (UID: "9eb02231-935c-4c3f-b305-68f8681090ff"). InnerVolumeSpecName "kube-api-access-nzm55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 15 14:06:37.256989 kubelet[2748]: I0115 14:06:37.253024 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb02231-935c-4c3f-b305-68f8681090ff-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9eb02231-935c-4c3f-b305-68f8681090ff" (UID: "9eb02231-935c-4c3f-b305-68f8681090ff"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 15 14:06:37.313417 kubelet[2748]: I0115 14:06:37.313203 2748 scope.go:117] "RemoveContainer" containerID="d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c"
Jan 15 14:06:37.322939 containerd[1507]: time="2025-01-15T14:06:37.322874911Z" level=info msg="RemoveContainer for \"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c\""
Jan 15 14:06:37.326288 systemd[1]: Removed slice kubepods-besteffort-pod9eb02231_935c_4c3f_b305_68f8681090ff.slice - libcontainer container kubepods-besteffort-pod9eb02231_935c_4c3f_b305_68f8681090ff.slice.
Jan 15 14:06:37.336194 containerd[1507]: time="2025-01-15T14:06:37.335961561Z" level=info msg="RemoveContainer for \"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c\" returns successfully"
Jan 15 14:06:37.337041 kubelet[2748]: I0115 14:06:37.336713 2748 scope.go:117] "RemoveContainer" containerID="d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c"
Jan 15 14:06:37.348332 kubelet[2748]: I0115 14:06:37.347030 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68c76c10-3a34-43b0-9a91-d479185f9266-cilium-config-path\") pod \"68c76c10-3a34-43b0-9a91-d479185f9266\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") "
Jan 15 14:06:37.348332 kubelet[2748]: I0115 14:06:37.347090 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-lib-modules\") pod \"68c76c10-3a34-43b0-9a91-d479185f9266\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") "
Jan 15 14:06:37.348332 kubelet[2748]: I0115 14:06:37.347123 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-etc-cni-netd\") pod \"68c76c10-3a34-43b0-9a91-d479185f9266\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") "
Jan 15 14:06:37.348332 kubelet[2748]: I0115 14:06:37.347156 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-hostproc\") pod \"68c76c10-3a34-43b0-9a91-d479185f9266\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") "
Jan 15 14:06:37.348332 kubelet[2748]: I0115 14:06:37.347192 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68c76c10-3a34-43b0-9a91-d479185f9266-clustermesh-secrets\") pod \"68c76c10-3a34-43b0-9a91-d479185f9266\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") "
Jan 15 14:06:37.348332 kubelet[2748]: I0115 14:06:37.347223 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-bpf-maps\") pod \"68c76c10-3a34-43b0-9a91-d479185f9266\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") "
Jan 15 14:06:37.350105 kubelet[2748]: I0115 14:06:37.347250 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-cni-path\") pod \"68c76c10-3a34-43b0-9a91-d479185f9266\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") "
Jan 15 14:06:37.350105 kubelet[2748]: I0115 14:06:37.347283 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn8s4\" (UniqueName: \"kubernetes.io/projected/68c76c10-3a34-43b0-9a91-d479185f9266-kube-api-access-sn8s4\") pod \"68c76c10-3a34-43b0-9a91-d479185f9266\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") "
Jan 15 14:06:37.350105 kubelet[2748]: I0115 14:06:37.347325 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-cilium-cgroup\") pod \"68c76c10-3a34-43b0-9a91-d479185f9266\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") "
Jan 15 14:06:37.350105 kubelet[2748]: I0115 14:06:37.347356 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-cilium-run\") pod \"68c76c10-3a34-43b0-9a91-d479185f9266\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") "
Jan 15 14:06:37.350105 kubelet[2748]: I0115 14:06:37.347386 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-host-proc-sys-kernel\") pod \"68c76c10-3a34-43b0-9a91-d479185f9266\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") "
Jan 15 14:06:37.350105 kubelet[2748]: I0115 14:06:37.347416 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68c76c10-3a34-43b0-9a91-d479185f9266-hubble-tls\") pod \"68c76c10-3a34-43b0-9a91-d479185f9266\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") "
Jan 15 14:06:37.350544 kubelet[2748]: I0115 14:06:37.347444 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-host-proc-sys-net\") pod \"68c76c10-3a34-43b0-9a91-d479185f9266\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") "
Jan 15 14:06:37.350544 kubelet[2748]: I0115 14:06:37.347500 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-xtables-lock\") pod \"68c76c10-3a34-43b0-9a91-d479185f9266\" (UID: \"68c76c10-3a34-43b0-9a91-d479185f9266\") "
Jan 15 14:06:37.350544 kubelet[2748]: I0115 14:06:37.347593 2748 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9eb02231-935c-4c3f-b305-68f8681090ff-cilium-config-path\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\""
Jan 15 14:06:37.350544 kubelet[2748]: I0115 14:06:37.347618 2748 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nzm55\" (UniqueName: \"kubernetes.io/projected/9eb02231-935c-4c3f-b305-68f8681090ff-kube-api-access-nzm55\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\""
Jan 15 14:06:37.350544 kubelet[2748]: I0115 14:06:37.347670 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "68c76c10-3a34-43b0-9a91-d479185f9266" (UID: "68c76c10-3a34-43b0-9a91-d479185f9266"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 15 14:06:37.350544 kubelet[2748]: I0115 14:06:37.347720 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "68c76c10-3a34-43b0-9a91-d479185f9266" (UID: "68c76c10-3a34-43b0-9a91-d479185f9266"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 15 14:06:37.351016 kubelet[2748]: I0115 14:06:37.347750 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "68c76c10-3a34-43b0-9a91-d479185f9266" (UID: "68c76c10-3a34-43b0-9a91-d479185f9266"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 15 14:06:37.351016 kubelet[2748]: I0115 14:06:37.347829 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-hostproc" (OuterVolumeSpecName: "hostproc") pod "68c76c10-3a34-43b0-9a91-d479185f9266" (UID: "68c76c10-3a34-43b0-9a91-d479185f9266"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 15 14:06:37.351016 kubelet[2748]: I0115 14:06:37.348740 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "68c76c10-3a34-43b0-9a91-d479185f9266" (UID: "68c76c10-3a34-43b0-9a91-d479185f9266"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 15 14:06:37.351016 kubelet[2748]: I0115 14:06:37.348823 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "68c76c10-3a34-43b0-9a91-d479185f9266" (UID: "68c76c10-3a34-43b0-9a91-d479185f9266"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 15 14:06:37.351016 kubelet[2748]: I0115 14:06:37.349016 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-cni-path" (OuterVolumeSpecName: "cni-path") pod "68c76c10-3a34-43b0-9a91-d479185f9266" (UID: "68c76c10-3a34-43b0-9a91-d479185f9266"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 15 14:06:37.352984 kubelet[2748]: I0115 14:06:37.352935 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "68c76c10-3a34-43b0-9a91-d479185f9266" (UID: "68c76c10-3a34-43b0-9a91-d479185f9266"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 15 14:06:37.354141 kubelet[2748]: I0115 14:06:37.354107 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "68c76c10-3a34-43b0-9a91-d479185f9266" (UID: "68c76c10-3a34-43b0-9a91-d479185f9266"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 15 14:06:37.356347 kubelet[2748]: I0115 14:06:37.355406 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "68c76c10-3a34-43b0-9a91-d479185f9266" (UID: "68c76c10-3a34-43b0-9a91-d479185f9266"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 15 14:06:37.377946 kubelet[2748]: I0115 14:06:37.377377 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c76c10-3a34-43b0-9a91-d479185f9266-kube-api-access-sn8s4" (OuterVolumeSpecName: "kube-api-access-sn8s4") pod "68c76c10-3a34-43b0-9a91-d479185f9266" (UID: "68c76c10-3a34-43b0-9a91-d479185f9266"). InnerVolumeSpecName "kube-api-access-sn8s4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 15 14:06:37.377946 kubelet[2748]: I0115 14:06:37.377579 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c76c10-3a34-43b0-9a91-d479185f9266-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "68c76c10-3a34-43b0-9a91-d479185f9266" (UID: "68c76c10-3a34-43b0-9a91-d479185f9266"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 15 14:06:37.378948 kubelet[2748]: I0115 14:06:37.377855 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c76c10-3a34-43b0-9a91-d479185f9266-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "68c76c10-3a34-43b0-9a91-d479185f9266" (UID: "68c76c10-3a34-43b0-9a91-d479185f9266"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 15 14:06:37.380723 containerd[1507]: time="2025-01-15T14:06:37.343307889Z" level=error msg="ContainerStatus for \"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c\": not found"
Jan 15 14:06:37.381800 kubelet[2748]: I0115 14:06:37.381039 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68c76c10-3a34-43b0-9a91-d479185f9266-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "68c76c10-3a34-43b0-9a91-d479185f9266" (UID: "68c76c10-3a34-43b0-9a91-d479185f9266"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 15 14:06:37.382878 kubelet[2748]: E0115 14:06:37.382823 2748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c\": not found" containerID="d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c" Jan 15 14:06:37.389677 kubelet[2748]: I0115 14:06:37.389521 2748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c"} err="failed to get container status \"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c\": rpc error: code = NotFound desc = an error occurred when try to find container \"d80610e532730a443544477628f9ce59f825c9b027ceb95a7f5cf3907c76f08c\": not found" Jan 15 14:06:37.389677 kubelet[2748]: I0115 14:06:37.389659 2748 scope.go:117] "RemoveContainer" containerID="4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81" Jan 15 14:06:37.392908 containerd[1507]: time="2025-01-15T14:06:37.392854959Z" level=info msg="RemoveContainer for \"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81\"" Jan 15 14:06:37.413952 containerd[1507]: time="2025-01-15T14:06:37.413820001Z" level=info msg="RemoveContainer for \"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81\" returns successfully" Jan 15 14:06:37.414701 kubelet[2748]: I0115 14:06:37.414639 2748 scope.go:117] "RemoveContainer" containerID="09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6" Jan 15 14:06:37.416814 containerd[1507]: time="2025-01-15T14:06:37.416684044Z" level=info msg="RemoveContainer for \"09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6\"" Jan 15 14:06:37.421157 containerd[1507]: time="2025-01-15T14:06:37.421091145Z" level=info msg="RemoveContainer for 
\"09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6\" returns successfully" Jan 15 14:06:37.421494 kubelet[2748]: I0115 14:06:37.421366 2748 scope.go:117] "RemoveContainer" containerID="332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c" Jan 15 14:06:37.422996 containerd[1507]: time="2025-01-15T14:06:37.422948845Z" level=info msg="RemoveContainer for \"332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c\"" Jan 15 14:06:37.432662 containerd[1507]: time="2025-01-15T14:06:37.432530356Z" level=info msg="RemoveContainer for \"332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c\" returns successfully" Jan 15 14:06:37.433304 kubelet[2748]: I0115 14:06:37.433001 2748 scope.go:117] "RemoveContainer" containerID="ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0" Jan 15 14:06:37.434422 containerd[1507]: time="2025-01-15T14:06:37.434377170Z" level=info msg="RemoveContainer for \"ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0\"" Jan 15 14:06:37.437504 containerd[1507]: time="2025-01-15T14:06:37.437464845Z" level=info msg="RemoveContainer for \"ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0\" returns successfully" Jan 15 14:06:37.437888 kubelet[2748]: I0115 14:06:37.437780 2748 scope.go:117] "RemoveContainer" containerID="05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4" Jan 15 14:06:37.439386 containerd[1507]: time="2025-01-15T14:06:37.439016103Z" level=info msg="RemoveContainer for \"05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4\"" Jan 15 14:06:37.442587 containerd[1507]: time="2025-01-15T14:06:37.442554794Z" level=info msg="RemoveContainer for \"05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4\" returns successfully" Jan 15 14:06:37.442926 kubelet[2748]: I0115 14:06:37.442903 2748 scope.go:117] "RemoveContainer" containerID="4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81" Jan 15 14:06:37.443497 
containerd[1507]: time="2025-01-15T14:06:37.443402716Z" level=error msg="ContainerStatus for \"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81\": not found" Jan 15 14:06:37.443707 kubelet[2748]: E0115 14:06:37.443607 2748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81\": not found" containerID="4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81" Jan 15 14:06:37.443707 kubelet[2748]: I0115 14:06:37.443674 2748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81"} err="failed to get container status \"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81\": rpc error: code = NotFound desc = an error occurred when try to find container \"4dc8accf08fe323c160fe8a35141f01c5378050ad6faab271496d8befc403b81\": not found" Jan 15 14:06:37.443959 kubelet[2748]: I0115 14:06:37.443716 2748 scope.go:117] "RemoveContainer" containerID="09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6" Jan 15 14:06:37.444650 containerd[1507]: time="2025-01-15T14:06:37.444169399Z" level=error msg="ContainerStatus for \"09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6\": not found" Jan 15 14:06:37.444731 kubelet[2748]: E0115 14:06:37.444474 2748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6\": not found" containerID="09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6" Jan 15 14:06:37.444731 kubelet[2748]: I0115 14:06:37.444553 2748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6"} err="failed to get container status \"09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6\": rpc error: code = NotFound desc = an error occurred when try to find container \"09f43c12f72833892330a94ce7156bc35579b366fe0b00394200940e6fa1ffe6\": not found" Jan 15 14:06:37.444731 kubelet[2748]: I0115 14:06:37.444572 2748 scope.go:117] "RemoveContainer" containerID="332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c" Jan 15 14:06:37.445170 containerd[1507]: time="2025-01-15T14:06:37.445071375Z" level=error msg="ContainerStatus for \"332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c\": not found" Jan 15 14:06:37.445311 kubelet[2748]: E0115 14:06:37.445214 2748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c\": not found" containerID="332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c" Jan 15 14:06:37.445311 kubelet[2748]: I0115 14:06:37.445251 2748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c"} err="failed to get container status \"332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"332fbacc44bf0e9dbea775e5639319e01bcabd970f4704e90fec1c8ec3a41e3c\": not found" Jan 15 14:06:37.445311 kubelet[2748]: I0115 14:06:37.445269 2748 scope.go:117] "RemoveContainer" containerID="ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0" Jan 15 14:06:37.445811 containerd[1507]: time="2025-01-15T14:06:37.445568424Z" level=error msg="ContainerStatus for \"ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0\": not found" Jan 15 14:06:37.445878 kubelet[2748]: E0115 14:06:37.445833 2748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0\": not found" containerID="ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0" Jan 15 14:06:37.445957 kubelet[2748]: I0115 14:06:37.445893 2748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0"} err="failed to get container status \"ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef9853a81d25ba55ca5b4a7ce6446a4f3b13cdd8c956534112444f27f85ee5f0\": not found" Jan 15 14:06:37.445957 kubelet[2748]: I0115 14:06:37.445911 2748 scope.go:117] "RemoveContainer" containerID="05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4" Jan 15 14:06:37.446412 containerd[1507]: time="2025-01-15T14:06:37.446177589Z" level=error msg="ContainerStatus for \"05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4\": not found" Jan 15 14:06:37.446721 kubelet[2748]: E0115 14:06:37.446620 2748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4\": not found" containerID="05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4" Jan 15 14:06:37.446721 kubelet[2748]: I0115 14:06:37.446657 2748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4"} err="failed to get container status \"05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4\": rpc error: code = NotFound desc = an error occurred when try to find container \"05abd4b3fa728588298d39cf09dece9d349a30b5c0ac8c80a1600c5715ccfdf4\": not found" Jan 15 14:06:37.452880 kubelet[2748]: I0115 14:06:37.452525 2748 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-etc-cni-netd\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\"" Jan 15 14:06:37.453127 kubelet[2748]: I0115 14:06:37.452932 2748 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-hostproc\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\"" Jan 15 14:06:37.453127 kubelet[2748]: I0115 14:06:37.452996 2748 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68c76c10-3a34-43b0-9a91-d479185f9266-clustermesh-secrets\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\"" Jan 15 14:06:37.453127 kubelet[2748]: I0115 14:06:37.453016 2748 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-bpf-maps\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\"" Jan 15 14:06:37.453127 kubelet[2748]: I0115 14:06:37.453065 2748 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-cni-path\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\"" Jan 15 14:06:37.453127 kubelet[2748]: I0115 14:06:37.453104 2748 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sn8s4\" (UniqueName: \"kubernetes.io/projected/68c76c10-3a34-43b0-9a91-d479185f9266-kube-api-access-sn8s4\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\"" Jan 15 14:06:37.453425 kubelet[2748]: I0115 14:06:37.453125 2748 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-cilium-cgroup\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\"" Jan 15 14:06:37.453425 kubelet[2748]: I0115 14:06:37.453188 2748 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-cilium-run\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\"" Jan 15 14:06:37.453425 kubelet[2748]: I0115 14:06:37.453204 2748 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68c76c10-3a34-43b0-9a91-d479185f9266-hubble-tls\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\"" Jan 15 14:06:37.453425 kubelet[2748]: I0115 14:06:37.453281 2748 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-host-proc-sys-kernel\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\"" Jan 15 14:06:37.453425 kubelet[2748]: I0115 14:06:37.453303 2748 reconciler_common.go:300] "Volume detached for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-host-proc-sys-net\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\"" Jan 15 14:06:37.453425 kubelet[2748]: I0115 14:06:37.453340 2748 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-xtables-lock\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\"" Jan 15 14:06:37.453425 kubelet[2748]: I0115 14:06:37.453384 2748 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68c76c10-3a34-43b0-9a91-d479185f9266-cilium-config-path\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\"" Jan 15 14:06:37.454111 kubelet[2748]: I0115 14:06:37.453440 2748 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68c76c10-3a34-43b0-9a91-d479185f9266-lib-modules\") on node \"srv-6ftsm.gb1.brightbox.com\" DevicePath \"\"" Jan 15 14:06:37.642950 systemd[1]: Removed slice kubepods-burstable-pod68c76c10_3a34_43b0_9a91_d479185f9266.slice - libcontainer container kubepods-burstable-pod68c76c10_3a34_43b0_9a91_d479185f9266.slice. Jan 15 14:06:37.643157 systemd[1]: kubepods-burstable-pod68c76c10_3a34_43b0_9a91_d479185f9266.slice: Consumed 10.938s CPU time. Jan 15 14:06:37.779118 systemd[1]: var-lib-kubelet-pods-9eb02231\x2d935c\x2d4c3f\x2db305\x2d68f8681090ff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnzm55.mount: Deactivated successfully. Jan 15 14:06:37.779330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86-rootfs.mount: Deactivated successfully. Jan 15 14:06:37.779473 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-526ead205d73847aaca9ca1c5c289d7389c9979165eaa080435aea39904c1a86-shm.mount: Deactivated successfully. 
Jan 15 14:06:37.779599 systemd[1]: var-lib-kubelet-pods-68c76c10\x2d3a34\x2d43b0\x2d9a91\x2dd479185f9266-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsn8s4.mount: Deactivated successfully. Jan 15 14:06:37.779705 systemd[1]: var-lib-kubelet-pods-68c76c10\x2d3a34\x2d43b0\x2d9a91\x2dd479185f9266-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 15 14:06:37.779853 systemd[1]: var-lib-kubelet-pods-68c76c10\x2d3a34\x2d43b0\x2d9a91\x2dd479185f9266-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 15 14:06:38.677961 kubelet[2748]: I0115 14:06:38.677909 2748 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="68c76c10-3a34-43b0-9a91-d479185f9266" path="/var/lib/kubelet/pods/68c76c10-3a34-43b0-9a91-d479185f9266/volumes" Jan 15 14:06:38.679364 kubelet[2748]: I0115 14:06:38.679342 2748 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9eb02231-935c-4c3f-b305-68f8681090ff" path="/var/lib/kubelet/pods/9eb02231-935c-4c3f-b305-68f8681090ff/volumes" Jan 15 14:06:38.710742 sshd[4357]: pam_unix(sshd:session): session closed for user core Jan 15 14:06:38.717256 systemd[1]: sshd@25-10.230.66.178:22-147.75.109.163:45844.service: Deactivated successfully. Jan 15 14:06:38.720137 systemd[1]: session-28.scope: Deactivated successfully. Jan 15 14:06:38.721325 systemd-logind[1489]: Session 28 logged out. Waiting for processes to exit. Jan 15 14:06:38.723540 systemd-logind[1489]: Removed session 28. Jan 15 14:06:38.870210 systemd[1]: Started sshd@26-10.230.66.178:22-147.75.109.163:47262.service - OpenSSH per-connection server daemon (147.75.109.163:47262). 
Jan 15 14:06:39.774242 sshd[4519]: Accepted publickey for core from 147.75.109.163 port 47262 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA Jan 15 14:06:39.779071 sshd[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 14:06:39.790931 systemd-logind[1489]: New session 29 of user core. Jan 15 14:06:39.800981 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 15 14:06:39.950409 kubelet[2748]: E0115 14:06:39.950294 2748 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 15 14:06:40.862371 kubelet[2748]: I0115 14:06:40.862289 2748 topology_manager.go:215] "Topology Admit Handler" podUID="547c4a18-1a01-4568-afc3-cfc7d62c3c64" podNamespace="kube-system" podName="cilium-6d9j6" Jan 15 14:06:40.862592 kubelet[2748]: E0115 14:06:40.862578 2748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68c76c10-3a34-43b0-9a91-d479185f9266" containerName="apply-sysctl-overwrites" Jan 15 14:06:40.862752 kubelet[2748]: E0115 14:06:40.862639 2748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68c76c10-3a34-43b0-9a91-d479185f9266" containerName="mount-bpf-fs" Jan 15 14:06:40.862752 kubelet[2748]: E0115 14:06:40.862657 2748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68c76c10-3a34-43b0-9a91-d479185f9266" containerName="clean-cilium-state" Jan 15 14:06:40.862752 kubelet[2748]: E0115 14:06:40.862671 2748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68c76c10-3a34-43b0-9a91-d479185f9266" containerName="mount-cgroup" Jan 15 14:06:40.862752 kubelet[2748]: E0115 14:06:40.862683 2748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68c76c10-3a34-43b0-9a91-d479185f9266" containerName="cilium-agent" Jan 15 14:06:40.862752 kubelet[2748]: E0115 14:06:40.862701 2748 cpu_manager.go:395] 
"RemoveStaleState: removing container" podUID="9eb02231-935c-4c3f-b305-68f8681090ff" containerName="cilium-operator" Jan 15 14:06:40.868923 kubelet[2748]: I0115 14:06:40.868869 2748 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c76c10-3a34-43b0-9a91-d479185f9266" containerName="cilium-agent" Jan 15 14:06:40.868923 kubelet[2748]: I0115 14:06:40.868928 2748 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb02231-935c-4c3f-b305-68f8681090ff" containerName="cilium-operator" Jan 15 14:06:40.888979 kubelet[2748]: I0115 14:06:40.887786 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/547c4a18-1a01-4568-afc3-cfc7d62c3c64-etc-cni-netd\") pod \"cilium-6d9j6\" (UID: \"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.888979 kubelet[2748]: I0115 14:06:40.887898 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/547c4a18-1a01-4568-afc3-cfc7d62c3c64-cilium-ipsec-secrets\") pod \"cilium-6d9j6\" (UID: \"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.888979 kubelet[2748]: I0115 14:06:40.887971 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/547c4a18-1a01-4568-afc3-cfc7d62c3c64-host-proc-sys-net\") pod \"cilium-6d9j6\" (UID: \"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.888979 kubelet[2748]: I0115 14:06:40.888014 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/547c4a18-1a01-4568-afc3-cfc7d62c3c64-host-proc-sys-kernel\") pod \"cilium-6d9j6\" (UID: 
\"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.888979 kubelet[2748]: I0115 14:06:40.888055 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/547c4a18-1a01-4568-afc3-cfc7d62c3c64-xtables-lock\") pod \"cilium-6d9j6\" (UID: \"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.888979 kubelet[2748]: I0115 14:06:40.888094 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/547c4a18-1a01-4568-afc3-cfc7d62c3c64-hubble-tls\") pod \"cilium-6d9j6\" (UID: \"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.889644 kubelet[2748]: I0115 14:06:40.888148 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/547c4a18-1a01-4568-afc3-cfc7d62c3c64-cilium-config-path\") pod \"cilium-6d9j6\" (UID: \"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.889644 kubelet[2748]: I0115 14:06:40.888194 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/547c4a18-1a01-4568-afc3-cfc7d62c3c64-cilium-run\") pod \"cilium-6d9j6\" (UID: \"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.889644 kubelet[2748]: I0115 14:06:40.888237 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/547c4a18-1a01-4568-afc3-cfc7d62c3c64-clustermesh-secrets\") pod \"cilium-6d9j6\" (UID: \"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.889644 kubelet[2748]: I0115 
14:06:40.888293 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/547c4a18-1a01-4568-afc3-cfc7d62c3c64-bpf-maps\") pod \"cilium-6d9j6\" (UID: \"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.889644 kubelet[2748]: I0115 14:06:40.888359 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/547c4a18-1a01-4568-afc3-cfc7d62c3c64-cilium-cgroup\") pod \"cilium-6d9j6\" (UID: \"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.889644 kubelet[2748]: I0115 14:06:40.888397 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/547c4a18-1a01-4568-afc3-cfc7d62c3c64-hostproc\") pod \"cilium-6d9j6\" (UID: \"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.889949 kubelet[2748]: I0115 14:06:40.888435 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/547c4a18-1a01-4568-afc3-cfc7d62c3c64-cni-path\") pod \"cilium-6d9j6\" (UID: \"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.889949 kubelet[2748]: I0115 14:06:40.888469 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/547c4a18-1a01-4568-afc3-cfc7d62c3c64-lib-modules\") pod \"cilium-6d9j6\" (UID: \"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.889949 kubelet[2748]: I0115 14:06:40.888530 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtkdc\" (UniqueName: 
\"kubernetes.io/projected/547c4a18-1a01-4568-afc3-cfc7d62c3c64-kube-api-access-dtkdc\") pod \"cilium-6d9j6\" (UID: \"547c4a18-1a01-4568-afc3-cfc7d62c3c64\") " pod="kube-system/cilium-6d9j6" Jan 15 14:06:40.908853 systemd[1]: Created slice kubepods-burstable-pod547c4a18_1a01_4568_afc3_cfc7d62c3c64.slice - libcontainer container kubepods-burstable-pod547c4a18_1a01_4568_afc3_cfc7d62c3c64.slice. Jan 15 14:06:40.967571 sshd[4519]: pam_unix(sshd:session): session closed for user core Jan 15 14:06:40.974384 systemd[1]: sshd@26-10.230.66.178:22-147.75.109.163:47262.service: Deactivated successfully. Jan 15 14:06:40.977593 systemd[1]: session-29.scope: Deactivated successfully. Jan 15 14:06:40.979152 systemd-logind[1489]: Session 29 logged out. Waiting for processes to exit. Jan 15 14:06:40.980624 systemd-logind[1489]: Removed session 29. Jan 15 14:06:41.130287 systemd[1]: Started sshd@27-10.230.66.178:22-147.75.109.163:47268.service - OpenSSH per-connection server daemon (147.75.109.163:47268). Jan 15 14:06:41.236171 containerd[1507]: time="2025-01-15T14:06:41.236036039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6d9j6,Uid:547c4a18-1a01-4568-afc3-cfc7d62c3c64,Namespace:kube-system,Attempt:0,}" Jan 15 14:06:41.286521 containerd[1507]: time="2025-01-15T14:06:41.285490637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 14:06:41.286521 containerd[1507]: time="2025-01-15T14:06:41.285747287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 14:06:41.286521 containerd[1507]: time="2025-01-15T14:06:41.285851061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:06:41.287109 containerd[1507]: time="2025-01-15T14:06:41.286487872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 14:06:41.335058 systemd[1]: Started cri-containerd-8461a8915f0c5fa1debe76c8a9a3f6222890a239eaf053b4ef32b65ea4d46ca2.scope - libcontainer container 8461a8915f0c5fa1debe76c8a9a3f6222890a239eaf053b4ef32b65ea4d46ca2. Jan 15 14:06:41.377530 containerd[1507]: time="2025-01-15T14:06:41.377458448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6d9j6,Uid:547c4a18-1a01-4568-afc3-cfc7d62c3c64,Namespace:kube-system,Attempt:0,} returns sandbox id \"8461a8915f0c5fa1debe76c8a9a3f6222890a239eaf053b4ef32b65ea4d46ca2\"" Jan 15 14:06:41.390181 containerd[1507]: time="2025-01-15T14:06:41.389857265Z" level=info msg="CreateContainer within sandbox \"8461a8915f0c5fa1debe76c8a9a3f6222890a239eaf053b4ef32b65ea4d46ca2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 15 14:06:41.405784 containerd[1507]: time="2025-01-15T14:06:41.405641038Z" level=info msg="CreateContainer within sandbox \"8461a8915f0c5fa1debe76c8a9a3f6222890a239eaf053b4ef32b65ea4d46ca2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"285f1988c2ac0bb055b5b0b89d254d53be48871f95e85b0d985868b3949c9472\"" Jan 15 14:06:41.407806 containerd[1507]: time="2025-01-15T14:06:41.406449110Z" level=info msg="StartContainer for \"285f1988c2ac0bb055b5b0b89d254d53be48871f95e85b0d985868b3949c9472\"" Jan 15 14:06:41.445011 systemd[1]: Started cri-containerd-285f1988c2ac0bb055b5b0b89d254d53be48871f95e85b0d985868b3949c9472.scope - libcontainer container 285f1988c2ac0bb055b5b0b89d254d53be48871f95e85b0d985868b3949c9472. 
Jan 15 14:06:41.488217 containerd[1507]: time="2025-01-15T14:06:41.488118582Z" level=info msg="StartContainer for \"285f1988c2ac0bb055b5b0b89d254d53be48871f95e85b0d985868b3949c9472\" returns successfully"
Jan 15 14:06:41.514996 systemd[1]: cri-containerd-285f1988c2ac0bb055b5b0b89d254d53be48871f95e85b0d985868b3949c9472.scope: Deactivated successfully.
Jan 15 14:06:41.569958 containerd[1507]: time="2025-01-15T14:06:41.569573065Z" level=info msg="shim disconnected" id=285f1988c2ac0bb055b5b0b89d254d53be48871f95e85b0d985868b3949c9472 namespace=k8s.io
Jan 15 14:06:41.569958 containerd[1507]: time="2025-01-15T14:06:41.569808711Z" level=warning msg="cleaning up after shim disconnected" id=285f1988c2ac0bb055b5b0b89d254d53be48871f95e85b0d985868b3949c9472 namespace=k8s.io
Jan 15 14:06:41.569958 containerd[1507]: time="2025-01-15T14:06:41.569830827Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 14:06:42.018127 systemd[1]: run-containerd-runc-k8s.io-8461a8915f0c5fa1debe76c8a9a3f6222890a239eaf053b4ef32b65ea4d46ca2-runc.GI07Ns.mount: Deactivated successfully.
Jan 15 14:06:42.034408 sshd[4535]: Accepted publickey for core from 147.75.109.163 port 47268 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:06:42.037637 sshd[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:06:42.044668 systemd-logind[1489]: New session 30 of user core.
Jan 15 14:06:42.050969 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 15 14:06:42.368899 containerd[1507]: time="2025-01-15T14:06:42.368165805Z" level=info msg="CreateContainer within sandbox \"8461a8915f0c5fa1debe76c8a9a3f6222890a239eaf053b4ef32b65ea4d46ca2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 15 14:06:42.393941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156289769.mount: Deactivated successfully.
Jan 15 14:06:42.401123 containerd[1507]: time="2025-01-15T14:06:42.401005449Z" level=info msg="CreateContainer within sandbox \"8461a8915f0c5fa1debe76c8a9a3f6222890a239eaf053b4ef32b65ea4d46ca2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a146c2b3ca67582ac63d5eb2020757217c7878ffe2897db4b7b7dedab7cd3135\""
Jan 15 14:06:42.410074 containerd[1507]: time="2025-01-15T14:06:42.410009977Z" level=info msg="StartContainer for \"a146c2b3ca67582ac63d5eb2020757217c7878ffe2897db4b7b7dedab7cd3135\""
Jan 15 14:06:42.460019 systemd[1]: Started cri-containerd-a146c2b3ca67582ac63d5eb2020757217c7878ffe2897db4b7b7dedab7cd3135.scope - libcontainer container a146c2b3ca67582ac63d5eb2020757217c7878ffe2897db4b7b7dedab7cd3135.
Jan 15 14:06:42.534460 containerd[1507]: time="2025-01-15T14:06:42.534329847Z" level=info msg="StartContainer for \"a146c2b3ca67582ac63d5eb2020757217c7878ffe2897db4b7b7dedab7cd3135\" returns successfully"
Jan 15 14:06:42.557444 systemd[1]: cri-containerd-a146c2b3ca67582ac63d5eb2020757217c7878ffe2897db4b7b7dedab7cd3135.scope: Deactivated successfully.
Jan 15 14:06:42.599783 containerd[1507]: time="2025-01-15T14:06:42.599561358Z" level=info msg="shim disconnected" id=a146c2b3ca67582ac63d5eb2020757217c7878ffe2897db4b7b7dedab7cd3135 namespace=k8s.io
Jan 15 14:06:42.599783 containerd[1507]: time="2025-01-15T14:06:42.599706052Z" level=warning msg="cleaning up after shim disconnected" id=a146c2b3ca67582ac63d5eb2020757217c7878ffe2897db4b7b7dedab7cd3135 namespace=k8s.io
Jan 15 14:06:42.599783 containerd[1507]: time="2025-01-15T14:06:42.599724292Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 14:06:42.668639 sshd[4535]: pam_unix(sshd:session): session closed for user core
Jan 15 14:06:42.679048 systemd[1]: sshd@27-10.230.66.178:22-147.75.109.163:47268.service: Deactivated successfully.
Jan 15 14:06:42.683663 systemd[1]: session-30.scope: Deactivated successfully.
Jan 15 14:06:42.684912 systemd-logind[1489]: Session 30 logged out. Waiting for processes to exit.
Jan 15 14:06:42.686418 systemd-logind[1489]: Removed session 30.
Jan 15 14:06:42.829173 systemd[1]: Started sshd@28-10.230.66.178:22-147.75.109.163:47280.service - OpenSSH per-connection server daemon (147.75.109.163:47280).
Jan 15 14:06:43.017259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a146c2b3ca67582ac63d5eb2020757217c7878ffe2897db4b7b7dedab7cd3135-rootfs.mount: Deactivated successfully.
Jan 15 14:06:43.365314 containerd[1507]: time="2025-01-15T14:06:43.364053741Z" level=info msg="CreateContainer within sandbox \"8461a8915f0c5fa1debe76c8a9a3f6222890a239eaf053b4ef32b65ea4d46ca2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 15 14:06:43.411896 containerd[1507]: time="2025-01-15T14:06:43.411822162Z" level=info msg="CreateContainer within sandbox \"8461a8915f0c5fa1debe76c8a9a3f6222890a239eaf053b4ef32b65ea4d46ca2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3bffaed256d06ac98d049574b98bbfaa1843079a560d9769224762ef52726f54\""
Jan 15 14:06:43.416831 containerd[1507]: time="2025-01-15T14:06:43.414908349Z" level=info msg="StartContainer for \"3bffaed256d06ac98d049574b98bbfaa1843079a560d9769224762ef52726f54\""
Jan 15 14:06:43.470137 systemd[1]: Started cri-containerd-3bffaed256d06ac98d049574b98bbfaa1843079a560d9769224762ef52726f54.scope - libcontainer container 3bffaed256d06ac98d049574b98bbfaa1843079a560d9769224762ef52726f54.
Jan 15 14:06:43.529123 containerd[1507]: time="2025-01-15T14:06:43.528088771Z" level=info msg="StartContainer for \"3bffaed256d06ac98d049574b98bbfaa1843079a560d9769224762ef52726f54\" returns successfully"
Jan 15 14:06:43.543896 systemd[1]: cri-containerd-3bffaed256d06ac98d049574b98bbfaa1843079a560d9769224762ef52726f54.scope: Deactivated successfully.
Jan 15 14:06:43.582756 containerd[1507]: time="2025-01-15T14:06:43.582594490Z" level=info msg="shim disconnected" id=3bffaed256d06ac98d049574b98bbfaa1843079a560d9769224762ef52726f54 namespace=k8s.io
Jan 15 14:06:43.582756 containerd[1507]: time="2025-01-15T14:06:43.582746217Z" level=warning msg="cleaning up after shim disconnected" id=3bffaed256d06ac98d049574b98bbfaa1843079a560d9769224762ef52726f54 namespace=k8s.io
Jan 15 14:06:43.583330 containerd[1507]: time="2025-01-15T14:06:43.582807858Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 14:06:43.602260 containerd[1507]: time="2025-01-15T14:06:43.602147172Z" level=warning msg="cleanup warnings time=\"2025-01-15T14:06:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 15 14:06:43.721701 sshd[4707]: Accepted publickey for core from 147.75.109.163 port 47280 ssh2: RSA SHA256:yhnrVaQ6ubHMaiRHrttc+bh72AQMS/h1RjuSsQ1sZRA
Jan 15 14:06:43.723973 sshd[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 14:06:43.730158 systemd-logind[1489]: New session 31 of user core.
Jan 15 14:06:43.735982 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 15 14:06:44.016720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bffaed256d06ac98d049574b98bbfaa1843079a560d9769224762ef52726f54-rootfs.mount: Deactivated successfully.
Jan 15 14:06:44.375021 containerd[1507]: time="2025-01-15T14:06:44.374680302Z" level=info msg="CreateContainer within sandbox \"8461a8915f0c5fa1debe76c8a9a3f6222890a239eaf053b4ef32b65ea4d46ca2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 15 14:06:44.397001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1496913427.mount: Deactivated successfully.
Jan 15 14:06:44.399171 containerd[1507]: time="2025-01-15T14:06:44.399123597Z" level=info msg="CreateContainer within sandbox \"8461a8915f0c5fa1debe76c8a9a3f6222890a239eaf053b4ef32b65ea4d46ca2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aadd6ffe5816b37223e1c261fd862dbd78aef53fa444194cf336f9be415387e1\""
Jan 15 14:06:44.402305 containerd[1507]: time="2025-01-15T14:06:44.401697017Z" level=info msg="StartContainer for \"aadd6ffe5816b37223e1c261fd862dbd78aef53fa444194cf336f9be415387e1\""
Jan 15 14:06:44.467023 systemd[1]: Started cri-containerd-aadd6ffe5816b37223e1c261fd862dbd78aef53fa444194cf336f9be415387e1.scope - libcontainer container aadd6ffe5816b37223e1c261fd862dbd78aef53fa444194cf336f9be415387e1.
Jan 15 14:06:44.516058 containerd[1507]: time="2025-01-15T14:06:44.516006386Z" level=info msg="StartContainer for \"aadd6ffe5816b37223e1c261fd862dbd78aef53fa444194cf336f9be415387e1\" returns successfully"
Jan 15 14:06:44.516993 systemd[1]: cri-containerd-aadd6ffe5816b37223e1c261fd862dbd78aef53fa444194cf336f9be415387e1.scope: Deactivated successfully.
Jan 15 14:06:44.570541 containerd[1507]: time="2025-01-15T14:06:44.570466115Z" level=info msg="shim disconnected" id=aadd6ffe5816b37223e1c261fd862dbd78aef53fa444194cf336f9be415387e1 namespace=k8s.io
Jan 15 14:06:44.570541 containerd[1507]: time="2025-01-15T14:06:44.570544133Z" level=warning msg="cleaning up after shim disconnected" id=aadd6ffe5816b37223e1c261fd862dbd78aef53fa444194cf336f9be415387e1 namespace=k8s.io
Jan 15 14:06:44.570981 containerd[1507]: time="2025-01-15T14:06:44.570560654Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 14:06:44.952707 kubelet[2748]: E0115 14:06:44.952581 2748 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 15 14:06:45.017220 systemd[1]: run-containerd-runc-k8s.io-aadd6ffe5816b37223e1c261fd862dbd78aef53fa444194cf336f9be415387e1-runc.IYgDVY.mount: Deactivated successfully.
Jan 15 14:06:45.017739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aadd6ffe5816b37223e1c261fd862dbd78aef53fa444194cf336f9be415387e1-rootfs.mount: Deactivated successfully.
Jan 15 14:06:45.383862 containerd[1507]: time="2025-01-15T14:06:45.383367236Z" level=info msg="CreateContainer within sandbox \"8461a8915f0c5fa1debe76c8a9a3f6222890a239eaf053b4ef32b65ea4d46ca2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 15 14:06:45.413127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518238553.mount: Deactivated successfully.
Jan 15 14:06:45.416722 containerd[1507]: time="2025-01-15T14:06:45.416224543Z" level=info msg="CreateContainer within sandbox \"8461a8915f0c5fa1debe76c8a9a3f6222890a239eaf053b4ef32b65ea4d46ca2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a87eba4c2d4eafd5e2485d02d4395677b736fcfaaf1da2a2445fcec71ece3c10\""
Jan 15 14:06:45.419624 containerd[1507]: time="2025-01-15T14:06:45.419057372Z" level=info msg="StartContainer for \"a87eba4c2d4eafd5e2485d02d4395677b736fcfaaf1da2a2445fcec71ece3c10\""
Jan 15 14:06:45.486058 systemd[1]: Started cri-containerd-a87eba4c2d4eafd5e2485d02d4395677b736fcfaaf1da2a2445fcec71ece3c10.scope - libcontainer container a87eba4c2d4eafd5e2485d02d4395677b736fcfaaf1da2a2445fcec71ece3c10.
Jan 15 14:06:45.559700 containerd[1507]: time="2025-01-15T14:06:45.559469307Z" level=info msg="StartContainer for \"a87eba4c2d4eafd5e2485d02d4395677b736fcfaaf1da2a2445fcec71ece3c10\" returns successfully"
Jan 15 14:06:46.017431 systemd[1]: run-containerd-runc-k8s.io-a87eba4c2d4eafd5e2485d02d4395677b736fcfaaf1da2a2445fcec71ece3c10-runc.uQ4v9i.mount: Deactivated successfully.
Jan 15 14:06:46.337983 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 15 14:06:46.455537 kubelet[2748]: I0115 14:06:46.454285 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6d9j6" podStartSLOduration=6.454136143 podStartE2EDuration="6.454136143s" podCreationTimestamp="2025-01-15 14:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 14:06:46.453631051 +0000 UTC m=+162.020643371" watchObservedRunningTime="2025-01-15 14:06:46.454136143 +0000 UTC m=+162.021148458"
Jan 15 14:06:47.640822 kubelet[2748]: I0115 14:06:47.637019 2748 setters.go:568] "Node became not ready" node="srv-6ftsm.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-15T14:06:47Z","lastTransitionTime":"2025-01-15T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 15 14:06:48.828043 systemd[1]: run-containerd-runc-k8s.io-a87eba4c2d4eafd5e2485d02d4395677b736fcfaaf1da2a2445fcec71ece3c10-runc.4Dwt1i.mount: Deactivated successfully.
Jan 15 14:06:48.969848 kubelet[2748]: E0115 14:06:48.969132 2748 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58464->127.0.0.1:46301: write tcp 127.0.0.1:58464->127.0.0.1:46301: write: broken pipe
Jan 15 14:06:50.425375 systemd-networkd[1423]: lxc_health: Link UP
Jan 15 14:06:50.469829 systemd-networkd[1423]: lxc_health: Gained carrier
Jan 15 14:06:51.198087 kubelet[2748]: E0115 14:06:51.198017 2748 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58468->127.0.0.1:46301: write tcp 127.0.0.1:58468->127.0.0.1:46301: write: broken pipe
Jan 15 14:06:51.750594 systemd-networkd[1423]: lxc_health: Gained IPv6LL
Jan 15 14:06:55.906213 sshd[4707]: pam_unix(sshd:session): session closed for user core
Jan 15 14:06:55.925343 systemd[1]: sshd@28-10.230.66.178:22-147.75.109.163:47280.service: Deactivated successfully.
Jan 15 14:06:55.930623 systemd[1]: session-31.scope: Deactivated successfully.
Jan 15 14:06:55.934561 systemd-logind[1489]: Session 31 logged out. Waiting for processes to exit.
Jan 15 14:06:55.939425 systemd-logind[1489]: Removed session 31.