Dec 13 13:56:50.044409 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 13:56:50.044472 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:56:50.044488 kernel: BIOS-provided physical RAM map:
Dec 13 13:56:50.044506 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 13:56:50.044517 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 13:56:50.044528 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 13:56:50.044540 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 13:56:50.044551 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 13:56:50.044562 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 13:56:50.044573 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 13:56:50.044585 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 13:56:50.044596 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 13:56:50.044611 kernel: NX (Execute Disable) protection: active
Dec 13 13:56:50.044623 kernel: APIC: Static calls initialized
Dec 13 13:56:50.044636 kernel: SMBIOS 2.8 present.
Dec 13 13:56:50.044648 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 13:56:50.044660 kernel: Hypervisor detected: KVM
Dec 13 13:56:50.044677 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 13:56:50.044689 kernel: kvm-clock: using sched offset of 4431428401 cycles
Dec 13 13:56:50.044702 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 13:56:50.044715 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 13:56:50.044727 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 13:56:50.044740 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 13:56:50.044751 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 13:56:50.046251 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 13:56:50.046268 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 13:56:50.046288 kernel: Using GB pages for direct mapping
Dec 13 13:56:50.046301 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:56:50.046313 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 13:56:50.046326 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:56:50.046339 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:56:50.046363 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:56:50.046377 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 13:56:50.046389 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:56:50.046402 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:56:50.046428 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:56:50.046441 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:56:50.046454 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 13:56:50.046478 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 13:56:50.046491 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 13:56:50.046510 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 13:56:50.046523 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 13:56:50.046540 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 13:56:50.046553 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 13:56:50.046566 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 13:56:50.046579 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 13:56:50.046591 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 13:56:50.046604 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 13:56:50.046616 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 13:56:50.046629 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 13:56:50.046646 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 13:56:50.046659 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 13:56:50.046671 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 13:56:50.046684 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 13:56:50.046696 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 13:56:50.046708 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 13:56:50.046721 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 13:56:50.046733 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 13:56:50.046745 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 13:56:50.046842 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 13:56:50.046867 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 13:56:50.046880 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 13:56:50.046893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 13:56:50.046905 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 13:56:50.046918 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 13:56:50.046931 kernel: Zone ranges:
Dec 13 13:56:50.046944 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 13:56:50.046957 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 13:56:50.046969 kernel: Normal empty
Dec 13 13:56:50.046987 kernel: Movable zone start for each node
Dec 13 13:56:50.047000 kernel: Early memory node ranges
Dec 13 13:56:50.047012 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 13:56:50.047025 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 13:56:50.047037 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 13:56:50.047050 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:56:50.047063 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 13:56:50.047076 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 13:56:50.047088 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 13:56:50.047106 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 13:56:50.047119 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 13:56:50.047131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 13:56:50.047144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 13:56:50.047157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 13:56:50.047169 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 13:56:50.047182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 13:56:50.047194 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 13:56:50.047207 kernel: TSC deadline timer available
Dec 13 13:56:50.047224 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 13:56:50.047237 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 13:56:50.047250 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 13:56:50.047262 kernel: Booting paravirtualized kernel on KVM
Dec 13 13:56:50.047275 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 13:56:50.047288 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Dec 13 13:56:50.047301 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Dec 13 13:56:50.047313 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Dec 13 13:56:50.047326 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 13:56:50.047343 kernel: kvm-guest: PV spinlocks enabled
Dec 13 13:56:50.047356 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 13:56:50.047370 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:56:50.047383 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:56:50.047396 kernel: random: crng init done
Dec 13 13:56:50.047408 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:56:50.047421 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 13:56:50.047434 kernel: Fallback order for Node 0: 0
Dec 13 13:56:50.047451 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 13:56:50.047477 kernel: Policy zone: DMA32
Dec 13 13:56:50.047490 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:56:50.047503 kernel: software IO TLB: area num 16.
Dec 13 13:56:50.047516 kernel: Memory: 1899484K/2096616K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 196872K reserved, 0K cma-reserved)
Dec 13 13:56:50.047529 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 13:56:50.047541 kernel: Kernel/User page tables isolation: enabled
Dec 13 13:56:50.047554 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 13:56:50.047566 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 13:56:50.047584 kernel: Dynamic Preempt: voluntary
Dec 13 13:56:50.047597 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:56:50.047615 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:56:50.047629 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 13:56:50.047642 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:56:50.047667 kernel: Rude variant of Tasks RCU enabled.
Dec 13 13:56:50.047685 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:56:50.047698 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:56:50.047721 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 13:56:50.047735 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 13:56:50.047748 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:56:50.047774 kernel: Console: colour VGA+ 80x25
Dec 13 13:56:50.047813 kernel: printk: console [tty0] enabled
Dec 13 13:56:50.047828 kernel: printk: console [ttyS0] enabled
Dec 13 13:56:50.047841 kernel: ACPI: Core revision 20230628
Dec 13 13:56:50.047854 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 13:56:50.047867 kernel: x2apic enabled
Dec 13 13:56:50.047894 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 13:56:50.047908 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 13:56:50.047922 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 13:56:50.047935 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 13:56:50.047948 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 13:56:50.047961 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 13:56:50.047974 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 13:56:50.047987 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 13:56:50.048000 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 13:56:50.048013 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 13:56:50.048031 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 13:56:50.048045 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 13:56:50.048058 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 13:56:50.048070 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 13:56:50.048083 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 13:56:50.048096 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 13:56:50.048109 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 13:56:50.048122 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 13:56:50.048135 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 13:56:50.048148 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 13:56:50.048165 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 13:56:50.048179 kernel: Freeing SMP alternatives memory: 32K
Dec 13 13:56:50.048192 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:56:50.048205 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:56:50.048218 kernel: landlock: Up and running.
Dec 13 13:56:50.048231 kernel: SELinux: Initializing.
Dec 13 13:56:50.048244 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 13:56:50.048257 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 13:56:50.048270 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 13:56:50.048283 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 13:56:50.048296 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 13:56:50.048314 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 13:56:50.048328 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 13:56:50.048350 kernel: signal: max sigframe size: 1776
Dec 13 13:56:50.048364 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:56:50.048378 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:56:50.048391 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 13:56:50.048404 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:56:50.048417 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 13:56:50.048430 kernel: .... node #0, CPUs: #1
Dec 13 13:56:50.048449 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 13:56:50.048502 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 13:56:50.048517 kernel: smpboot: Max logical packages: 16
Dec 13 13:56:50.048530 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 13:56:50.048543 kernel: devtmpfs: initialized
Dec 13 13:56:50.048556 kernel: x86/mm: Memory block size: 128MB
Dec 13 13:56:50.048569 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:56:50.048582 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 13:56:50.048596 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:56:50.048615 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:56:50.048629 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:56:50.048642 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:56:50.048655 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 13:56:50.048669 kernel: audit: type=2000 audit(1734098208.703:1): state=initialized audit_enabled=0 res=1
Dec 13 13:56:50.048682 kernel: cpuidle: using governor menu
Dec 13 13:56:50.048695 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:56:50.048708 kernel: dca service started, version 1.12.1
Dec 13 13:56:50.048721 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 13:56:50.048739 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 13:56:50.048753 kernel: PCI: Using configuration type 1 for base access
Dec 13 13:56:50.048819 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 13:56:50.048833 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:56:50.048846 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:56:50.048860 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:56:50.048873 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:56:50.048886 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:56:50.048899 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:56:50.048919 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:56:50.048933 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:56:50.048946 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:56:50.048959 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 13:56:50.048972 kernel: ACPI: Interpreter enabled
Dec 13 13:56:50.048985 kernel: ACPI: PM: (supports S0 S5)
Dec 13 13:56:50.048998 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 13:56:50.049011 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 13:56:50.049025 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 13:56:50.049043 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 13:56:50.049056 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:56:50.049327 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:56:50.049531 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 13:56:50.049722 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 13:56:50.049743 kernel: PCI host bridge to bus 0000:00
Dec 13 13:56:50.051987 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 13:56:50.052172 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 13:56:50.052339 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 13:56:50.052520 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 13:56:50.052682 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 13:56:50.052882 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 13:56:50.053047 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:56:50.053245 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 13:56:50.053448 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 13:56:50.053644 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 13:56:50.055923 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 13:56:50.056116 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 13:56:50.056296 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 13:56:50.056509 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 13:56:50.056697 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 13:56:50.056939 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 13:56:50.057116 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 13:56:50.057312 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 13:56:50.057501 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 13:56:50.057687 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 13:56:50.057881 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 13:56:50.058075 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 13:56:50.058255 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 13:56:50.058442 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 13:56:50.058635 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 13:56:50.062897 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 13:56:50.063100 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 13:56:50.063291 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 13:56:50.063479 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 13:56:50.063667 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 13:56:50.063871 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 13:56:50.064051 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 13:56:50.064228 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 13:56:50.064414 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 13:56:50.064621 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 13:56:50.064857 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 13:56:50.065033 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 13:56:50.065205 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 13:56:50.065398 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 13:56:50.065587 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 13:56:50.065806 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 13:56:50.065981 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 13:56:50.066153 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 13:56:50.066334 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 13:56:50.066570 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 13:56:50.067190 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 13:56:50.067390 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 13:56:50.067585 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 13:56:50.067775 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 13:56:50.067959 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 13:56:50.068153 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 13:56:50.068357 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 13:56:50.068573 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 13:56:50.068779 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 13:56:50.068962 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 13:56:50.069152 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 13:56:50.069329 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 13:56:50.069517 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 13:56:50.069692 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 13:56:50.069889 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 13:56:50.070093 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 13:56:50.070278 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 13:56:50.070489 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 13:56:50.070667 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 13:56:50.070894 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 13:56:50.071105 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 13:56:50.071297 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 13:56:50.071510 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 13:56:50.071691 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 13:56:50.071895 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 13:56:50.072071 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 13:56:50.072253 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 13:56:50.072442 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 13:56:50.072631 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 13:56:50.072912 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 13:56:50.073108 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 13:56:50.073281 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 13:56:50.073454 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 13:56:50.073638 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 13:56:50.073824 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 13:56:50.073845 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 13:56:50.073860 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 13:56:50.073874 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 13:56:50.073887 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 13:56:50.073908 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 13:56:50.073922 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 13:56:50.073935 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 13:56:50.073949 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 13:56:50.073962 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 13:56:50.073975 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 13:56:50.073988 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 13:56:50.074001 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 13:56:50.074014 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 13:56:50.074033 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 13:56:50.074046 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 13:56:50.074059 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 13:56:50.074073 kernel: iommu: Default domain type: Translated
Dec 13 13:56:50.074086 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 13:56:50.074100 kernel: PCI: Using ACPI for IRQ routing
Dec 13 13:56:50.074113 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 13:56:50.074126 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 13:56:50.074139 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 13:56:50.074313 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 13:56:50.074497 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 13:56:50.074669 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 13:56:50.074691 kernel: vgaarb: loaded
Dec 13 13:56:50.074705 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 13:56:50.074719 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:56:50.074733 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:56:50.074746 kernel: pnp: PnP ACPI init
Dec 13 13:56:50.075009 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 13:56:50.075032 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 13:56:50.075046 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 13:56:50.075060 kernel: NET: Registered PF_INET protocol family
Dec 13 13:56:50.075073 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:56:50.075087 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 13:56:50.075100 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:56:50.075114 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 13:56:50.075136 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 13:56:50.075149 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 13:56:50.075163 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 13:56:50.075176 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 13:56:50.075190 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:56:50.075203 kernel: NET: Registered PF_XDP protocol family
Dec 13 13:56:50.075371 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 13:56:50.075558 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 13:56:50.075738 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 13:56:50.075942 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 13:56:50.076116 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 13:56:50.076288 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 13:56:50.076473 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 13:56:50.076649 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 13:56:50.076876 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 13:56:50.077050 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 13:56:50.077221 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 13:56:50.077392 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 13:56:50.077578 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 13:56:50.077750 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 13:56:50.077948 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 13:56:50.078121 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 13:56:50.078329 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 13:56:50.078528 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 13:56:50.078705 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 13:56:50.078910 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 13:56:50.079086 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 13:56:50.079259 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 13:56:50.079448 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 13:56:50.079636 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 13:56:50.079886 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 13:56:50.080061 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 13:56:50.080232 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 13:56:50.080404 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 13:56:50.080589 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 13:56:50.080788 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 13:56:50.080975 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 13:56:50.081147 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 13:56:50.081320 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 13:56:50.081567 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 13:56:50.081742 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 13:56:50.081946 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 13:56:50.082119 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 13:56:50.082291 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 13:56:50.082491 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 13:56:50.082674 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 13:56:50.082867 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 13:56:50.083042 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 13:56:50.083216 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 13:56:50.083390 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 13:56:50.083586 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 13:56:50.083810 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 13:56:50.083988 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 13:56:50.084159 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 13:56:50.084330 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 13:56:50.084515 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 13:56:50.084681 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 13:56:50.084854 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 13:56:50.085018 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 13:56:50.085185 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 13:56:50.085364 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 13:56:50.085535 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 13:56:50.085718 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 13:56:50.085928 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 13:56:50.086095 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 13:56:50.086271 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 13:56:50.086466 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 13 13:56:50.086650 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 13:56:50.086835 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 13:56:50.087013 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 13 13:56:50.087204 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 13:56:50.087371 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 13:56:50.087573 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 13:56:50.087742 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 13:56:50.087963 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 13:56:50.088148 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 13 13:56:50.088313 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 13:56:50.088490 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 13:56:50.088666 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 13 13:56:50.088856 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 13:56:50.089021 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 13:56:50.089206 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 13 13:56:50.089369 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 13:56:50.089546 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 13:56:50.089735 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 13 13:56:50.089933 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 13:56:50.090107 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 13:56:50.090130 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 13:56:50.090146 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:56:50.090167 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 
13 13:56:50.090181 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 13:56:50.090196 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 13:56:50.090210 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 13:56:50.090224 kernel: Initialise system trusted keyrings Dec 13 13:56:50.090243 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 13:56:50.090257 kernel: Key type asymmetric registered Dec 13 13:56:50.090271 kernel: Asymmetric key parser 'x509' registered Dec 13 13:56:50.090284 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 13:56:50.090298 kernel: io scheduler mq-deadline registered Dec 13 13:56:50.090312 kernel: io scheduler kyber registered Dec 13 13:56:50.090326 kernel: io scheduler bfq registered Dec 13 13:56:50.090511 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 13:56:50.090689 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 13:56:50.090889 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:56:50.091068 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 13:56:50.091284 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 13:56:50.091476 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:56:50.091690 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 13:56:50.091928 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 13:56:50.092113 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:56:50.092296 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 
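The "software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)" entry above can be sanity-checked with simple hex arithmetic. A minimal sketch, using the range values copied from the log and treating the end address as exclusive (as the 64MB figure implies):

```python
# Verify the SWIOTLB bounce-buffer size reported in the boot log.
# Addresses are taken directly from the "software IO TLB: mapped" line.
start = 0x0000000079800000
end = 0x000000007d800000  # exclusive end, per the 64MB figure in the message

size_bytes = end - start
size_mib = size_bytes // (1 << 20)

print(f"SWIOTLB region: {size_bytes} bytes = {size_mib} MB")  # → 64 MB
```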
13:56:50.092482 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 13:56:50.092659 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:56:50.092849 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 13:56:50.093032 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 13:56:50.093215 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:56:50.093466 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 13:56:50.093646 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 13:56:50.093854 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:56:50.094042 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 13:56:50.094276 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 13:56:50.094540 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:56:50.094823 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 13:56:50.095000 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 13:56:50.095176 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:56:50.095199 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 13:56:50.095221 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 13:56:50.095243 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 13:56:50.095258 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:56:50.095272 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 13:56:50.095286 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 13:56:50.095301 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 13:56:50.095315 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 13:56:50.095507 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 13:56:50.095531 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 13:56:50.095694 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 13:56:50.095887 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T13:56:49 UTC (1734098209) Dec 13 13:56:50.096054 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 13:56:50.096075 kernel: intel_pstate: CPU model not supported Dec 13 13:56:50.096089 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:56:50.096104 kernel: Segment Routing with IPv6 Dec 13 13:56:50.096118 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:56:50.096132 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:56:50.096146 kernel: Key type dns_resolver registered Dec 13 13:56:50.096167 kernel: IPI shorthand broadcast: enabled Dec 13 13:56:50.096182 kernel: sched_clock: Marking stable (1171003367, 239023359)->(1656402403, -246375677) Dec 13 13:56:50.096196 kernel: registered taskstats version 1 Dec 13 13:56:50.096210 kernel: Loading compiled-in X.509 certificates Dec 13 13:56:50.096228 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162' Dec 13 13:56:50.096251 kernel: Key type .fscrypt registered Dec 13 13:56:50.096265 kernel: Key type fscrypt-provisioning registered Dec 13 13:56:50.096278 kernel: ima: No TPM chip found, activating TPM-bypass! 
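The rtc_cmos line above pairs an ISO timestamp with a Unix epoch value ("setting system clock to 2024-12-13T13:56:49 UTC (1734098209)"); converting the epoch back with the standard library confirms the two agree:

```python
from datetime import datetime, timezone

# Epoch value copied from the rtc_cmos log entry.
epoch = 1734098209
ts = datetime.fromtimestamp(epoch, tz=timezone.utc)

print(ts.isoformat())  # → 2024-12-13T13:56:49+00:00
```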
Dec 13 13:56:50.096293 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:56:50.096312 kernel: ima: No architecture policies found Dec 13 13:56:50.096325 kernel: clk: Disabling unused clocks Dec 13 13:56:50.096346 kernel: Freeing unused kernel image (initmem) memory: 43328K Dec 13 13:56:50.096360 kernel: Write protecting the kernel read-only data: 38912k Dec 13 13:56:50.096374 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Dec 13 13:56:50.096387 kernel: Run /init as init process Dec 13 13:56:50.096401 kernel: with arguments: Dec 13 13:56:50.096415 kernel: /init Dec 13 13:56:50.096429 kernel: with environment: Dec 13 13:56:50.096447 kernel: HOME=/ Dec 13 13:56:50.096473 kernel: TERM=linux Dec 13 13:56:50.096489 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:56:50.096515 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:56:50.096535 systemd[1]: Detected virtualization kvm. Dec 13 13:56:50.096550 systemd[1]: Detected architecture x86-64. Dec 13 13:56:50.096565 systemd[1]: Running in initrd. Dec 13 13:56:50.096579 systemd[1]: No hostname configured, using default hostname. Dec 13 13:56:50.096607 systemd[1]: Hostname set to . Dec 13 13:56:50.096623 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:56:50.096638 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:56:50.096653 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:56:50.096669 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 13 13:56:50.096685 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:56:50.096700 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:56:50.096716 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:56:50.096736 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:56:50.096753 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:56:50.096791 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:56:50.096806 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:56:50.096822 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:56:50.096836 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:56:50.096858 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:56:50.096873 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:56:50.096888 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:56:50.096903 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:56:50.096918 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:56:50.096933 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:56:50.096949 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 13:56:50.096964 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:56:50.096979 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:56:50.096999 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 13:56:50.097014 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:56:50.097029 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:56:50.097044 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:56:50.097059 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:56:50.097074 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:56:50.097089 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:56:50.097104 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:56:50.097119 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:56:50.097140 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 13:56:50.097155 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:56:50.097216 systemd-journald[202]: Collecting audit messages is disabled. Dec 13 13:56:50.097252 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:56:50.097275 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:56:50.097298 systemd-journald[202]: Journal started Dec 13 13:56:50.097332 systemd-journald[202]: Runtime Journal (/run/log/journal/3b0d576d18eb455d865e0ac68cf73acd) is 4.7M, max 37.9M, 33.2M free. Dec 13 13:56:50.071227 systemd-modules-load[203]: Inserted module 'overlay' Dec 13 13:56:50.153425 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:56:50.153521 kernel: Bridge firewalling registered Dec 13 13:56:50.115794 systemd-modules-load[203]: Inserted module 'br_netfilter' Dec 13 13:56:50.159782 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:56:50.161180 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Dec 13 13:56:50.162198 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:56:50.166948 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:56:50.173980 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:56:50.177939 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:56:50.181936 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:56:50.184475 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:56:50.208734 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:56:50.213091 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:56:50.215088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:56:50.216041 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:56:50.221987 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 13:56:50.224956 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:56:50.245236 dracut-cmdline[236]: dracut-dracut-053 Dec 13 13:56:50.249856 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:56:50.280847 systemd-resolved[237]: Positive Trust Anchors: Dec 13 13:56:50.280891 systemd-resolved[237]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:56:50.280936 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:56:50.290833 systemd-resolved[237]: Defaulting to hostname 'linux'. Dec 13 13:56:50.292853 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:56:50.295357 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:56:50.367810 kernel: SCSI subsystem initialized Dec 13 13:56:50.378788 kernel: Loading iSCSI transport class v2.0-870. Dec 13 13:56:50.392821 kernel: iscsi: registered transport (tcp) Dec 13 13:56:50.420363 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:56:50.420481 kernel: QLogic iSCSI HBA Driver Dec 13 13:56:50.478746 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:56:50.484966 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:56:50.522460 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 13:56:50.522592 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:56:50.525207 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:56:50.578826 kernel: raid6: sse2x4 gen() 7587 MB/s Dec 13 13:56:50.596813 kernel: raid6: sse2x2 gen() 5339 MB/s Dec 13 13:56:50.615499 kernel: raid6: sse2x1 gen() 5198 MB/s Dec 13 13:56:50.615567 kernel: raid6: using algorithm sse2x4 gen() 7587 MB/s Dec 13 13:56:50.634507 kernel: raid6: .... xor() 4835 MB/s, rmw enabled Dec 13 13:56:50.634625 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 13:56:50.661810 kernel: xor: automatically using best checksumming function avx Dec 13 13:56:50.839969 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:56:50.855743 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:56:50.863111 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:56:50.895543 systemd-udevd[420]: Using default interface naming scheme 'v255'. Dec 13 13:56:50.903470 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:56:50.913000 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:56:50.936927 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Dec 13 13:56:50.979009 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:56:50.988965 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:56:51.104111 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:56:51.113938 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:56:51.154902 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:56:51.158750 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 13 13:56:51.159536 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:56:51.163891 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:56:51.171537 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:56:51.200592 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:56:51.233809 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 13 13:56:51.323934 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 13:56:51.323964 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 13:56:51.324273 kernel: ACPI: bus type USB registered Dec 13 13:56:51.324306 kernel: AVX version of gcm_enc/dec engaged. Dec 13 13:56:51.324326 kernel: usbcore: registered new interface driver usbfs Dec 13 13:56:51.324345 kernel: AES CTR mode by8 optimization enabled Dec 13 13:56:51.324363 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 13:56:51.324382 kernel: usbcore: registered new interface driver hub Dec 13 13:56:51.324400 kernel: GPT:17805311 != 125829119 Dec 13 13:56:51.324417 kernel: usbcore: registered new device driver usb Dec 13 13:56:51.324450 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 13:56:51.324471 kernel: GPT:17805311 != 125829119 Dec 13 13:56:51.324506 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 13:56:51.324525 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:56:51.324543 kernel: libata version 3.00 loaded. Dec 13 13:56:51.296549 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:56:51.296727 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:56:51.312183 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:56:51.312948 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
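The GPT warnings interleaved above ("GPT:17805311 != 125829119", "Alternate GPT header not at the end of the disk") are the usual sign of a disk image written onto a larger virtual disk: the backup GPT header still sits at the last LBA of the original image rather than of the resized disk. A sketch of the arithmetic, assuming the two numbers are last-LBA values and a 512-byte logical block size (both as reported by virtio_blk in the log):

```python
SECTOR = 512  # logical block size reported for vda

image_last_lba = 17805311   # LBA where the alternate GPT header actually is
disk_last_lba = 125829119   # real last LBA of the virtual disk

image_size = (image_last_lba + 1) * SECTOR
disk_size = (disk_last_lba + 1) * SECTOR

print(f"original image: {image_size / 2**30:.2f} GiB")  # ~8.49 GiB
print(f"virtual disk:   {disk_size / 2**30:.2f} GiB")   # 60.00 GiB, i.e. 64.4 GB
```

This matches the "125829120 512-byte logical blocks (64.4 GB/60.0 GiB)" line, and explains why disk-uuid.service later rewrites the headers.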
Dec 13 13:56:51.313190 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:56:51.314973 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:56:51.323046 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:56:51.370062 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 13:56:51.402647 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 13:56:51.402680 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 13:56:51.402950 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 13:56:51.403199 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (468) Dec 13 13:56:51.403221 kernel: scsi host0: ahci Dec 13 13:56:51.404198 kernel: scsi host1: ahci Dec 13 13:56:51.404447 kernel: scsi host2: ahci Dec 13 13:56:51.404654 kernel: scsi host3: ahci Dec 13 13:56:51.405732 kernel: scsi host4: ahci Dec 13 13:56:51.406207 kernel: scsi host5: ahci Dec 13 13:56:51.406415 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Dec 13 13:56:51.406452 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Dec 13 13:56:51.406473 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Dec 13 13:56:51.406491 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Dec 13 13:56:51.406510 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Dec 13 13:56:51.406528 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Dec 13 13:56:51.413327 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (465) Dec 13 13:56:51.439754 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 13:56:51.505620 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 13:56:51.513865 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 13:56:51.520296 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 13:56:51.521136 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 13:56:51.529622 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 13:56:51.545028 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 13:56:51.549772 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:56:51.556053 disk-uuid[562]: Primary Header is updated. Dec 13 13:56:51.556053 disk-uuid[562]: Secondary Entries is updated. Dec 13 13:56:51.556053 disk-uuid[562]: Secondary Header is updated. Dec 13 13:56:51.561807 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:56:51.580386 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 13:56:51.716839 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 13:56:51.716960 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 13:56:51.718622 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 13:56:51.723803 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 13:56:51.728134 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 13:56:51.728172 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 13:56:51.736011 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 13:56:51.755512 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 13:56:51.756148 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 13:56:51.756382 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 13:56:51.756616 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 13:56:51.756850 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 13:56:51.757076 kernel: hub 1-0:1.0: USB hub found Dec 13 13:56:51.758289 kernel: hub 1-0:1.0: 4 ports detected Dec 13 13:56:51.758528 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 13 13:56:51.758880 kernel: hub 2-0:1.0: USB hub found Dec 13 13:56:51.759112 kernel: hub 2-0:1.0: 4 ports detected Dec 13 13:56:51.990847 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 13:56:52.132792 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 13:56:52.140129 kernel: usbcore: registered new interface driver usbhid Dec 13 13:56:52.140184 kernel: usbhid: USB HID core driver Dec 13 13:56:52.148200 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 13:56:52.148264 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 13:56:52.573973 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:56:52.575070 disk-uuid[563]: The operation has completed successfully. Dec 13 13:56:52.631764 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:56:52.631968 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:56:52.658132 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:56:52.662254 sh[583]: Success Dec 13 13:56:52.678853 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 13:56:52.755460 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:56:52.757892 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:56:52.761032 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 13:56:52.789215 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52 Dec 13 13:56:52.789320 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:56:52.791357 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:56:52.794734 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:56:52.794786 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:56:52.806620 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:56:52.808269 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:56:52.814976 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:56:52.817941 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:56:52.838787 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:56:52.838879 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:56:52.838901 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:56:52.843786 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:56:52.859276 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:56:52.861906 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:56:52.872562 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:56:52.879442 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:56:52.979847 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:56:52.995290 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 13 13:56:53.031483 systemd-networkd[767]: lo: Link UP Dec 13 13:56:53.032523 systemd-networkd[767]: lo: Gained carrier Dec 13 13:56:53.036358 systemd-networkd[767]: Enumeration completed Dec 13 13:56:53.037353 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:56:53.038394 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:56:53.038620 ignition[690]: Ignition 2.20.0 Dec 13 13:56:53.038409 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:56:53.038638 ignition[690]: Stage: fetch-offline Dec 13 13:56:53.041430 systemd[1]: Reached target network.target - Network. Dec 13 13:56:53.038734 ignition[690]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:56:53.042558 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:56:53.038767 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 13:56:53.043221 systemd-networkd[767]: eth0: Link UP Dec 13 13:56:53.039608 ignition[690]: parsed url from cmdline: "" Dec 13 13:56:53.043227 systemd-networkd[767]: eth0: Gained carrier Dec 13 13:56:53.039616 ignition[690]: no config URL provided Dec 13 13:56:53.043240 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:56:53.039626 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:56:53.051971 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 13:56:53.039644 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:56:53.039654 ignition[690]: failed to fetch config: resource requires networking
Dec 13 13:56:53.039949 ignition[690]: Ignition finished successfully
Dec 13 13:56:53.063896 systemd-networkd[767]: eth0: DHCPv4 address 10.244.15.30/30, gateway 10.244.15.29 acquired from 10.244.15.29
Dec 13 13:56:53.074466 ignition[775]: Ignition 2.20.0
Dec 13 13:56:53.074507 ignition[775]: Stage: fetch
Dec 13 13:56:53.074864 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:56:53.074886 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:56:53.075071 ignition[775]: parsed url from cmdline: ""
Dec 13 13:56:53.075089 ignition[775]: no config URL provided
Dec 13 13:56:53.075100 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:56:53.075118 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:56:53.075320 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 13:56:53.075628 ignition[775]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 13:56:53.075680 ignition[775]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 13:56:53.090166 ignition[775]: GET result: OK
Dec 13 13:56:53.090322 ignition[775]: parsing config with SHA512: c9b21420f0d3f3705ac613acb4c13677dd4da6f03dd2ba80fbeec76ceff9d7f1e416a1a2917ae03610f6219255172952d71c36d1a79c545bc0e09f7a25bffa7c
Dec 13 13:56:53.098639 unknown[775]: fetched base config from "system"
Dec 13 13:56:53.099253 ignition[775]: fetch: fetch complete
Dec 13 13:56:53.098659 unknown[775]: fetched base config from "system"
Dec 13 13:56:53.099264 ignition[775]: fetch: fetch passed
Dec 13 13:56:53.098674 unknown[775]: fetched user config from "openstack"
Dec 13 13:56:53.099336 ignition[775]: Ignition finished successfully
Dec 13 13:56:53.104512 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 13:56:53.115933 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 13:56:53.132750 ignition[783]: Ignition 2.20.0
Dec 13 13:56:53.132790 ignition[783]: Stage: kargs
Dec 13 13:56:53.133043 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:56:53.136458 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:56:53.133063 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:56:53.134250 ignition[783]: kargs: kargs passed
Dec 13 13:56:53.134326 ignition[783]: Ignition finished successfully
Dec 13 13:56:53.144986 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:56:53.163476 ignition[789]: Ignition 2.20.0
Dec 13 13:56:53.163500 ignition[789]: Stage: disks
Dec 13 13:56:53.163731 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:56:53.163751 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:56:53.166136 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:56:53.164932 ignition[789]: disks: disks passed
Dec 13 13:56:53.167672 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:56:53.165004 ignition[789]: Ignition finished successfully
Dec 13 13:56:53.168751 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:56:53.170301 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:56:53.171531 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:56:53.173142 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:56:53.186982 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:56:53.208370 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 13:56:53.212015 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:56:53.218896 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:56:53.332797 kernel: EXT4-fs (vda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none.
Dec 13 13:56:53.334386 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:56:53.335799 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:56:53.341882 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:56:53.344909 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:56:53.347523 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:56:53.350007 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 13 13:56:53.353730 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:56:53.355247 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:56:53.363238 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:56:53.365326 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (805)
Dec 13 13:56:53.365358 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:56:53.365377 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:56:53.365413 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:56:53.367784 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:56:53.375360 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:56:53.389067 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:56:53.458594 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:56:53.466814 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:56:53.474133 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:56:53.483343 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:56:53.604707 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:56:53.611899 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:56:53.613985 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:56:53.630798 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:56:53.653730 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:56:53.661815 ignition[923]: INFO : Ignition 2.20.0
Dec 13 13:56:53.661815 ignition[923]: INFO : Stage: mount
Dec 13 13:56:53.661815 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:56:53.661815 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:56:53.666117 ignition[923]: INFO : mount: mount passed
Dec 13 13:56:53.666117 ignition[923]: INFO : Ignition finished successfully
Dec 13 13:56:53.664361 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:56:53.787625 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:56:54.203196 systemd-networkd[767]: eth0: Gained IPv6LL
Dec 13 13:56:55.712645 systemd-networkd[767]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:3c7:24:19ff:fef4:f1e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:3c7:24:19ff:fef4:f1e/64 assigned by NDisc.
Dec 13 13:56:55.712664 systemd-networkd[767]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 13:57:00.542231 coreos-metadata[807]: Dec 13 13:57:00.542 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 13:57:00.567205 coreos-metadata[807]: Dec 13 13:57:00.567 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 13:57:00.587471 coreos-metadata[807]: Dec 13 13:57:00.587 INFO Fetch successful
Dec 13 13:57:00.588392 coreos-metadata[807]: Dec 13 13:57:00.587 INFO wrote hostname srv-3exgq.gb1.brightbox.com to /sysroot/etc/hostname
Dec 13 13:57:00.590489 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 13:57:00.590745 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 13 13:57:00.598919 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:57:00.621987 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:57:00.636820 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939)
Dec 13 13:57:00.643815 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:57:00.643867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:57:00.643888 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:57:00.649797 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:57:00.653193 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:57:00.683188 ignition[957]: INFO : Ignition 2.20.0
Dec 13 13:57:00.683188 ignition[957]: INFO : Stage: files
Dec 13 13:57:00.685076 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:57:00.685076 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:57:00.685076 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:57:00.692503 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:57:00.692503 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:57:00.696156 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:57:00.697196 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:57:00.697196 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:57:00.696908 unknown[957]: wrote ssh authorized keys file for user: core
Dec 13 13:57:00.700357 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 13:57:00.700357 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 13:57:00.903186 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 13:57:01.476031 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 13:57:01.476031 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 13:57:01.482604 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 13:57:02.069350 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 13:57:02.444700 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 13:57:02.444700 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:57:02.448937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:57:02.448937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:57:02.448937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:57:02.448937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:57:02.448937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:57:02.448937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:57:02.448937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:57:02.456798 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:57:02.456798 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:57:02.456798 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:57:02.456798 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:57:02.456798 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:57:02.456798 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 13:57:02.924726 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 13:57:04.621791 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:57:04.621791 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 13:57:04.629608 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:57:04.631080 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:57:04.631080 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 13:57:04.631080 ignition[957]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 13:57:04.635263 ignition[957]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 13:57:04.635263 ignition[957]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:57:04.635263 ignition[957]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:57:04.635263 ignition[957]: INFO : files: files passed
Dec 13 13:57:04.635263 ignition[957]: INFO : Ignition finished successfully
Dec 13 13:57:04.635961 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:57:04.647098 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:57:04.652022 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:57:04.662233 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:57:04.662407 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:57:04.674565 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:57:04.674565 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:57:04.678089 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:57:04.680223 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:57:04.681883 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:57:04.688014 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:57:04.729528 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:57:04.729720 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:57:04.732003 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:57:04.733284 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:57:04.734890 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:57:04.739959 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:57:04.760154 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:57:04.768012 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:57:04.783658 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:57:04.784573 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:57:04.786472 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:57:04.787957 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:57:04.788130 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:57:04.789962 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:57:04.790944 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:57:04.792374 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:57:04.793676 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:57:04.795138 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:57:04.796648 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:57:04.798177 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:57:04.799874 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:57:04.801349 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:57:04.802957 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:57:04.804314 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:57:04.804482 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:57:04.806155 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:57:04.807101 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:57:04.808450 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 13:57:04.810834 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:57:04.812062 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:57:04.812327 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:57:04.814067 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:57:04.814259 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:57:04.815897 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:57:04.816067 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 13:57:04.833724 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 13:57:04.834569 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:57:04.834821 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:57:04.838148 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 13:57:04.841974 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:57:04.842288 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:57:04.844245 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:57:04.844927 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:57:04.861128 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:57:04.861300 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 13:57:04.867707 ignition[1010]: INFO : Ignition 2.20.0
Dec 13 13:57:04.867707 ignition[1010]: INFO : Stage: umount
Dec 13 13:57:04.870504 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:57:04.870504 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:57:04.870504 ignition[1010]: INFO : umount: umount passed
Dec 13 13:57:04.870504 ignition[1010]: INFO : Ignition finished successfully
Dec 13 13:57:04.874242 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:57:04.874416 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:57:04.876430 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:57:04.876583 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:57:04.878313 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:57:04.878393 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:57:04.879683 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 13:57:04.879780 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 13:57:04.882199 systemd[1]: Stopped target network.target - Network.
Dec 13 13:57:04.883427 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:57:04.883512 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:57:04.884279 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:57:04.884892 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:57:04.888836 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:57:04.890162 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:57:04.891603 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:57:04.893032 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:57:04.893127 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:57:04.894482 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:57:04.894555 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:57:04.896182 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:57:04.896288 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:57:04.898388 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:57:04.898474 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:57:04.900019 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:57:04.902296 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:57:04.903953 systemd-networkd[767]: eth0: DHCPv6 lease lost
Dec 13 13:57:04.907801 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:57:04.908674 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:57:04.908883 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:57:04.911393 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:57:04.911490 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:57:04.929443 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:57:04.930167 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:57:04.930271 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:57:04.931516 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:57:04.933506 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:57:04.933666 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:57:04.947822 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:57:04.948096 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:57:04.951550 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:57:04.951716 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:57:04.956338 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:57:04.956442 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:57:04.958243 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:57:04.958311 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:57:04.959756 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:57:04.959959 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:57:04.962118 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:57:04.962215 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:57:04.963092 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:57:04.963163 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:57:04.970996 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:57:04.971817 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:57:04.971907 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:57:04.975175 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:57:04.975260 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:57:04.978512 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:57:04.978587 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:57:04.979403 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:57:04.979472 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:57:04.981884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:57:04.981955 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:57:04.983455 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:57:04.983613 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:57:05.057482 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:57:05.057672 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:57:05.059427 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:57:05.060428 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:57:05.060504 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:57:05.076060 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:57:05.086543 systemd[1]: Switching root.
Dec 13 13:57:05.126387 systemd-journald[202]: Journal stopped
Dec 13 13:57:06.578947 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:57:06.579037 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:57:06.579063 kernel: SELinux: policy capability open_perms=1
Dec 13 13:57:06.579100 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:57:06.579122 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:57:06.579150 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:57:06.579231 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:57:06.579264 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:57:06.579285 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:57:06.579306 kernel: audit: type=1403 audit(1734098225.355:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 13:57:06.579328 systemd[1]: Successfully loaded SELinux policy in 52.082ms.
Dec 13 13:57:06.579362 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.127ms.
Dec 13 13:57:06.579403 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:57:06.579427 systemd[1]: Detected virtualization kvm.
Dec 13 13:57:06.579449 systemd[1]: Detected architecture x86-64.
Dec 13 13:57:06.579470 systemd[1]: Detected first boot.
Dec 13 13:57:06.579491 systemd[1]: Hostname set to .
Dec 13 13:57:06.579512 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:57:06.579534 zram_generator::config[1053]: No configuration found.
Dec 13 13:57:06.579564 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:57:06.579599 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 13:57:06.579629 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 13:57:06.579665 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:57:06.579689 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 13:57:06.579711 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 13:57:06.579734 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 13:57:06.581820 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 13:57:06.581873 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 13:57:06.581915 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 13:57:06.581940 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 13:57:06.581963 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 13:57:06.581984 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:57:06.582006 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:57:06.582028 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 13:57:06.582049 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 13:57:06.582078 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 13:57:06.582100 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:57:06.582136 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 13:57:06.582172 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:57:06.582196 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 13:57:06.582217 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 13:57:06.582240 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:57:06.582261 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 13:57:06.582295 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:57:06.582320 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:57:06.582342 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:57:06.582363 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:57:06.582384 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 13:57:06.582406 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 13:57:06.582428 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:57:06.582479 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:57:06.582504 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:57:06.582526 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 13:57:06.582547 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 13:57:06.582575 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 13:57:06.582597 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 13:57:06.582624 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:57:06.582645 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 13:57:06.582680 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 13:57:06.582704 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 13:57:06.582726 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:57:06.582748 systemd[1]: Reached target machines.target - Containers.
Dec 13 13:57:06.582789 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 13:57:06.582813 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:57:06.582835 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:57:06.582857 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 13:57:06.582887 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:57:06.582924 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:57:06.582955 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:57:06.582977 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 13:57:06.582999 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:57:06.583021 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:57:06.583042 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 13:57:06.583064 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 13:57:06.583086 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 13:57:06.583120 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 13:57:06.583143 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:57:06.583190 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:57:06.583214 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 13:57:06.583236 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 13:57:06.583257 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:57:06.583278 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 13:57:06.583300 systemd[1]: Stopped verity-setup.service.
Dec 13 13:57:06.583320 kernel: loop: module loaded
Dec 13 13:57:06.583355 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:57:06.583380 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 13:57:06.583402 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 13:57:06.583424 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 13:57:06.583445 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 13:57:06.583479 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 13:57:06.583544 systemd-journald[1156]: Collecting audit messages is disabled.
Dec 13 13:57:06.583583 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 13:57:06.583605 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 13:57:06.583627 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:57:06.583655 systemd-journald[1156]: Journal started
Dec 13 13:57:06.583710 systemd-journald[1156]: Runtime Journal (/run/log/journal/3b0d576d18eb455d865e0ac68cf73acd) is 4.7M, max 37.9M, 33.2M free.
Dec 13 13:57:06.592381 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:57:06.592463 kernel: fuse: init (API version 7.39)
Dec 13 13:57:06.592504 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 13:57:06.592534 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:57:06.592561 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:57:06.188553 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:57:06.207676 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 13:57:06.208496 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 13:57:06.598812 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:57:06.598688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:57:06.598999 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:57:06.600753 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:57:06.601091 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 13:57:06.602403 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:57:06.602601 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:57:06.603958 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:57:06.605129 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 13:57:06.606560 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 13:57:06.623238 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 13:57:06.632852 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 13:57:06.654895 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 13:57:06.656883 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:57:06.656941 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:57:06.659330 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 13:57:06.662801 kernel: ACPI: bus type drm_connector registered
Dec 13 13:57:06.671990 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 13:57:06.679924 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 13:57:06.680926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:57:06.684973 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 13:57:06.690074 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 13:57:06.691819 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:57:06.693900 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 13:57:06.695883 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:57:06.706000 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:57:06.710914 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 13:57:06.715039 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 13:57:06.721334 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:57:06.722880 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:57:06.724954 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 13:57:06.725984 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 13:57:06.728258 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 13:57:06.758912 systemd-journald[1156]: Time spent on flushing to /var/log/journal/3b0d576d18eb455d865e0ac68cf73acd is 67.791ms for 1141 entries.
Dec 13 13:57:06.758912 systemd-journald[1156]: System Journal (/var/log/journal/3b0d576d18eb455d865e0ac68cf73acd) is 8.0M, max 584.8M, 576.8M free.
Dec 13 13:57:06.866739 systemd-journald[1156]: Received client request to flush runtime journal.
Dec 13 13:57:06.867576 kernel: loop0: detected capacity change from 0 to 138184
Dec 13 13:57:06.829887 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 13:57:06.831121 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 13:57:06.839992 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 13:57:06.870118 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 13:57:06.877997 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 13:57:06.879289 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 13:57:06.885272 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:57:06.903868 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 13:57:06.927844 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 13:57:06.945961 kernel: loop1: detected capacity change from 0 to 141000
Dec 13 13:57:06.943033 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:57:07.012539 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:57:07.030442 kernel: loop2: detected capacity change from 0 to 211296
Dec 13 13:57:07.020031 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 13:57:07.031369 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Dec 13 13:57:07.031397 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Dec 13 13:57:07.055227 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:57:07.070864 udevadm[1208]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 13:57:07.083804 kernel: loop3: detected capacity change from 0 to 8
Dec 13 13:57:07.123095 kernel: loop4: detected capacity change from 0 to 138184
Dec 13 13:57:07.164890 kernel: loop5: detected capacity change from 0 to 141000
Dec 13 13:57:07.211903 kernel: loop6: detected capacity change from 0 to 211296
Dec 13 13:57:07.252803 kernel: loop7: detected capacity change from 0 to 8
Dec 13 13:57:07.262921 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Dec 13 13:57:07.268985 (sd-merge)[1212]: Merged extensions into '/usr'.
Dec 13 13:57:07.279576 systemd[1]: Reloading requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 13:57:07.280651 systemd[1]: Reloading...
Dec 13 13:57:07.402027 zram_generator::config[1235]: No configuration found.
Dec 13 13:57:07.563850 ldconfig[1180]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 13:57:07.656859 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:57:07.731567 systemd[1]: Reloading finished in 449 ms.
Dec 13 13:57:07.776197 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 13:57:07.778315 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 13:57:07.792073 systemd[1]: Starting ensure-sysext.service...
Dec 13 13:57:07.797569 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:57:07.814937 systemd[1]: Reloading requested from client PID 1294 ('systemctl') (unit ensure-sysext.service)...
Dec 13 13:57:07.814961 systemd[1]: Reloading...
Dec 13 13:57:07.840502 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 13:57:07.841616 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 13:57:07.843258 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 13:57:07.843909 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Dec 13 13:57:07.844218 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Dec 13 13:57:07.851986 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:57:07.852184 systemd-tmpfiles[1295]: Skipping /boot
Dec 13 13:57:07.878544 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:57:07.878734 systemd-tmpfiles[1295]: Skipping /boot
Dec 13 13:57:07.907844 zram_generator::config[1320]: No configuration found.
Dec 13 13:57:08.103692 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:57:08.176009 systemd[1]: Reloading finished in 360 ms.
Dec 13 13:57:08.198829 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 13:57:08.206501 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:57:08.226088 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:57:08.232496 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 13:57:08.238405 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 13:57:08.252167 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:57:08.258598 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:57:08.268967 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 13:57:08.283241 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 13:57:08.288961 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:57:08.289275 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:57:08.293643 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:57:08.303873 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:57:08.309834 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:57:08.312072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:57:08.312260 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:57:08.318262 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:57:08.318551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:57:08.318817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:57:08.318954 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:57:08.325509 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:57:08.327120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:57:08.340887 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:57:08.341887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:57:08.342104 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:57:08.343705 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 13:57:08.357120 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 13:57:08.367247 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 13:57:08.369102 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:57:08.369550 systemd-udevd[1384]: Using default interface naming scheme 'v255'.
Dec 13 13:57:08.390153 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 13:57:08.391118 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:57:08.392615 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 13:57:08.406438 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:57:08.406699 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:57:08.410052 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:57:08.410421 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:57:08.419325 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:57:08.420647 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:57:08.420920 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:57:08.423373 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:57:08.423874 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:57:08.427374 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:57:08.428044 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:57:08.439590 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:57:08.448699 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 13:57:08.480364 augenrules[1431]: No rules
Dec 13 13:57:08.482256 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:57:08.483021 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:57:08.488848 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 13:57:08.602963 systemd-networkd[1417]: lo: Link UP
Dec 13 13:57:08.602976 systemd-networkd[1417]: lo: Gained carrier
Dec 13 13:57:08.606141 systemd-networkd[1417]: Enumeration completed
Dec 13 13:57:08.607936 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:57:08.619995 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 13:57:08.642862 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 13:57:08.643801 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 13:57:08.670001 systemd-resolved[1382]: Positive Trust Anchors:
Dec 13 13:57:08.670044 systemd-resolved[1382]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:57:08.670099 systemd-resolved[1382]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:57:08.679366 systemd-resolved[1382]: Using system hostname 'srv-3exgq.gb1.brightbox.com'.
Dec 13 13:57:08.682888 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:57:08.687326 systemd[1]: Reached target network.target - Network.
Dec 13 13:57:08.687791 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1429)
Dec 13 13:57:08.688034 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:57:08.690043 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 13:57:08.703053 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1429)
Dec 13 13:57:08.723813 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1424)
Dec 13 13:57:08.765493 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:57:08.776176 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 13:57:08.804710 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:57:08.805009 systemd-networkd[1417]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:57:08.809530 systemd-networkd[1417]: eth0: Link UP
Dec 13 13:57:08.809669 systemd-networkd[1417]: eth0: Gained carrier
Dec 13 13:57:08.809807 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:57:08.813777 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 13:57:08.845985 systemd-networkd[1417]: eth0: DHCPv4 address 10.244.15.30/30, gateway 10.244.15.29 acquired from 10.244.15.29
Dec 13 13:57:08.847678 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection.
Dec 13 13:57:08.854789 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 13:57:08.870816 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 13:57:08.890798 kernel: ACPI: button: Power Button [PWRF]
Dec 13 13:57:08.930841 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 13:57:08.940009 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 13:57:08.940366 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 13:57:08.940599 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 13:57:08.983973 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:57:09.170335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:57:09.187638 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 13:57:09.195098 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 13:57:09.222796 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:57:09.254430 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 13:57:09.256275 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:57:09.257156 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:57:09.258210 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 13:57:09.259067 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 13:57:09.260399 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 13:57:09.261283 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 13:57:09.262094 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 13:57:09.262875 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 13:57:09.262931 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:57:09.263566 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:57:09.266166 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 13:57:09.269247 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 13:57:09.276284 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 13:57:09.279199 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 13:57:09.281088 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 13:57:09.282045 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:57:09.282886 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:57:09.283690 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:57:09.283753 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:57:09.293065 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 13:57:09.299016 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 13:57:09.302785 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:57:09.304062 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 13:57:09.312934 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 13:57:09.318751 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 13:57:09.319565 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 13:57:09.325966 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 13:57:09.332074 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 13:57:09.340782 jq[1479]: false
Dec 13 13:57:09.342019 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 13:57:09.347380 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 13:57:09.363907 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 13:57:09.365604 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 13:57:09.367948 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 13:57:09.376009 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 13:57:09.379415 dbus-daemon[1478]: [system] SELinux support is enabled
Dec 13 13:57:10.132580 systemd-resolved[1382]: Clock change detected. Flushing caches.
Dec 13 13:57:10.132932 systemd-timesyncd[1403]: Contacted time server 131.111.8.61:123 (0.flatcar.pool.ntp.org).
Dec 13 13:57:10.133163 systemd-timesyncd[1403]: Initial clock synchronization to Fri 2024-12-13 13:57:10.132498 UTC.
Dec 13 13:57:10.137536 dbus-daemon[1478]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1417 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 13:57:10.137576 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 13:57:10.141997 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 13:57:10.153595 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 13:57:10.153877 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 13:57:10.161284 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 13:57:10.162932 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 13:57:10.165982 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 13:57:10.184498 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 13:57:10.184566 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 13:57:10.195808 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 13:57:10.214042 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 13:57:10.224551 jq[1490]: true
Dec 13 13:57:10.215175 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 13:57:10.239373 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 13 13:57:10.271004 tar[1492]: linux-amd64/helm
Dec 13 13:57:10.286530 update_engine[1488]: I20241213 13:57:10.286281  1488 main.cc:92] Flatcar Update Engine starting
Dec 13 13:57:10.293196 extend-filesystems[1480]: Found loop4
Dec 13 13:57:10.295241 extend-filesystems[1480]: Found loop5
Dec 13 13:57:10.295241 extend-filesystems[1480]: Found loop6
Dec 13 13:57:10.295241 extend-filesystems[1480]: Found loop7
Dec 13 13:57:10.295241 extend-filesystems[1480]: Found vda
Dec 13 13:57:10.295241 extend-filesystems[1480]: Found vda1
Dec 13 13:57:10.295241 extend-filesystems[1480]: Found vda2
Dec 13 13:57:10.295241 extend-filesystems[1480]: Found vda3
Dec 13 13:57:10.295241 extend-filesystems[1480]: Found usr
Dec 13 13:57:10.295241 extend-filesystems[1480]: Found vda4
Dec 13 13:57:10.295241 extend-filesystems[1480]: Found vda6
Dec 13 13:57:10.295241 extend-filesystems[1480]: Found vda7
Dec 13 13:57:10.295241 extend-filesystems[1480]: Found vda9
Dec 13 13:57:10.295241 extend-filesystems[1480]: Checking size of /dev/vda9
Dec 13 13:57:10.342723 jq[1504]: true
Dec 13 13:57:10.299084 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 13:57:10.343158 update_engine[1488]: I20241213 13:57:10.299597  1488 update_check_scheduler.cc:74] Next update check in 5m56s
Dec 13 13:57:10.305004 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 13:57:10.311561 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 13:57:10.355000 extend-filesystems[1480]: Resized partition /dev/vda9
Dec 13 13:57:10.352106 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 13:57:10.352466 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 13:57:10.374323 extend-filesystems[1521]: resize2fs 1.47.1 (20-May-2024)
Dec 13 13:57:10.388448 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Dec 13 13:57:10.486696 systemd-logind[1486]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 13:57:10.486743 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 13:57:10.488833 systemd-logind[1486]: New seat seat0.
Dec 13 13:57:10.524082 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1426)
Dec 13 13:57:10.491946 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 13:57:10.599605 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 13:57:10.614876 bash[1536]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 13:57:10.609733 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 13 13:57:10.605021 dbus-daemon[1478]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1503 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 13:57:10.624721 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 13 13:57:10.626136 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 13:57:10.646564 systemd[1]: Starting sshkeys.service...
Dec 13 13:57:10.663114 polkitd[1541]: Started polkitd version 121
Dec 13 13:57:10.686156 polkitd[1541]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 13:57:10.686271 polkitd[1541]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 13:57:10.693920 polkitd[1541]: Finished loading, compiling and executing 2 rules
Dec 13 13:57:10.696437 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 13:57:10.697065 polkitd[1541]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 13:57:10.697551 systemd[1]: Started polkit.service - Authorization Manager.
Dec 13 13:57:10.725047 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 13:57:10.738179 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 13:57:10.760747 systemd-hostnamed[1503]: Hostname set to (static)
Dec 13 13:57:10.796349 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 13:57:10.805557 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 13 13:57:10.838329 extend-filesystems[1521]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 13:57:10.838329 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 13:57:10.838329 extend-filesystems[1521]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 13:57:10.846412 extend-filesystems[1480]: Resized filesystem in /dev/vda9
Dec 13 13:57:10.839770 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 13:57:10.840738 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 13:57:10.930956 containerd[1508]: time="2024-12-13T13:57:10.930773785Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Dec 13 13:57:10.953800 sshd_keygen[1498]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 13:57:10.981896 containerd[1508]: time="2024-12-13T13:57:10.981780030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:57:10.985906 containerd[1508]: time="2024-12-13T13:57:10.984990558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:57:10.985906 containerd[1508]: time="2024-12-13T13:57:10.985033766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 13:57:10.985906 containerd[1508]: time="2024-12-13T13:57:10.985067139Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 13:57:10.985906 containerd[1508]: time="2024-12-13T13:57:10.985419731Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 13:57:10.985906 containerd[1508]: time="2024-12-13T13:57:10.985456655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 13:57:10.985906 containerd[1508]: time="2024-12-13T13:57:10.985577635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:57:10.985906 containerd[1508]: time="2024-12-13T13:57:10.985601235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:57:10.985906 containerd[1508]: time="2024-12-13T13:57:10.985859821Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:57:10.985906 containerd[1508]: time="2024-12-13T13:57:10.985884193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 13:57:10.985906 containerd[1508]: time="2024-12-13T13:57:10.985906229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:57:10.986310 containerd[1508]: time="2024-12-13T13:57:10.985922588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 13:57:10.986310 containerd[1508]: time="2024-12-13T13:57:10.986046137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:57:10.986746 containerd[1508]: time="2024-12-13T13:57:10.986441937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:57:10.986746 containerd[1508]: time="2024-12-13T13:57:10.986607049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:57:10.986746 containerd[1508]: time="2024-12-13T13:57:10.986646634Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 13:57:10.986860 containerd[1508]: time="2024-12-13T13:57:10.986810987Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 13:57:10.988319 containerd[1508]: time="2024-12-13T13:57:10.986893270Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 13:57:10.992074 containerd[1508]: time="2024-12-13T13:57:10.991984674Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 13:57:10.992074 containerd[1508]: time="2024-12-13T13:57:10.992058108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 13:57:10.992170 containerd[1508]: time="2024-12-13T13:57:10.992087306Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 13:57:10.992170 containerd[1508]: time="2024-12-13T13:57:10.992112196Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 13:57:10.992170 containerd[1508]: time="2024-12-13T13:57:10.992134238Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 13:57:10.992466 containerd[1508]: time="2024-12-13T13:57:10.992393200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 13:57:10.994317 containerd[1508]: time="2024-12-13T13:57:10.993525938Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 13:57:10.994317 containerd[1508]: time="2024-12-13T13:57:10.993740211Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 13:57:10.994317 containerd[1508]: time="2024-12-13T13:57:10.993768266Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 13:57:10.994317 containerd[1508]: time="2024-12-13T13:57:10.993792167Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 13:57:10.994317 containerd[1508]: time="2024-12-13T13:57:10.993814934Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 13:57:10.994317 containerd[1508]: time="2024-12-13T13:57:10.993840496Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 13:57:10.994317 containerd[1508]: time="2024-12-13T13:57:10.993862056Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 13:57:10.994317 containerd[1508]: time="2024-12-13T13:57:10.993884655Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 13:57:10.994317 containerd[1508]: time="2024-12-13T13:57:10.993906748Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 13:57:10.994317 containerd[1508]: time="2024-12-13T13:57:10.993936500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 13:57:10.994317 containerd[1508]: time="2024-12-13T13:57:10.993958785Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 13:57:10.994317 containerd[1508]: time="2024-12-13T13:57:10.993978160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 13:57:10.994317 containerd[1508]: time="2024-12-13T13:57:10.994018178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.994317 containerd[1508]: time="2024-12-13T13:57:10.994043279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.994861 containerd[1508]: time="2024-12-13T13:57:10.994070851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.994861 containerd[1508]: time="2024-12-13T13:57:10.994092287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.994861 containerd[1508]: time="2024-12-13T13:57:10.994112481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.994861 containerd[1508]: time="2024-12-13T13:57:10.994136969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.994861 containerd[1508]: time="2024-12-13T13:57:10.994157761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.994861 containerd[1508]: time="2024-12-13T13:57:10.994177510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.994861 containerd[1508]: time="2024-12-13T13:57:10.994197040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.994861 containerd[1508]: time="2024-12-13T13:57:10.994218801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.994861 containerd[1508]: time="2024-12-13T13:57:10.994250898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.994861 containerd[1508]: time="2024-12-13T13:57:10.994274444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.995926 containerd[1508]: time="2024-12-13T13:57:10.995879279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.996032 containerd[1508]: time="2024-12-13T13:57:10.996008735Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 13:57:10.996155 containerd[1508]: time="2024-12-13T13:57:10.996118549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.996278 containerd[1508]: time="2024-12-13T13:57:10.996253163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.997334 containerd[1508]: time="2024-12-13T13:57:10.996361054Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 13:57:10.997334 containerd[1508]: time="2024-12-13T13:57:10.996463400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 13:57:10.997334 containerd[1508]: time="2024-12-13T13:57:10.996494101Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 13:57:10.997334 containerd[1508]: time="2024-12-13T13:57:10.996524654Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 13:57:10.997334 containerd[1508]: time="2024-12-13T13:57:10.996543881Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 13:57:10.997334 containerd[1508]: time="2024-12-13T13:57:10.996560493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.997334 containerd[1508]: time="2024-12-13T13:57:10.996591177Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 13:57:10.997334 containerd[1508]: time="2024-12-13T13:57:10.996620057Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 13:57:10.997334 containerd[1508]: time="2024-12-13T13:57:10.996656932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 13:57:10.997708 containerd[1508]: time="2024-12-13T13:57:10.997112695Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 13:57:10.997708 containerd[1508]: time="2024-12-13T13:57:10.997196043Z" level=info msg="Connect containerd service"
Dec 13 13:57:10.997708 containerd[1508]: time="2024-12-13T13:57:10.997243083Z" level=info msg="using legacy CRI server"
Dec 13 13:57:10.997708 containerd[1508]: time="2024-12-13T13:57:10.997258778Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 13:57:10.998325 containerd[1508]: time="2024-12-13T13:57:10.998268058Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 13:57:10.999714 containerd[1508]: time="2024-12-13T13:57:10.999680653Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 13:57:11.000481 containerd[1508]: time="2024-12-13T13:57:11.000443962Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 13:57:11.000665 containerd[1508]: time="2024-12-13T13:57:11.000639218Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 13:57:11.000868 containerd[1508]: time="2024-12-13T13:57:11.000811224Z" level=info msg="Start subscribing containerd event"
Dec 13 13:57:11.000979 containerd[1508]: time="2024-12-13T13:57:11.000956066Z" level=info msg="Start recovering state"
Dec 13 13:57:11.001241 containerd[1508]: time="2024-12-13T13:57:11.001217482Z" level=info msg="Start event monitor"
Dec 13 13:57:11.001366 containerd[1508]: time="2024-12-13T13:57:11.001342570Z" level=info msg="Start snapshots syncer"
Dec 13 13:57:11.001455 containerd[1508]: time="2024-12-13T13:57:11.001434616Z" level=info msg="Start cni network conf syncer for default"
Dec 13 13:57:11.001541 containerd[1508]: time="2024-12-13T13:57:11.001521242Z" level=info msg="Start streaming server"
Dec 13 13:57:11.001876 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 13:57:11.008107 containerd[1508]: time="2024-12-13T13:57:11.008051014Z" level=info msg="containerd successfully booted in 0.081084s"
Dec 13 13:57:11.021711 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 13:57:11.033700 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 13:57:11.053662 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 13:57:11.053982 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 13:57:11.065434 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 13:57:11.082373 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 13:57:11.091830 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 13:57:11.103756 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 13:57:11.104927 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 13:57:11.335735 tar[1492]: linux-amd64/LICENSE
Dec 13 13:57:11.336254 tar[1492]: linux-amd64/README.md
Dec 13 13:57:11.340382 systemd-networkd[1417]: eth0: Gained IPv6LL
Dec 13 13:57:11.346954 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 13:57:11.350241 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 13:57:11.358758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:57:11.375345 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 13:57:11.379415 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 13:57:11.401254 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 13:57:12.241237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:57:12.252014 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:57:12.850169 systemd-networkd[1417]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:3c7:24:19ff:fef4:f1e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:3c7:24:19ff:fef4:f1e/64 assigned by NDisc.
Dec 13 13:57:12.850748 systemd-networkd[1417]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 13:57:12.994615 kubelet[1602]: E1213 13:57:12.994403    1602 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:57:12.998526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:57:12.998886 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:57:13.000613 systemd[1]: kubelet.service: Consumed 1.129s CPU time.
Dec 13 13:57:15.437021 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 13:57:15.452909 systemd[1]: Started sshd@0-10.244.15.30:22-139.178.68.195:50510.service - OpenSSH per-connection server daemon (139.178.68.195:50510).
Dec 13 13:57:16.148798 agetty[1579]: failed to open credentials directory
Dec 13 13:57:16.148840 agetty[1580]: failed to open credentials directory
Dec 13 13:57:16.174742 login[1580]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Dec 13 13:57:16.175552 login[1579]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 13:57:16.195142 systemd-logind[1486]: New session 2 of user core.
Dec 13 13:57:16.199080 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 13:57:16.206797 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 13:57:16.233455 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 13:57:16.242854 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 13:57:16.251289 (systemd)[1622]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 13:57:16.368496 sshd[1614]: Accepted publickey for core from 139.178.68.195 port 50510 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:57:16.371585 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:57:16.379203 systemd-logind[1486]: New session 3 of user core.
Dec 13 13:57:16.417193 systemd[1622]: Queued start job for default target default.target.
Dec 13 13:57:16.424605 systemd[1622]: Created slice app.slice - User Application Slice.
Dec 13 13:57:16.424652 systemd[1622]: Reached target paths.target - Paths.
Dec 13 13:57:16.424677 systemd[1622]: Reached target timers.target - Timers.
Dec 13 13:57:16.427131 systemd[1622]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 13:57:16.454209 systemd[1622]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 13:57:16.454451 systemd[1622]: Reached target sockets.target - Sockets.
Dec 13 13:57:16.454479 systemd[1622]: Reached target basic.target - Basic System.
Dec 13 13:57:16.454568 systemd[1622]: Reached target default.target - Main User Target.
Dec 13 13:57:16.454638 systemd[1622]: Startup finished in 194ms.
Dec 13 13:57:16.455016 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 13:57:16.470792 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 13:57:16.472677 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 13:57:17.141747 systemd[1]: Started sshd@1-10.244.15.30:22-139.178.68.195:54626.service - OpenSSH per-connection server daemon (139.178.68.195:54626).
Dec 13 13:57:17.177387 login[1580]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 13:57:17.185446 systemd-logind[1486]: New session 1 of user core.
Dec 13 13:57:17.195665 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 13:57:17.239941 coreos-metadata[1477]: Dec 13 13:57:17.239 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 13:57:17.268274 coreos-metadata[1477]: Dec 13 13:57:17.268 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Dec 13 13:57:17.275377 coreos-metadata[1477]: Dec 13 13:57:17.275 INFO Fetch failed with 404: resource not found
Dec 13 13:57:17.275377 coreos-metadata[1477]: Dec 13 13:57:17.275 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 13:57:17.275976 coreos-metadata[1477]: Dec 13 13:57:17.275 INFO Fetch successful
Dec 13 13:57:17.276122 coreos-metadata[1477]: Dec 13 13:57:17.276 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 13 13:57:17.290419 coreos-metadata[1477]: Dec 13 13:57:17.290 INFO Fetch successful
Dec 13 13:57:17.290652 coreos-metadata[1477]: Dec 13 13:57:17.290 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 13 13:57:17.322189 coreos-metadata[1477]: Dec 13 13:57:17.322 INFO Fetch successful
Dec 13 13:57:17.322662 coreos-metadata[1477]: Dec 13 13:57:17.322 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 13 13:57:17.337090 coreos-metadata[1477]: Dec 13 13:57:17.337 INFO Fetch successful
Dec 13 13:57:17.337254 coreos-metadata[1477]: Dec 13 13:57:17.337 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 13 13:57:17.355648 coreos-metadata[1477]: Dec 13 13:57:17.355 INFO Fetch successful
Dec 13 13:57:17.401220 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 13:57:17.403131 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 13:57:17.846057 coreos-metadata[1555]: Dec 13 13:57:17.845 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 13:57:17.868614 coreos-metadata[1555]: Dec 13 13:57:17.868 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 13:57:17.893173 coreos-metadata[1555]: Dec 13 13:57:17.893 INFO Fetch successful
Dec 13 13:57:17.893618 coreos-metadata[1555]: Dec 13 13:57:17.893 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 13:57:17.925768 coreos-metadata[1555]: Dec 13 13:57:17.925 INFO Fetch successful
Dec 13 13:57:17.927744 unknown[1555]: wrote ssh authorized keys file for user: core
Dec 13 13:57:17.948103 update-ssh-keys[1665]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 13:57:17.948695 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 13:57:17.952261 systemd[1]: Finished sshkeys.service.
Dec 13 13:57:17.953735 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 13:57:17.954167 systemd[1]: Startup finished in 1.352s (kernel) + 15.597s (initrd) + 11.896s (userspace) = 28.846s.
Dec 13 13:57:18.035223 sshd[1645]: Accepted publickey for core from 139.178.68.195 port 54626 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:57:18.037216 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:57:18.044778 systemd-logind[1486]: New session 4 of user core.
Dec 13 13:57:18.059697 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 13:57:18.654397 sshd[1668]: Connection closed by 139.178.68.195 port 54626
Dec 13 13:57:18.655425 sshd-session[1645]: pam_unix(sshd:session): session closed for user core
Dec 13 13:57:18.660523 systemd[1]: sshd@1-10.244.15.30:22-139.178.68.195:54626.service: Deactivated successfully.
Dec 13 13:57:18.662633 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 13:57:18.663584 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit.
Dec 13 13:57:18.665040 systemd-logind[1486]: Removed session 4.
Dec 13 13:57:18.817757 systemd[1]: Started sshd@2-10.244.15.30:22-139.178.68.195:54628.service - OpenSSH per-connection server daemon (139.178.68.195:54628).
Dec 13 13:57:19.708120 sshd[1673]: Accepted publickey for core from 139.178.68.195 port 54628 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:57:19.710277 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:57:19.717428 systemd-logind[1486]: New session 5 of user core.
Dec 13 13:57:19.724545 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 13:57:20.322027 sshd[1675]: Connection closed by 139.178.68.195 port 54628
Dec 13 13:57:20.321050 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
Dec 13 13:57:20.325175 systemd[1]: sshd@2-10.244.15.30:22-139.178.68.195:54628.service: Deactivated successfully.
Dec 13 13:57:20.327467 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 13:57:20.330335 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit.
Dec 13 13:57:20.331757 systemd-logind[1486]: Removed session 5.
Dec 13 13:57:20.479154 systemd[1]: Started sshd@3-10.244.15.30:22-139.178.68.195:54634.service - OpenSSH per-connection server daemon (139.178.68.195:54634).
Dec 13 13:57:21.386787 sshd[1680]: Accepted publickey for core from 139.178.68.195 port 54634 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:57:21.389511 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:57:21.397851 systemd-logind[1486]: New session 6 of user core.
Dec 13 13:57:21.405640 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 13:57:22.009228 sshd[1682]: Connection closed by 139.178.68.195 port 54634
Dec 13 13:57:22.010228 sshd-session[1680]: pam_unix(sshd:session): session closed for user core
Dec 13 13:57:22.015843 systemd[1]: sshd@3-10.244.15.30:22-139.178.68.195:54634.service: Deactivated successfully.
Dec 13 13:57:22.018542 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 13:57:22.019827 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit.
Dec 13 13:57:22.021103 systemd-logind[1486]: Removed session 6.
Dec 13 13:57:22.166672 systemd[1]: Started sshd@4-10.244.15.30:22-139.178.68.195:54638.service - OpenSSH per-connection server daemon (139.178.68.195:54638).
Dec 13 13:57:23.068087 sshd[1687]: Accepted publickey for core from 139.178.68.195 port 54638 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:57:23.070461 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:57:23.072099 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:57:23.079616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:57:23.084079 systemd-logind[1486]: New session 7 of user core.
Dec 13 13:57:23.088185 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 13:57:23.243578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:57:23.250523 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:57:23.351776 kubelet[1698]: E1213 13:57:23.351692 1698 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:57:23.356652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:57:23.356901 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:57:23.558831 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 13:57:23.559364 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:57:23.577011 sudo[1706]: pam_unix(sudo:session): session closed for user root
Dec 13 13:57:23.720865 sshd[1692]: Connection closed by 139.178.68.195 port 54638
Dec 13 13:57:23.722521 sshd-session[1687]: pam_unix(sshd:session): session closed for user core
Dec 13 13:57:23.727092 systemd[1]: sshd@4-10.244.15.30:22-139.178.68.195:54638.service: Deactivated successfully.
Dec 13 13:57:23.729384 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 13:57:23.731224 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit.
Dec 13 13:57:23.733209 systemd-logind[1486]: Removed session 7.
Dec 13 13:57:23.881869 systemd[1]: Started sshd@5-10.244.15.30:22-139.178.68.195:54642.service - OpenSSH per-connection server daemon (139.178.68.195:54642).
Dec 13 13:57:24.773481 sshd[1711]: Accepted publickey for core from 139.178.68.195 port 54642 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:57:24.776070 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:57:24.784305 systemd-logind[1486]: New session 8 of user core.
Dec 13 13:57:24.794685 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 13:57:25.252071 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 13:57:25.252641 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:57:25.258393 sudo[1715]: pam_unix(sudo:session): session closed for user root
Dec 13 13:57:25.266960 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 13 13:57:25.267478 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:57:25.297829 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:57:25.337722 augenrules[1737]: No rules
Dec 13 13:57:25.338719 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:57:25.339006 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:57:25.340745 sudo[1714]: pam_unix(sudo:session): session closed for user root
Dec 13 13:57:25.483597 sshd[1713]: Connection closed by 139.178.68.195 port 54642
Dec 13 13:57:25.484594 sshd-session[1711]: pam_unix(sshd:session): session closed for user core
Dec 13 13:57:25.489930 systemd[1]: sshd@5-10.244.15.30:22-139.178.68.195:54642.service: Deactivated successfully.
Dec 13 13:57:25.492166 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 13:57:25.493086 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit.
Dec 13 13:57:25.494845 systemd-logind[1486]: Removed session 8.
Dec 13 13:57:25.645991 systemd[1]: Started sshd@6-10.244.15.30:22-139.178.68.195:54652.service - OpenSSH per-connection server daemon (139.178.68.195:54652).
Dec 13 13:57:26.535093 sshd[1745]: Accepted publickey for core from 139.178.68.195 port 54652 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:57:26.537159 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:57:26.546154 systemd-logind[1486]: New session 9 of user core.
Dec 13 13:57:26.552538 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 13:57:27.011110 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 13:57:27.012347 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:57:27.502053 (dockerd)[1766]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 13:57:27.502895 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 13:57:27.911206 dockerd[1766]: time="2024-12-13T13:57:27.911032271Z" level=info msg="Starting up"
Dec 13 13:57:28.056002 systemd[1]: var-lib-docker-metacopy\x2dcheck1673041194-merged.mount: Deactivated successfully.
Dec 13 13:57:28.077746 dockerd[1766]: time="2024-12-13T13:57:28.077594166Z" level=info msg="Loading containers: start."
Dec 13 13:57:28.294368 kernel: Initializing XFRM netlink socket
Dec 13 13:57:28.412926 systemd-networkd[1417]: docker0: Link UP
Dec 13 13:57:28.453936 dockerd[1766]: time="2024-12-13T13:57:28.453808575Z" level=info msg="Loading containers: done."
Dec 13 13:57:28.483422 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1897220414-merged.mount: Deactivated successfully.
Dec 13 13:57:28.484612 dockerd[1766]: time="2024-12-13T13:57:28.483767808Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 13:57:28.484612 dockerd[1766]: time="2024-12-13T13:57:28.484161331Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Dec 13 13:57:28.484612 dockerd[1766]: time="2024-12-13T13:57:28.484409333Z" level=info msg="Daemon has completed initialization"
Dec 13 13:57:28.528082 dockerd[1766]: time="2024-12-13T13:57:28.527377927Z" level=info msg="API listen on /run/docker.sock"
Dec 13 13:57:28.528439 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 13:57:29.825653 containerd[1508]: time="2024-12-13T13:57:29.824950652Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 13:57:30.610011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2767047816.mount: Deactivated successfully.
Dec 13 13:57:33.133760 containerd[1508]: time="2024-12-13T13:57:33.132836287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:33.135946 containerd[1508]: time="2024-12-13T13:57:33.134768092Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262"
Dec 13 13:57:33.136171 containerd[1508]: time="2024-12-13T13:57:33.136105651Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:33.139908 containerd[1508]: time="2024-12-13T13:57:33.139833605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:33.142086 containerd[1508]: time="2024-12-13T13:57:33.141592874Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.316205855s"
Dec 13 13:57:33.142086 containerd[1508]: time="2024-12-13T13:57:33.141680929Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 13:57:33.179565 containerd[1508]: time="2024-12-13T13:57:33.179477894Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 13:57:33.587758 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 13:57:33.598619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:57:33.786891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:57:33.803177 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:57:33.889510 kubelet[2030]: E1213 13:57:33.889047 2030 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:57:33.893788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:57:33.894101 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:57:36.134413 containerd[1508]: time="2024-12-13T13:57:36.133484808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:36.138750 containerd[1508]: time="2024-12-13T13:57:36.135594077Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740"
Dec 13 13:57:36.138750 containerd[1508]: time="2024-12-13T13:57:36.137818956Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:36.142880 containerd[1508]: time="2024-12-13T13:57:36.142786309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:36.144811 containerd[1508]: time="2024-12-13T13:57:36.144413347Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.964857516s"
Dec 13 13:57:36.144811 containerd[1508]: time="2024-12-13T13:57:36.144496142Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 13:57:36.183362 containerd[1508]: time="2024-12-13T13:57:36.183277351Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 13:57:37.813937 containerd[1508]: time="2024-12-13T13:57:37.813870271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:37.816057 containerd[1508]: time="2024-12-13T13:57:37.815692192Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830"
Dec 13 13:57:37.817000 containerd[1508]: time="2024-12-13T13:57:37.816957106Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:37.836804 containerd[1508]: time="2024-12-13T13:57:37.836728176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:37.838795 containerd[1508]: time="2024-12-13T13:57:37.838537911Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.655178429s"
Dec 13 13:57:37.838795 containerd[1508]: time="2024-12-13T13:57:37.838584698Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 13:57:37.869790 containerd[1508]: time="2024-12-13T13:57:37.869729003Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 13:57:39.429161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount316667733.mount: Deactivated successfully.
Dec 13 13:57:40.133011 containerd[1508]: time="2024-12-13T13:57:40.131904352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:40.134785 containerd[1508]: time="2024-12-13T13:57:40.134652419Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966"
Dec 13 13:57:40.135553 containerd[1508]: time="2024-12-13T13:57:40.135453308Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:40.139574 containerd[1508]: time="2024-12-13T13:57:40.139516496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:40.141056 containerd[1508]: time="2024-12-13T13:57:40.140693164Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.270639741s"
Dec 13 13:57:40.141056 containerd[1508]: time="2024-12-13T13:57:40.140765876Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 13:57:40.177746 containerd[1508]: time="2024-12-13T13:57:40.177685507Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 13:57:40.772095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432770279.mount: Deactivated successfully.
Dec 13 13:57:42.021627 containerd[1508]: time="2024-12-13T13:57:42.020928615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:42.026335 containerd[1508]: time="2024-12-13T13:57:42.023164224Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Dec 13 13:57:42.027338 containerd[1508]: time="2024-12-13T13:57:42.026780899Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:42.028898 containerd[1508]: time="2024-12-13T13:57:42.028494631Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.850750608s"
Dec 13 13:57:42.028898 containerd[1508]: time="2024-12-13T13:57:42.028547391Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 13:57:42.030385 containerd[1508]: time="2024-12-13T13:57:42.030333602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:42.064914 containerd[1508]: time="2024-12-13T13:57:42.064853074Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 13:57:42.678462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560735610.mount: Deactivated successfully.
Dec 13 13:57:42.683830 containerd[1508]: time="2024-12-13T13:57:42.683678965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:42.685317 containerd[1508]: time="2024-12-13T13:57:42.684988282Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Dec 13 13:57:42.685317 containerd[1508]: time="2024-12-13T13:57:42.685237048Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:42.688320 containerd[1508]: time="2024-12-13T13:57:42.688240195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:42.689898 containerd[1508]: time="2024-12-13T13:57:42.689616267Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 624.70464ms"
Dec 13 13:57:42.689898 containerd[1508]: time="2024-12-13T13:57:42.689687203Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 13:57:42.726355 containerd[1508]: time="2024-12-13T13:57:42.726264245Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 13:57:42.891478 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 13:57:43.351242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2460722521.mount: Deactivated successfully.
Dec 13 13:57:44.086052 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 13:57:44.095589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:57:44.372635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:57:44.384877 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:57:44.496401 kubelet[2176]: E1213 13:57:44.496135 2176 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:57:44.499656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:57:44.499939 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:57:49.189413 containerd[1508]: time="2024-12-13T13:57:49.188629557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:49.191114 containerd[1508]: time="2024-12-13T13:57:49.190486440Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Dec 13 13:57:49.192513 containerd[1508]: time="2024-12-13T13:57:49.191815008Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:49.196886 containerd[1508]: time="2024-12-13T13:57:49.196849353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:57:49.199729 containerd[1508]: time="2024-12-13T13:57:49.199589645Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 6.473237528s"
Dec 13 13:57:49.199729 containerd[1508]: time="2024-12-13T13:57:49.199673386Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 13:57:53.883635 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:57:53.901705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:57:53.934141 systemd[1]: Reloading requested from client PID 2253 ('systemctl') (unit session-9.scope)...
Dec 13 13:57:53.934196 systemd[1]: Reloading...
Dec 13 13:57:54.112340 zram_generator::config[2289]: No configuration found.
Dec 13 13:57:54.285009 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:57:54.400914 systemd[1]: Reloading finished in 466 ms.
Dec 13 13:57:54.478047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:57:54.483248 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:57:54.487161 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 13:57:54.487482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:57:54.500796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:57:54.646570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:57:54.649960 (kubelet)[2361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 13:57:54.752061 kubelet[2361]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:57:54.752061 kubelet[2361]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 13:57:54.752061 kubelet[2361]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:57:54.754653 kubelet[2361]: I1213 13:57:54.754535 2361 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 13:57:55.120742 update_engine[1488]: I20241213 13:57:55.120538 1488 update_attempter.cc:509] Updating boot flags...
Dec 13 13:57:55.134478 kubelet[2361]: I1213 13:57:55.134439 2361 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 13:57:55.136340 kubelet[2361]: I1213 13:57:55.134898 2361 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 13:57:55.136340 kubelet[2361]: I1213 13:57:55.135233 2361 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 13:57:55.180028 kubelet[2361]: E1213 13:57:55.179982 2361 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.15.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:55.182433 kubelet[2361]: I1213 13:57:55.182402 2361 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 13:57:55.206191 kubelet[2361]: I1213 13:57:55.206122 2361 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 13:57:55.209030 kubelet[2361]: I1213 13:57:55.208989 2361 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 13:57:55.210320 kubelet[2361]: I1213 13:57:55.210179 2361 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 13:57:55.210320 kubelet[2361]: I1213 13:57:55.210244 2361 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 13:57:55.210320 kubelet[2361]: I1213 13:57:55.210264 2361 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 13:57:55.210673 kubelet[2361]: I1213 13:57:55.210489 2361 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:57:55.213522 kubelet[2361]: I1213 13:57:55.213493 2361 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 13:57:55.213605 kubelet[2361]: I1213 13:57:55.213538 2361 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 13:57:55.216319 kubelet[2361]: I1213 13:57:55.214957 2361 kubelet.go:312] "Adding apiserver pod source"
Dec 13 13:57:55.216319 kubelet[2361]: I1213 13:57:55.215026 2361 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 13:57:55.216319 kubelet[2361]: W1213 13:57:55.215590 2361 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.244.15.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3exgq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:55.216319 kubelet[2361]: E1213 13:57:55.215662 2361 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.15.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3exgq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:55.216887 kubelet[2361]: W1213 13:57:55.216525 2361 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.244.15.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:55.216887 kubelet[2361]: E1213 13:57:55.216605 2361 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.15.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:55.217277 kubelet[2361]: I1213 13:57:55.217248 2361 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Dec 13 13:57:55.222326 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2376)
Dec 13 13:57:55.238729 kubelet[2361]: I1213 13:57:55.238686 2361 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 13:57:55.259331 kubelet[2361]: W1213 13:57:55.258138 2361 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 13:57:55.263059 kubelet[2361]: I1213 13:57:55.263021 2361 server.go:1256] "Started kubelet"
Dec 13 13:57:55.266449 kubelet[2361]: I1213 13:57:55.266327 2361 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 13:57:55.267772 kubelet[2361]: I1213 13:57:55.267702 2361 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 13:57:55.268202 kubelet[2361]: I1213 13:57:55.268176 2361 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 13:57:55.270020 kubelet[2361]: I1213 13:57:55.269963 2361 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 13:57:55.276529 kubelet[2361]: E1213 13:57:55.276493 2361 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.15.30:6443/api/v1/namespaces/default/events\": dial tcp 10.244.15.30:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-3exgq.gb1.brightbox.com.1810c13196aada8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-3exgq.gb1.brightbox.com,UID:srv-3exgq.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-3exgq.gb1.brightbox.com,},FirstTimestamp:2024-12-13 13:57:55.262978702 +0000 UTC m=+0.605361861,LastTimestamp:2024-12-13 13:57:55.262978702 +0000 UTC m=+0.605361861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-3exgq.gb1.brightbox.com,}"
Dec 13 13:57:55.284131 kubelet[2361]: I1213 13:57:55.283092 2361 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 13:57:55.299319 kubelet[2361]: I1213 13:57:55.298334 2361 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 13:57:55.304917 kubelet[2361]: I1213 13:57:55.304873 2361 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 13:57:55.305062 kubelet[2361]: I1213 13:57:55.305036 2361 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 13:57:55.308897 kubelet[2361]: E1213 13:57:55.308870 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.15.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3exgq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.15.30:6443: connect: connection refused" interval="200ms"
Dec 13 13:57:55.315765 kubelet[2361]: W1213 13:57:55.314929 2361 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.244.15.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:55.315765 kubelet[2361]: E1213 13:57:55.314996 2361 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.15.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:55.321809 kubelet[2361]: I1213 13:57:55.321779 2361 factory.go:221] Registration of the containerd container factory successfully
Dec 13 13:57:55.321809 kubelet[2361]: I1213 13:57:55.321805
2361 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:57:55.321973 kubelet[2361]: I1213 13:57:55.321888 2361 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:57:55.350321 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2378) Dec 13 13:57:55.355321 kubelet[2361]: E1213 13:57:55.354367 2361 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:57:55.385771 kubelet[2361]: I1213 13:57:55.385642 2361 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:57:55.393524 kubelet[2361]: I1213 13:57:55.393498 2361 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:57:55.394887 kubelet[2361]: I1213 13:57:55.393704 2361 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:57:55.394887 kubelet[2361]: I1213 13:57:55.393766 2361 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:57:55.394887 kubelet[2361]: E1213 13:57:55.393858 2361 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:57:55.395691 kubelet[2361]: W1213 13:57:55.395655 2361 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.244.15.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused Dec 13 13:57:55.395775 kubelet[2361]: E1213 13:57:55.395700 2361 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.244.15.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused Dec 13 13:57:55.424818 kubelet[2361]: I1213 13:57:55.424783 2361 kubelet_node_status.go:73] "Attempting to register node" node="srv-3exgq.gb1.brightbox.com" Dec 13 13:57:55.428337 kubelet[2361]: I1213 13:57:55.424903 2361 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:57:55.428337 kubelet[2361]: I1213 13:57:55.426470 2361 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:57:55.428337 kubelet[2361]: I1213 13:57:55.426518 2361 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:57:55.429058 kubelet[2361]: E1213 13:57:55.429029 2361 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.15.30:6443/api/v1/nodes\": dial tcp 10.244.15.30:6443: connect: connection refused" node="srv-3exgq.gb1.brightbox.com" Dec 13 13:57:55.434979 kubelet[2361]: I1213 13:57:55.434950 2361 policy_none.go:49] "None policy: Start" Dec 13 13:57:55.437096 kubelet[2361]: I1213 13:57:55.437070 2361 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:57:55.437197 kubelet[2361]: I1213 13:57:55.437118 2361 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:57:55.456721 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:57:55.467334 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2378) Dec 13 13:57:55.490623 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 13 13:57:55.498331 kubelet[2361]: E1213 13:57:55.496930 2361 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 13:57:55.510346 kubelet[2361]: E1213 13:57:55.509989 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.15.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3exgq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.15.30:6443: connect: connection refused" interval="400ms"
Dec 13 13:57:55.525784 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 13:57:55.548859 kubelet[2361]: I1213 13:57:55.548815 2361 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 13:57:55.554356 kubelet[2361]: I1213 13:57:55.553024 2361 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 13:57:55.555877 kubelet[2361]: E1213 13:57:55.555832 2361 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-3exgq.gb1.brightbox.com\" not found"
Dec 13 13:57:55.632793 kubelet[2361]: I1213 13:57:55.632742 2361 kubelet_node_status.go:73] "Attempting to register node" node="srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:55.633342 kubelet[2361]: E1213 13:57:55.633320 2361 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.15.30:6443/api/v1/nodes\": dial tcp 10.244.15.30:6443: connect: connection refused" node="srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:55.698950 kubelet[2361]: I1213 13:57:55.698727 2361 topology_manager.go:215] "Topology Admit Handler" podUID="6110810b160ee4fa74a9e62e8795f98e" podNamespace="kube-system" podName="kube-scheduler-srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:55.702368 kubelet[2361]: I1213 13:57:55.702019 2361 topology_manager.go:215] "Topology Admit Handler" podUID="fd86c6dd2616273d8f441df16a16fc64" podNamespace="kube-system" podName="kube-apiserver-srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:55.705068 kubelet[2361]: I1213 13:57:55.705038 2361 topology_manager.go:215] "Topology Admit Handler" podUID="a7bd20a56264e482813a3c8830415882" podNamespace="kube-system" podName="kube-controller-manager-srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:55.708575 kubelet[2361]: I1213 13:57:55.708547 2361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd86c6dd2616273d8f441df16a16fc64-usr-share-ca-certificates\") pod \"kube-apiserver-srv-3exgq.gb1.brightbox.com\" (UID: \"fd86c6dd2616273d8f441df16a16fc64\") " pod="kube-system/kube-apiserver-srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:55.710597 kubelet[2361]: I1213 13:57:55.709348 2361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a7bd20a56264e482813a3c8830415882-flexvolume-dir\") pod \"kube-controller-manager-srv-3exgq.gb1.brightbox.com\" (UID: \"a7bd20a56264e482813a3c8830415882\") " pod="kube-system/kube-controller-manager-srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:55.710597 kubelet[2361]: I1213 13:57:55.709457 2361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7bd20a56264e482813a3c8830415882-k8s-certs\") pod \"kube-controller-manager-srv-3exgq.gb1.brightbox.com\" (UID: \"a7bd20a56264e482813a3c8830415882\") " pod="kube-system/kube-controller-manager-srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:55.710597 kubelet[2361]: I1213 13:57:55.709666 2361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6110810b160ee4fa74a9e62e8795f98e-kubeconfig\") pod \"kube-scheduler-srv-3exgq.gb1.brightbox.com\" (UID: \"6110810b160ee4fa74a9e62e8795f98e\") " pod="kube-system/kube-scheduler-srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:55.710597 kubelet[2361]: I1213 13:57:55.709744 2361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd86c6dd2616273d8f441df16a16fc64-ca-certs\") pod \"kube-apiserver-srv-3exgq.gb1.brightbox.com\" (UID: \"fd86c6dd2616273d8f441df16a16fc64\") " pod="kube-system/kube-apiserver-srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:55.710597 kubelet[2361]: I1213 13:57:55.709831 2361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7bd20a56264e482813a3c8830415882-ca-certs\") pod \"kube-controller-manager-srv-3exgq.gb1.brightbox.com\" (UID: \"a7bd20a56264e482813a3c8830415882\") " pod="kube-system/kube-controller-manager-srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:55.710899 kubelet[2361]: I1213 13:57:55.709919 2361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7bd20a56264e482813a3c8830415882-kubeconfig\") pod \"kube-controller-manager-srv-3exgq.gb1.brightbox.com\" (UID: \"a7bd20a56264e482813a3c8830415882\") " pod="kube-system/kube-controller-manager-srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:55.710899 kubelet[2361]: I1213 13:57:55.709990 2361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7bd20a56264e482813a3c8830415882-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-3exgq.gb1.brightbox.com\" (UID: \"a7bd20a56264e482813a3c8830415882\") " pod="kube-system/kube-controller-manager-srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:55.710899 kubelet[2361]: I1213 13:57:55.710025 2361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd86c6dd2616273d8f441df16a16fc64-k8s-certs\") pod \"kube-apiserver-srv-3exgq.gb1.brightbox.com\" (UID: \"fd86c6dd2616273d8f441df16a16fc64\") " pod="kube-system/kube-apiserver-srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:55.719120 systemd[1]: Created slice kubepods-burstable-pod6110810b160ee4fa74a9e62e8795f98e.slice - libcontainer container kubepods-burstable-pod6110810b160ee4fa74a9e62e8795f98e.slice.
Dec 13 13:57:55.736355 systemd[1]: Created slice kubepods-burstable-podfd86c6dd2616273d8f441df16a16fc64.slice - libcontainer container kubepods-burstable-podfd86c6dd2616273d8f441df16a16fc64.slice.
Dec 13 13:57:55.744333 systemd[1]: Created slice kubepods-burstable-poda7bd20a56264e482813a3c8830415882.slice - libcontainer container kubepods-burstable-poda7bd20a56264e482813a3c8830415882.slice.
Dec 13 13:57:55.911016 kubelet[2361]: E1213 13:57:55.910919 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.15.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3exgq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.15.30:6443: connect: connection refused" interval="800ms"
Dec 13 13:57:56.034420 containerd[1508]: time="2024-12-13T13:57:56.033361524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-3exgq.gb1.brightbox.com,Uid:6110810b160ee4fa74a9e62e8795f98e,Namespace:kube-system,Attempt:0,}"
Dec 13 13:57:56.037741 kubelet[2361]: I1213 13:57:56.037700 2361 kubelet_node_status.go:73] "Attempting to register node" node="srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:56.038186 kubelet[2361]: E1213 13:57:56.038162 2361 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.15.30:6443/api/v1/nodes\": dial tcp 10.244.15.30:6443: connect: connection refused" node="srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:56.041018 containerd[1508]: time="2024-12-13T13:57:56.040948634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-3exgq.gb1.brightbox.com,Uid:fd86c6dd2616273d8f441df16a16fc64,Namespace:kube-system,Attempt:0,}"
Dec 13 13:57:56.049232 containerd[1508]: time="2024-12-13T13:57:56.048953909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-3exgq.gb1.brightbox.com,Uid:a7bd20a56264e482813a3c8830415882,Namespace:kube-system,Attempt:0,}"
Dec 13 13:57:56.149093 kubelet[2361]: W1213 13:57:56.148933 2361 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.244.15.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:56.149093 kubelet[2361]: E1213 13:57:56.149052 2361 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.15.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:56.365428 kubelet[2361]: W1213 13:57:56.365240 2361 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.244.15.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:56.365428 kubelet[2361]: E1213 13:57:56.365385 2361 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.15.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:56.407981 kubelet[2361]: W1213 13:57:56.407897 2361 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.244.15.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3exgq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:56.407981 kubelet[2361]: E1213 13:57:56.407980 2361 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.15.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3exgq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:56.458137 kubelet[2361]: W1213 13:57:56.458053 2361 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.244.15.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:56.458698 kubelet[2361]: E1213 13:57:56.458139 2361 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.15.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:56.627539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1053074502.mount: Deactivated successfully.
Dec 13 13:57:56.652216 containerd[1508]: time="2024-12-13T13:57:56.652034797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:57:56.654426 containerd[1508]: time="2024-12-13T13:57:56.654345356Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:57:56.661228 containerd[1508]: time="2024-12-13T13:57:56.661096481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Dec 13 13:57:56.662826 containerd[1508]: time="2024-12-13T13:57:56.662729739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 13:57:56.664835 containerd[1508]: time="2024-12-13T13:57:56.664591194Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:57:56.668487 containerd[1508]: time="2024-12-13T13:57:56.668447508Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:57:56.669615 containerd[1508]: time="2024-12-13T13:57:56.669574746Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 13:57:56.674330 containerd[1508]: time="2024-12-13T13:57:56.673809673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:57:56.675781 containerd[1508]: time="2024-12-13T13:57:56.675741588Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 626.676641ms"
Dec 13 13:57:56.677370 containerd[1508]: time="2024-12-13T13:57:56.677336071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 643.70894ms"
Dec 13 13:57:56.684406 containerd[1508]: time="2024-12-13T13:57:56.684368090Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 643.338606ms"
Dec 13 13:57:56.712228 kubelet[2361]: E1213 13:57:56.712085 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.15.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3exgq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.15.30:6443: connect: connection refused" interval="1.6s"
Dec 13 13:57:56.845906 kubelet[2361]: I1213 13:57:56.845583 2361 kubelet_node_status.go:73] "Attempting to register node" node="srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:56.847434 kubelet[2361]: E1213 13:57:56.847398 2361 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.15.30:6443/api/v1/nodes\": dial tcp 10.244.15.30:6443: connect: connection refused" node="srv-3exgq.gb1.brightbox.com"
Dec 13 13:57:56.869402 containerd[1508]: time="2024-12-13T13:57:56.865033469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:57:56.869402 containerd[1508]: time="2024-12-13T13:57:56.865137973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:57:56.869402 containerd[1508]: time="2024-12-13T13:57:56.865168645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:57:56.869402 containerd[1508]: time="2024-12-13T13:57:56.865307696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:57:56.875694 containerd[1508]: time="2024-12-13T13:57:56.875497091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:57:56.875694 containerd[1508]: time="2024-12-13T13:57:56.875513851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:57:56.875992 containerd[1508]: time="2024-12-13T13:57:56.875782582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:57:56.875992 containerd[1508]: time="2024-12-13T13:57:56.875805520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:57:56.875992 containerd[1508]: time="2024-12-13T13:57:56.875666479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:57:56.875992 containerd[1508]: time="2024-12-13T13:57:56.875946404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:57:56.876542 containerd[1508]: time="2024-12-13T13:57:56.876395349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:57:56.876647 containerd[1508]: time="2024-12-13T13:57:56.876572405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:57:56.914025 systemd[1]: Started cri-containerd-25947ed16a233065baf56419b7b6d5ad6f0f2d02a9bfa5f71dc175af830427b7.scope - libcontainer container 25947ed16a233065baf56419b7b6d5ad6f0f2d02a9bfa5f71dc175af830427b7.
Dec 13 13:57:56.927390 systemd[1]: Started cri-containerd-5b00de08b936776c434024ef32066f94cdf9f2593bd4557cb08a4ba72155febb.scope - libcontainer container 5b00de08b936776c434024ef32066f94cdf9f2593bd4557cb08a4ba72155febb.
Dec 13 13:57:56.940494 systemd[1]: Started cri-containerd-c32a80c09042dd6f5ce461bfd73b2e8935076c1ba7d250ca4750c096da6be834.scope - libcontainer container c32a80c09042dd6f5ce461bfd73b2e8935076c1ba7d250ca4750c096da6be834.
Dec 13 13:57:57.040485 containerd[1508]: time="2024-12-13T13:57:57.040395459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-3exgq.gb1.brightbox.com,Uid:fd86c6dd2616273d8f441df16a16fc64,Namespace:kube-system,Attempt:0,} returns sandbox id \"25947ed16a233065baf56419b7b6d5ad6f0f2d02a9bfa5f71dc175af830427b7\""
Dec 13 13:57:57.062022 containerd[1508]: time="2024-12-13T13:57:57.060817298Z" level=info msg="CreateContainer within sandbox \"25947ed16a233065baf56419b7b6d5ad6f0f2d02a9bfa5f71dc175af830427b7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 13:57:57.073853 containerd[1508]: time="2024-12-13T13:57:57.073778428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-3exgq.gb1.brightbox.com,Uid:6110810b160ee4fa74a9e62e8795f98e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b00de08b936776c434024ef32066f94cdf9f2593bd4557cb08a4ba72155febb\""
Dec 13 13:57:57.079658 containerd[1508]: time="2024-12-13T13:57:57.079604503Z" level=info msg="CreateContainer within sandbox \"5b00de08b936776c434024ef32066f94cdf9f2593bd4557cb08a4ba72155febb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 13:57:57.080429 containerd[1508]: time="2024-12-13T13:57:57.080393223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-3exgq.gb1.brightbox.com,Uid:a7bd20a56264e482813a3c8830415882,Namespace:kube-system,Attempt:0,} returns sandbox id \"c32a80c09042dd6f5ce461bfd73b2e8935076c1ba7d250ca4750c096da6be834\""
Dec 13 13:57:57.085439 containerd[1508]: time="2024-12-13T13:57:57.085390693Z" level=info msg="CreateContainer within sandbox \"c32a80c09042dd6f5ce461bfd73b2e8935076c1ba7d250ca4750c096da6be834\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 13:57:57.105579 containerd[1508]: time="2024-12-13T13:57:57.105514328Z" level=info msg="CreateContainer within sandbox \"25947ed16a233065baf56419b7b6d5ad6f0f2d02a9bfa5f71dc175af830427b7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"31554380ca110555da628417197d908477a41715ca48e4b24ee4c2367e156f12\""
Dec 13 13:57:57.107224 containerd[1508]: time="2024-12-13T13:57:57.107185106Z" level=info msg="StartContainer for \"31554380ca110555da628417197d908477a41715ca48e4b24ee4c2367e156f12\""
Dec 13 13:57:57.108376 containerd[1508]: time="2024-12-13T13:57:57.107577048Z" level=info msg="CreateContainer within sandbox \"5b00de08b936776c434024ef32066f94cdf9f2593bd4557cb08a4ba72155febb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9bc2419f34ed1abafdc90e842a745e3063b2ba6af1c3c7d9859934130f43d2f4\""
Dec 13 13:57:57.109328 containerd[1508]: time="2024-12-13T13:57:57.109044694Z" level=info msg="StartContainer for \"9bc2419f34ed1abafdc90e842a745e3063b2ba6af1c3c7d9859934130f43d2f4\""
Dec 13 13:57:57.114798 containerd[1508]: time="2024-12-13T13:57:57.114656821Z" level=info msg="CreateContainer within sandbox \"c32a80c09042dd6f5ce461bfd73b2e8935076c1ba7d250ca4750c096da6be834\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"091240a3ad63c79dcff6bad1d9593dd8aa676b6877a21c99b3fc328079fa1621\""
Dec 13 13:57:57.115412 containerd[1508]: time="2024-12-13T13:57:57.115353603Z" level=info msg="StartContainer for \"091240a3ad63c79dcff6bad1d9593dd8aa676b6877a21c99b3fc328079fa1621\""
Dec 13 13:57:57.156531 systemd[1]: Started cri-containerd-31554380ca110555da628417197d908477a41715ca48e4b24ee4c2367e156f12.scope - libcontainer container 31554380ca110555da628417197d908477a41715ca48e4b24ee4c2367e156f12.
Dec 13 13:57:57.181577 systemd[1]: Started cri-containerd-9bc2419f34ed1abafdc90e842a745e3063b2ba6af1c3c7d9859934130f43d2f4.scope - libcontainer container 9bc2419f34ed1abafdc90e842a745e3063b2ba6af1c3c7d9859934130f43d2f4.
Dec 13 13:57:57.203496 systemd[1]: Started cri-containerd-091240a3ad63c79dcff6bad1d9593dd8aa676b6877a21c99b3fc328079fa1621.scope - libcontainer container 091240a3ad63c79dcff6bad1d9593dd8aa676b6877a21c99b3fc328079fa1621.
Dec 13 13:57:57.287278 containerd[1508]: time="2024-12-13T13:57:57.286473734Z" level=info msg="StartContainer for \"9bc2419f34ed1abafdc90e842a745e3063b2ba6af1c3c7d9859934130f43d2f4\" returns successfully"
Dec 13 13:57:57.290621 containerd[1508]: time="2024-12-13T13:57:57.290519780Z" level=info msg="StartContainer for \"31554380ca110555da628417197d908477a41715ca48e4b24ee4c2367e156f12\" returns successfully"
Dec 13 13:57:57.317176 containerd[1508]: time="2024-12-13T13:57:57.317082351Z" level=info msg="StartContainer for \"091240a3ad63c79dcff6bad1d9593dd8aa676b6877a21c99b3fc328079fa1621\" returns successfully"
Dec 13 13:57:57.365279 kubelet[2361]: E1213 13:57:57.365063 2361 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.15.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.15.30:6443: connect: connection refused
Dec 13 13:57:58.452229 kubelet[2361]: I1213 13:57:58.452141 2361 kubelet_node_status.go:73] "Attempting to register node" node="srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:00.032513 kubelet[2361]: E1213 13:58:00.032392 2361 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-3exgq.gb1.brightbox.com\" not found" node="srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:00.075412 kubelet[2361]: I1213 13:58:00.075281 2361 kubelet_node_status.go:76] "Successfully registered node" node="srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:00.218350 kubelet[2361]: I1213 13:58:00.218303 2361 apiserver.go:52] "Watching apiserver"
Dec 13 13:58:00.306053 kubelet[2361]: I1213 13:58:00.305279 2361 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 13:58:02.857911 systemd[1]: Reloading requested from client PID 2655 ('systemctl') (unit session-9.scope)...
Dec 13 13:58:02.857968 systemd[1]: Reloading...
Dec 13 13:58:02.988357 zram_generator::config[2697]: No configuration found.
Dec 13 13:58:03.094347 kubelet[2361]: W1213 13:58:03.094215 2361 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 13:58:03.181200 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:58:03.315070 systemd[1]: Reloading finished in 456 ms.
Dec 13 13:58:03.387357 kubelet[2361]: I1213 13:58:03.387300 2361 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 13:58:03.387747 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:58:03.400865 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 13:58:03.401261 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:58:03.401394 systemd[1]: kubelet.service: Consumed 1.113s CPU time, 112.3M memory peak, 0B memory swap peak.
Dec 13 13:58:03.411810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:58:03.620850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:58:03.640804 (kubelet)[2758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:58:03.759924 sudo[2770]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 13:58:03.760569 sudo[2770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 13:58:03.769359 kubelet[2758]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:58:03.769359 kubelet[2758]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:58:03.769359 kubelet[2758]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:58:03.769359 kubelet[2758]: I1213 13:58:03.768769 2758 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:58:03.776912 kubelet[2758]: I1213 13:58:03.776836 2758 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:58:03.776912 kubelet[2758]: I1213 13:58:03.776868 2758 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:58:03.777156 kubelet[2758]: I1213 13:58:03.777109 2758 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:58:03.783335 kubelet[2758]: I1213 13:58:03.782570 2758 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 13:58:03.786903 kubelet[2758]: I1213 13:58:03.786864 2758 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 13:58:03.804127 kubelet[2758]: I1213 13:58:03.804073 2758 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 13:58:03.806085 kubelet[2758]: I1213 13:58:03.806006 2758 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 13:58:03.809281 kubelet[2758]: I1213 13:58:03.808962 2758 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 13:58:03.809281 kubelet[2758]: I1213 13:58:03.809055 2758 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 13:58:03.809281 kubelet[2758]: I1213 13:58:03.809074 2758 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 13:58:03.810594 kubelet[2758]: I1213 13:58:03.810568 2758 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:58:03.810969 kubelet[2758]: I1213 13:58:03.810934 2758 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 13:58:03.811140 kubelet[2758]: I1213 13:58:03.811104 2758 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 13:58:03.811342 kubelet[2758]: I1213 13:58:03.811323 2758 kubelet.go:312] "Adding apiserver pod source"
Dec 13 13:58:03.811474 kubelet[2758]: I1213 13:58:03.811454 2758 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 13:58:03.816542 kubelet[2758]: I1213 13:58:03.816506 2758 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Dec 13 13:58:03.818323 kubelet[2758]: I1213 13:58:03.816811 2758 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 13:58:03.818901 kubelet[2758]: I1213 13:58:03.818870 2758 server.go:1256] "Started kubelet"
Dec 13 13:58:03.820051 kubelet[2758]: I1213 13:58:03.820026 2758 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 13:58:03.821211 kubelet[2758]: I1213 13:58:03.821178 2758 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 13:58:03.821715 kubelet[2758]: I1213 13:58:03.821689 2758 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 13:58:03.823185 kubelet[2758]: I1213 13:58:03.821703 2758 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 13:58:03.825269 kubelet[2758]: I1213 13:58:03.825245 2758 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 13:58:03.842153 kubelet[2758]: I1213 13:58:03.842086 2758 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 13:58:03.857077 kubelet[2758]: I1213 13:58:03.857032 2758 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 13:58:03.857621 kubelet[2758]: I1213 13:58:03.857598 2758 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 13:58:03.862074 kubelet[2758]: I1213 13:58:03.862049 2758 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 13:58:03.864857 kubelet[2758]: I1213 13:58:03.864834 2758 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 13:58:03.865015 kubelet[2758]: I1213 13:58:03.864994 2758 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 13:58:03.865156 kubelet[2758]: I1213 13:58:03.865135 2758 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 13:58:03.865417 kubelet[2758]: E1213 13:58:03.865397 2758 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 13:58:03.890808 kubelet[2758]: I1213 13:58:03.890655 2758 factory.go:221] Registration of the systemd container factory successfully
Dec 13 13:58:03.894734 kubelet[2758]: I1213 13:58:03.894404 2758 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 13:58:03.906662 kubelet[2758]: E1213 13:58:03.905590 2758 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 13:58:03.906662 kubelet[2758]: I1213 13:58:03.905979 2758 factory.go:221] Registration of the containerd container factory successfully
Dec 13 13:58:03.965960 kubelet[2758]: E1213 13:58:03.965731 2758 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 13:58:03.973352 kubelet[2758]: I1213 13:58:03.973058 2758 kubelet_node_status.go:73] "Attempting to register node" node="srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.000016 kubelet[2758]: I1213 13:58:03.999832 2758 kubelet_node_status.go:112] "Node was previously registered" node="srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.000640 kubelet[2758]: I1213 13:58:04.000372 2758 kubelet_node_status.go:76] "Successfully registered node" node="srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.050978 kubelet[2758]: I1213 13:58:04.050470 2758 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 13:58:04.050978 kubelet[2758]: I1213 13:58:04.050513 2758 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 13:58:04.050978 kubelet[2758]: I1213 13:58:04.050553 2758 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:58:04.050978 kubelet[2758]: I1213 13:58:04.050825 2758 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 13:58:04.050978 kubelet[2758]: I1213 13:58:04.050876 2758 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 13:58:04.050978 kubelet[2758]: I1213 13:58:04.050901 2758 policy_none.go:49] "None policy: Start"
Dec 13 13:58:04.053182 kubelet[2758]: I1213 13:58:04.052726 2758 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 13:58:04.053182 kubelet[2758]: I1213 13:58:04.052775 2758 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 13:58:04.053182 kubelet[2758]: I1213 13:58:04.053024 2758 state_mem.go:75] "Updated machine memory state"
Dec 13 13:58:04.063794 kubelet[2758]: I1213 13:58:04.063370 2758 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 13:58:04.064410 kubelet[2758]: I1213 13:58:04.064381 2758 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 13:58:04.167139 kubelet[2758]: I1213 13:58:04.166949 2758 topology_manager.go:215] "Topology Admit Handler" podUID="fd86c6dd2616273d8f441df16a16fc64" podNamespace="kube-system" podName="kube-apiserver-srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.167426 kubelet[2758]: I1213 13:58:04.167191 2758 topology_manager.go:215] "Topology Admit Handler" podUID="a7bd20a56264e482813a3c8830415882" podNamespace="kube-system" podName="kube-controller-manager-srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.167426 kubelet[2758]: I1213 13:58:04.167278 2758 topology_manager.go:215] "Topology Admit Handler" podUID="6110810b160ee4fa74a9e62e8795f98e" podNamespace="kube-system" podName="kube-scheduler-srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.175782 kubelet[2758]: W1213 13:58:04.175700 2758 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 13:58:04.178437 kubelet[2758]: W1213 13:58:04.178266 2758 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 13:58:04.178437 kubelet[2758]: E1213 13:58:04.178372 2758 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-3exgq.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.179040 kubelet[2758]: W1213 13:58:04.179010 2758 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 13:58:04.265865 kubelet[2758]: I1213 13:58:04.265813 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd86c6dd2616273d8f441df16a16fc64-k8s-certs\") pod \"kube-apiserver-srv-3exgq.gb1.brightbox.com\" (UID: \"fd86c6dd2616273d8f441df16a16fc64\") " pod="kube-system/kube-apiserver-srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.266076 kubelet[2758]: I1213 13:58:04.265884 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd86c6dd2616273d8f441df16a16fc64-usr-share-ca-certificates\") pod \"kube-apiserver-srv-3exgq.gb1.brightbox.com\" (UID: \"fd86c6dd2616273d8f441df16a16fc64\") " pod="kube-system/kube-apiserver-srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.266076 kubelet[2758]: I1213 13:58:04.265928 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7bd20a56264e482813a3c8830415882-k8s-certs\") pod \"kube-controller-manager-srv-3exgq.gb1.brightbox.com\" (UID: \"a7bd20a56264e482813a3c8830415882\") " pod="kube-system/kube-controller-manager-srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.266949 kubelet[2758]: I1213 13:58:04.266922 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7bd20a56264e482813a3c8830415882-kubeconfig\") pod \"kube-controller-manager-srv-3exgq.gb1.brightbox.com\" (UID: \"a7bd20a56264e482813a3c8830415882\") " pod="kube-system/kube-controller-manager-srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.267131 kubelet[2758]: I1213 13:58:04.267090 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7bd20a56264e482813a3c8830415882-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-3exgq.gb1.brightbox.com\" (UID: \"a7bd20a56264e482813a3c8830415882\") " pod="kube-system/kube-controller-manager-srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.267197 kubelet[2758]: I1213 13:58:04.267156 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6110810b160ee4fa74a9e62e8795f98e-kubeconfig\") pod \"kube-scheduler-srv-3exgq.gb1.brightbox.com\" (UID: \"6110810b160ee4fa74a9e62e8795f98e\") " pod="kube-system/kube-scheduler-srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.267197 kubelet[2758]: I1213 13:58:04.267190 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd86c6dd2616273d8f441df16a16fc64-ca-certs\") pod \"kube-apiserver-srv-3exgq.gb1.brightbox.com\" (UID: \"fd86c6dd2616273d8f441df16a16fc64\") " pod="kube-system/kube-apiserver-srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.267352 kubelet[2758]: I1213 13:58:04.267221 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a7bd20a56264e482813a3c8830415882-flexvolume-dir\") pod \"kube-controller-manager-srv-3exgq.gb1.brightbox.com\" (UID: \"a7bd20a56264e482813a3c8830415882\") " pod="kube-system/kube-controller-manager-srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.267352 kubelet[2758]: I1213 13:58:04.267251 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7bd20a56264e482813a3c8830415882-ca-certs\") pod \"kube-controller-manager-srv-3exgq.gb1.brightbox.com\" (UID: \"a7bd20a56264e482813a3c8830415882\") " pod="kube-system/kube-controller-manager-srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:04.684138 sudo[2770]: pam_unix(sudo:session): session closed for user root
Dec 13 13:58:04.820085 kubelet[2758]: I1213 13:58:04.819958 2758 apiserver.go:52] "Watching apiserver"
Dec 13 13:58:04.858525 kubelet[2758]: I1213 13:58:04.858338 2758 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 13:58:04.975527 kubelet[2758]: W1213 13:58:04.975351 2758 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 13:58:04.975527 kubelet[2758]: E1213 13:58:04.975429 2758 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-3exgq.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-3exgq.gb1.brightbox.com"
Dec 13 13:58:05.027866 kubelet[2758]: I1213 13:58:05.027770 2758 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-3exgq.gb1.brightbox.com" podStartSLOduration=1.02769147 podStartE2EDuration="1.02769147s" podCreationTimestamp="2024-12-13 13:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:58:05.012684004 +0000 UTC m=+1.344619121" watchObservedRunningTime="2024-12-13 13:58:05.02769147 +0000 UTC m=+1.359626586"
Dec 13 13:58:05.052315 kubelet[2758]: I1213 13:58:05.051535 2758 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-3exgq.gb1.brightbox.com" podStartSLOduration=1.051460735 podStartE2EDuration="1.051460735s" podCreationTimestamp="2024-12-13 13:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:58:05.049942604 +0000 UTC m=+1.381877724" watchObservedRunningTime="2024-12-13 13:58:05.051460735 +0000 UTC m=+1.383395854"
Dec 13 13:58:05.052315 kubelet[2758]: I1213 13:58:05.052068 2758 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-3exgq.gb1.brightbox.com" podStartSLOduration=2.051721815 podStartE2EDuration="2.051721815s" podCreationTimestamp="2024-12-13 13:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:58:05.028959488 +0000 UTC m=+1.360894602" watchObservedRunningTime="2024-12-13 13:58:05.051721815 +0000 UTC m=+1.383656936"
Dec 13 13:58:06.445511 sudo[1748]: pam_unix(sudo:session): session closed for user root
Dec 13 13:58:06.589201 sshd[1747]: Connection closed by 139.178.68.195 port 54652
Dec 13 13:58:06.591063 sshd-session[1745]: pam_unix(sshd:session): session closed for user core
Dec 13 13:58:06.599764 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit.
Dec 13 13:58:06.601462 systemd[1]: sshd@6-10.244.15.30:22-139.178.68.195:54652.service: Deactivated successfully.
Dec 13 13:58:06.606063 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 13:58:06.606553 systemd[1]: session-9.scope: Consumed 7.081s CPU time, 181.8M memory peak, 0B memory swap peak.
Dec 13 13:58:06.608586 systemd-logind[1486]: Removed session 9.
Dec 13 13:58:17.315542 kubelet[2758]: I1213 13:58:17.315002 2758 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 13:58:17.318058 containerd[1508]: time="2024-12-13T13:58:17.317443207Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 13:58:17.319719 kubelet[2758]: I1213 13:58:17.318368 2758 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 13:58:18.078242 kubelet[2758]: I1213 13:58:18.077274 2758 topology_manager.go:215] "Topology Admit Handler" podUID="a04b508c-51e1-429a-abf7-9ca7ba12a84e" podNamespace="kube-system" podName="kube-proxy-wqrf9"
Dec 13 13:58:18.089677 kubelet[2758]: I1213 13:58:18.089256 2758 topology_manager.go:215] "Topology Admit Handler" podUID="9737dbab-2dd5-4c28-9499-7b32a40f1ac5" podNamespace="kube-system" podName="cilium-p7w7t"
Dec 13 13:58:18.114688 systemd[1]: Created slice kubepods-besteffort-poda04b508c_51e1_429a_abf7_9ca7ba12a84e.slice - libcontainer container kubepods-besteffort-poda04b508c_51e1_429a_abf7_9ca7ba12a84e.slice.
Dec 13 13:58:18.123861 systemd[1]: Created slice kubepods-burstable-pod9737dbab_2dd5_4c28_9499_7b32a40f1ac5.slice - libcontainer container kubepods-burstable-pod9737dbab_2dd5_4c28_9499_7b32a40f1ac5.slice.
Dec 13 13:58:18.156781 kubelet[2758]: I1213 13:58:18.156721 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwkb7\" (UniqueName: \"kubernetes.io/projected/a04b508c-51e1-429a-abf7-9ca7ba12a84e-kube-api-access-dwkb7\") pod \"kube-proxy-wqrf9\" (UID: \"a04b508c-51e1-429a-abf7-9ca7ba12a84e\") " pod="kube-system/kube-proxy-wqrf9"
Dec 13 13:58:18.156781 kubelet[2758]: I1213 13:58:18.156792 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-bpf-maps\") pod \"cilium-p7w7t\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " pod="kube-system/cilium-p7w7t"
Dec 13 13:58:18.157930 kubelet[2758]: I1213 13:58:18.156838 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cilium-cgroup\") pod \"cilium-p7w7t\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " pod="kube-system/cilium-p7w7t"
Dec 13 13:58:18.157930 kubelet[2758]: I1213 13:58:18.156877 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-host-proc-sys-kernel\") pod \"cilium-p7w7t\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " pod="kube-system/cilium-p7w7t"
Dec 13 13:58:18.157930 kubelet[2758]: I1213 13:58:18.156909 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a04b508c-51e1-429a-abf7-9ca7ba12a84e-xtables-lock\") pod \"kube-proxy-wqrf9\" (UID: \"a04b508c-51e1-429a-abf7-9ca7ba12a84e\") " pod="kube-system/kube-proxy-wqrf9"
Dec 13 13:58:18.157930 kubelet[2758]: I1213 13:58:18.156940 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cilium-config-path\") pod \"cilium-p7w7t\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " pod="kube-system/cilium-p7w7t"
Dec 13 13:58:18.157930 kubelet[2758]: I1213 13:58:18.156983 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a04b508c-51e1-429a-abf7-9ca7ba12a84e-lib-modules\") pod \"kube-proxy-wqrf9\" (UID: \"a04b508c-51e1-429a-abf7-9ca7ba12a84e\") " pod="kube-system/kube-proxy-wqrf9"
Dec 13 13:58:18.158189 kubelet[2758]: I1213 13:58:18.157015 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-lib-modules\") pod \"cilium-p7w7t\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " pod="kube-system/cilium-p7w7t"
Dec 13 13:58:18.158189 kubelet[2758]: I1213 13:58:18.157054 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-hubble-tls\") pod \"cilium-p7w7t\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " pod="kube-system/cilium-p7w7t"
Dec 13 13:58:18.158189 kubelet[2758]: I1213 13:58:18.157178 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cni-path\") pod \"cilium-p7w7t\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " pod="kube-system/cilium-p7w7t"
Dec 13 13:58:18.158189 kubelet[2758]: I1213 13:58:18.157226 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvc5z\" (UniqueName: \"kubernetes.io/projected/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-kube-api-access-jvc5z\") pod \"cilium-p7w7t\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " pod="kube-system/cilium-p7w7t"
Dec 13 13:58:18.158189 kubelet[2758]: I1213 13:58:18.157259 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a04b508c-51e1-429a-abf7-9ca7ba12a84e-kube-proxy\") pod \"kube-proxy-wqrf9\" (UID: \"a04b508c-51e1-429a-abf7-9ca7ba12a84e\") " pod="kube-system/kube-proxy-wqrf9"
Dec 13 13:58:18.158189 kubelet[2758]: I1213 13:58:18.157348 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cilium-run\") pod \"cilium-p7w7t\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " pod="kube-system/cilium-p7w7t"
Dec 13 13:58:18.158496 kubelet[2758]: I1213 13:58:18.157382 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-xtables-lock\") pod \"cilium-p7w7t\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " pod="kube-system/cilium-p7w7t"
Dec 13 13:58:18.158496 kubelet[2758]: I1213 13:58:18.157423 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-clustermesh-secrets\") pod \"cilium-p7w7t\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " pod="kube-system/cilium-p7w7t"
Dec 13 13:58:18.158496 kubelet[2758]: I1213 13:58:18.157521 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-hostproc\") pod \"cilium-p7w7t\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " pod="kube-system/cilium-p7w7t"
Dec 13 13:58:18.158496 kubelet[2758]: I1213 13:58:18.157555 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-etc-cni-netd\") pod \"cilium-p7w7t\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " pod="kube-system/cilium-p7w7t"
Dec 13 13:58:18.158496 kubelet[2758]: I1213 13:58:18.157629 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-host-proc-sys-net\") pod \"cilium-p7w7t\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " pod="kube-system/cilium-p7w7t"
Dec 13 13:58:18.290594 kubelet[2758]: E1213 13:58:18.289626 2758 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 13:58:18.290594 kubelet[2758]: E1213 13:58:18.290361 2758 projected.go:200] Error preparing data for projected volume kube-api-access-jvc5z for pod kube-system/cilium-p7w7t: configmap "kube-root-ca.crt" not found Dec 13 13:58:18.290594 kubelet[2758]: E1213 13:58:18.290488 2758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-kube-api-access-jvc5z podName:9737dbab-2dd5-4c28-9499-7b32a40f1ac5 nodeName:}" failed. No retries permitted until 2024-12-13 13:58:18.790444524 +0000 UTC m=+15.122379632 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jvc5z" (UniqueName: "kubernetes.io/projected/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-kube-api-access-jvc5z") pod "cilium-p7w7t" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5") : configmap "kube-root-ca.crt" not found Dec 13 13:58:18.291170 kubelet[2758]: E1213 13:58:18.290663 2758 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 13:58:18.291170 kubelet[2758]: E1213 13:58:18.290697 2758 projected.go:200] Error preparing data for projected volume kube-api-access-dwkb7 for pod kube-system/kube-proxy-wqrf9: configmap "kube-root-ca.crt" not found Dec 13 13:58:18.291170 kubelet[2758]: E1213 13:58:18.290749 2758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a04b508c-51e1-429a-abf7-9ca7ba12a84e-kube-api-access-dwkb7 podName:a04b508c-51e1-429a-abf7-9ca7ba12a84e nodeName:}" failed. No retries permitted until 2024-12-13 13:58:18.790724625 +0000 UTC m=+15.122659738 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dwkb7" (UniqueName: "kubernetes.io/projected/a04b508c-51e1-429a-abf7-9ca7ba12a84e-kube-api-access-dwkb7") pod "kube-proxy-wqrf9" (UID: "a04b508c-51e1-429a-abf7-9ca7ba12a84e") : configmap "kube-root-ca.crt" not found Dec 13 13:58:18.455337 kubelet[2758]: I1213 13:58:18.455267 2758 topology_manager.go:215] "Topology Admit Handler" podUID="7d8e0e9b-b144-4e04-9e2c-1198c1ae9000" podNamespace="kube-system" podName="cilium-operator-5cc964979-rwclp" Dec 13 13:58:18.473242 systemd[1]: Created slice kubepods-besteffort-pod7d8e0e9b_b144_4e04_9e2c_1198c1ae9000.slice - libcontainer container kubepods-besteffort-pod7d8e0e9b_b144_4e04_9e2c_1198c1ae9000.slice. Dec 13 13:58:18.561386 kubelet[2758]: I1213 13:58:18.561282 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmg95\" (UniqueName: \"kubernetes.io/projected/7d8e0e9b-b144-4e04-9e2c-1198c1ae9000-kube-api-access-rmg95\") pod \"cilium-operator-5cc964979-rwclp\" (UID: \"7d8e0e9b-b144-4e04-9e2c-1198c1ae9000\") " pod="kube-system/cilium-operator-5cc964979-rwclp" Dec 13 13:58:18.561677 kubelet[2758]: I1213 13:58:18.561515 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d8e0e9b-b144-4e04-9e2c-1198c1ae9000-cilium-config-path\") pod \"cilium-operator-5cc964979-rwclp\" (UID: \"7d8e0e9b-b144-4e04-9e2c-1198c1ae9000\") " pod="kube-system/cilium-operator-5cc964979-rwclp" Dec 13 13:58:18.780549 containerd[1508]: time="2024-12-13T13:58:18.779542216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-rwclp,Uid:7d8e0e9b-b144-4e04-9e2c-1198c1ae9000,Namespace:kube-system,Attempt:0,}" Dec 13 13:58:18.826683 containerd[1508]: time="2024-12-13T13:58:18.826529860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:58:18.827972 containerd[1508]: time="2024-12-13T13:58:18.827662736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:58:18.827972 containerd[1508]: time="2024-12-13T13:58:18.827760473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:58:18.828140 containerd[1508]: time="2024-12-13T13:58:18.827934789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:58:18.863636 systemd[1]: Started cri-containerd-777b1af8a1cb1c2512ab25effbb4436596ac09b251409d3b85ec49dd5c477b86.scope - libcontainer container 777b1af8a1cb1c2512ab25effbb4436596ac09b251409d3b85ec49dd5c477b86. Dec 13 13:58:18.935009 containerd[1508]: time="2024-12-13T13:58:18.934948267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-rwclp,Uid:7d8e0e9b-b144-4e04-9e2c-1198c1ae9000,Namespace:kube-system,Attempt:0,} returns sandbox id \"777b1af8a1cb1c2512ab25effbb4436596ac09b251409d3b85ec49dd5c477b86\"" Dec 13 13:58:18.940068 containerd[1508]: time="2024-12-13T13:58:18.940027569Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 13:58:19.035374 containerd[1508]: time="2024-12-13T13:58:19.035145146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wqrf9,Uid:a04b508c-51e1-429a-abf7-9ca7ba12a84e,Namespace:kube-system,Attempt:0,}" Dec 13 13:58:19.037333 containerd[1508]: time="2024-12-13T13:58:19.037231210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p7w7t,Uid:9737dbab-2dd5-4c28-9499-7b32a40f1ac5,Namespace:kube-system,Attempt:0,}" Dec 13 13:58:19.080396 containerd[1508]: time="2024-12-13T13:58:19.079611666Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:58:19.080396 containerd[1508]: time="2024-12-13T13:58:19.080028046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:58:19.080396 containerd[1508]: time="2024-12-13T13:58:19.080207871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:58:19.083220 containerd[1508]: time="2024-12-13T13:58:19.082665144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:58:19.084852 containerd[1508]: time="2024-12-13T13:58:19.084418790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:58:19.084852 containerd[1508]: time="2024-12-13T13:58:19.084602446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:58:19.084852 containerd[1508]: time="2024-12-13T13:58:19.084621715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:58:19.085602 containerd[1508]: time="2024-12-13T13:58:19.085479639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:58:19.118500 systemd[1]: Started cri-containerd-3754239e4e1e77bf7d666f1d81bf97696e118ece3b0eaa846c71254c5874da47.scope - libcontainer container 3754239e4e1e77bf7d666f1d81bf97696e118ece3b0eaa846c71254c5874da47. 
Dec 13 13:58:19.122114 systemd[1]: Started cri-containerd-af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d.scope - libcontainer container af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d.
Dec 13 13:58:19.176263 containerd[1508]: time="2024-12-13T13:58:19.175693104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p7w7t,Uid:9737dbab-2dd5-4c28-9499-7b32a40f1ac5,Namespace:kube-system,Attempt:0,} returns sandbox id \"af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d\""
Dec 13 13:58:19.184642 containerd[1508]: time="2024-12-13T13:58:19.184499730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wqrf9,Uid:a04b508c-51e1-429a-abf7-9ca7ba12a84e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3754239e4e1e77bf7d666f1d81bf97696e118ece3b0eaa846c71254c5874da47\""
Dec 13 13:58:19.191413 containerd[1508]: time="2024-12-13T13:58:19.191269611Z" level=info msg="CreateContainer within sandbox \"3754239e4e1e77bf7d666f1d81bf97696e118ece3b0eaa846c71254c5874da47\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 13:58:19.210531 containerd[1508]: time="2024-12-13T13:58:19.210455426Z" level=info msg="CreateContainer within sandbox \"3754239e4e1e77bf7d666f1d81bf97696e118ece3b0eaa846c71254c5874da47\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b0ba1cf080bd0e56a070063459904ead4e1c66343f254f663f9d54e74a47f6ff\""
Dec 13 13:58:19.212771 containerd[1508]: time="2024-12-13T13:58:19.212534531Z" level=info msg="StartContainer for \"b0ba1cf080bd0e56a070063459904ead4e1c66343f254f663f9d54e74a47f6ff\""
Dec 13 13:58:19.269656 systemd[1]: Started cri-containerd-b0ba1cf080bd0e56a070063459904ead4e1c66343f254f663f9d54e74a47f6ff.scope - libcontainer container b0ba1cf080bd0e56a070063459904ead4e1c66343f254f663f9d54e74a47f6ff.
Dec 13 13:58:19.333693 containerd[1508]: time="2024-12-13T13:58:19.333416990Z" level=info msg="StartContainer for \"b0ba1cf080bd0e56a070063459904ead4e1c66343f254f663f9d54e74a47f6ff\" returns successfully"
Dec 13 13:58:20.018808 kubelet[2758]: I1213 13:58:20.018724 2758 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wqrf9" podStartSLOduration=2.0185698 podStartE2EDuration="2.0185698s" podCreationTimestamp="2024-12-13 13:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:58:20.01731246 +0000 UTC m=+16.349247592" watchObservedRunningTime="2024-12-13 13:58:20.0185698 +0000 UTC m=+16.350504921"
Dec 13 13:58:20.896271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2154272300.mount: Deactivated successfully.
Dec 13 13:58:23.227775 containerd[1508]: time="2024-12-13T13:58:23.227680830Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:58:23.229108 containerd[1508]: time="2024-12-13T13:58:23.228917181Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907229"
Dec 13 13:58:23.230110 containerd[1508]: time="2024-12-13T13:58:23.229712796Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:58:23.233073 containerd[1508]: time="2024-12-13T13:58:23.233036190Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.292931523s"
Dec 13 13:58:23.233284 containerd[1508]: time="2024-12-13T13:58:23.233220384Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 13:58:23.238795 containerd[1508]: time="2024-12-13T13:58:23.236206887Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 13:58:23.239279 containerd[1508]: time="2024-12-13T13:58:23.239244467Z" level=info msg="CreateContainer within sandbox \"777b1af8a1cb1c2512ab25effbb4436596ac09b251409d3b85ec49dd5c477b86\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 13:58:23.261510 containerd[1508]: time="2024-12-13T13:58:23.261458959Z" level=info msg="CreateContainer within sandbox \"777b1af8a1cb1c2512ab25effbb4436596ac09b251409d3b85ec49dd5c477b86\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead\""
Dec 13 13:58:23.265046 containerd[1508]: time="2024-12-13T13:58:23.265007288Z" level=info msg="StartContainer for \"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead\""
Dec 13 13:58:23.320511 systemd[1]: Started cri-containerd-9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead.scope - libcontainer container 9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead.
Dec 13 13:58:23.362109 containerd[1508]: time="2024-12-13T13:58:23.359726567Z" level=info msg="StartContainer for \"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead\" returns successfully"
Dec 13 13:58:31.619125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3985103965.mount: Deactivated successfully.
Dec 13 13:58:35.255666 containerd[1508]: time="2024-12-13T13:58:35.255542788Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:58:35.257443 containerd[1508]: time="2024-12-13T13:58:35.257360699Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735311"
Dec 13 13:58:35.258328 containerd[1508]: time="2024-12-13T13:58:35.258165612Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:58:35.273773 containerd[1508]: time="2024-12-13T13:58:35.273719379Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.035660957s"
Dec 13 13:58:35.273908 containerd[1508]: time="2024-12-13T13:58:35.273784186Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 13:58:35.282177 containerd[1508]: time="2024-12-13T13:58:35.282114344Z" level=info msg="CreateContainer within sandbox \"af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 13:58:35.379649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3204523272.mount: Deactivated successfully.
Dec 13 13:58:35.391782 containerd[1508]: time="2024-12-13T13:58:35.391723445Z" level=info msg="CreateContainer within sandbox \"af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2\""
Dec 13 13:58:35.394338 containerd[1508]: time="2024-12-13T13:58:35.393173134Z" level=info msg="StartContainer for \"50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2\""
Dec 13 13:58:35.685686 systemd[1]: Started cri-containerd-50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2.scope - libcontainer container 50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2.
Dec 13 13:58:35.733042 containerd[1508]: time="2024-12-13T13:58:35.732807616Z" level=info msg="StartContainer for \"50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2\" returns successfully"
Dec 13 13:58:35.767329 systemd[1]: cri-containerd-50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2.scope: Deactivated successfully.
Dec 13 13:58:35.912003 containerd[1508]: time="2024-12-13T13:58:35.905872284Z" level=info msg="shim disconnected" id=50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2 namespace=k8s.io
Dec 13 13:58:35.912003 containerd[1508]: time="2024-12-13T13:58:35.911988234Z" level=warning msg="cleaning up after shim disconnected" id=50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2 namespace=k8s.io
Dec 13 13:58:35.912003 containerd[1508]: time="2024-12-13T13:58:35.912013091Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:58:35.930612 containerd[1508]: time="2024-12-13T13:58:35.930339059Z" level=warning msg="cleanup warnings time=\"2024-12-13T13:58:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 13:58:36.092075 containerd[1508]: time="2024-12-13T13:58:36.091628125Z" level=info msg="CreateContainer within sandbox \"af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 13:58:36.114181 containerd[1508]: time="2024-12-13T13:58:36.114127501Z" level=info msg="CreateContainer within sandbox \"af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348\""
Dec 13 13:58:36.117141 containerd[1508]: time="2024-12-13T13:58:36.116213227Z" level=info msg="StartContainer for \"baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348\""
Dec 13 13:58:36.129763 kubelet[2758]: I1213 13:58:36.129076 2758 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-rwclp" podStartSLOduration=13.832296239 podStartE2EDuration="18.128961359s" podCreationTimestamp="2024-12-13 13:58:18 +0000 UTC" firstStartedPulling="2024-12-13 13:58:18.937605345 +0000 UTC m=+15.269540452" lastFinishedPulling="2024-12-13 13:58:23.234270454 +0000 UTC m=+19.566205572" observedRunningTime="2024-12-13 13:58:24.374578353 +0000 UTC m=+20.706513505" watchObservedRunningTime="2024-12-13 13:58:36.128961359 +0000 UTC m=+32.460896466"
Dec 13 13:58:36.159563 systemd[1]: Started cri-containerd-baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348.scope - libcontainer container baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348.
Dec 13 13:58:36.202514 containerd[1508]: time="2024-12-13T13:58:36.202435988Z" level=info msg="StartContainer for \"baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348\" returns successfully"
Dec 13 13:58:36.229247 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:58:36.230391 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:58:36.230565 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:58:36.236722 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:58:36.237081 systemd[1]: cri-containerd-baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348.scope: Deactivated successfully.
Dec 13 13:58:36.273940 containerd[1508]: time="2024-12-13T13:58:36.273865121Z" level=info msg="shim disconnected" id=baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348 namespace=k8s.io
Dec 13 13:58:36.273940 containerd[1508]: time="2024-12-13T13:58:36.273938065Z" level=warning msg="cleaning up after shim disconnected" id=baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348 namespace=k8s.io
Dec 13 13:58:36.275405 containerd[1508]: time="2024-12-13T13:58:36.273955359Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:58:36.289551 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:58:36.370902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2-rootfs.mount: Deactivated successfully.
Dec 13 13:58:37.108756 containerd[1508]: time="2024-12-13T13:58:37.108581256Z" level=info msg="CreateContainer within sandbox \"af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 13:58:37.141329 containerd[1508]: time="2024-12-13T13:58:37.141159649Z" level=info msg="CreateContainer within sandbox \"af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2\""
Dec 13 13:58:37.142240 containerd[1508]: time="2024-12-13T13:58:37.141776326Z" level=info msg="StartContainer for \"d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2\""
Dec 13 13:58:37.194559 systemd[1]: Started cri-containerd-d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2.scope - libcontainer container d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2.
Dec 13 13:58:37.246351 containerd[1508]: time="2024-12-13T13:58:37.246108219Z" level=info msg="StartContainer for \"d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2\" returns successfully"
Dec 13 13:58:37.253451 systemd[1]: cri-containerd-d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2.scope: Deactivated successfully.
Dec 13 13:58:37.284936 containerd[1508]: time="2024-12-13T13:58:37.284855494Z" level=info msg="shim disconnected" id=d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2 namespace=k8s.io
Dec 13 13:58:37.285500 containerd[1508]: time="2024-12-13T13:58:37.284990174Z" level=warning msg="cleaning up after shim disconnected" id=d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2 namespace=k8s.io
Dec 13 13:58:37.285500 containerd[1508]: time="2024-12-13T13:58:37.285028018Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:58:37.370912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2-rootfs.mount: Deactivated successfully.
Dec 13 13:58:38.110485 containerd[1508]: time="2024-12-13T13:58:38.110420983Z" level=info msg="CreateContainer within sandbox \"af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 13:58:38.126546 containerd[1508]: time="2024-12-13T13:58:38.125645467Z" level=info msg="CreateContainer within sandbox \"af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98\""
Dec 13 13:58:38.128603 containerd[1508]: time="2024-12-13T13:58:38.127456807Z" level=info msg="StartContainer for \"c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98\""
Dec 13 13:58:38.186664 systemd[1]: Started cri-containerd-c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98.scope - libcontainer container c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98.
Dec 13 13:58:38.222358 systemd[1]: cri-containerd-c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98.scope: Deactivated successfully.
Dec 13 13:58:38.228652 containerd[1508]: time="2024-12-13T13:58:38.227562634Z" level=info msg="StartContainer for \"c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98\" returns successfully"
Dec 13 13:58:38.232449 containerd[1508]: time="2024-12-13T13:58:38.228517606Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9737dbab_2dd5_4c28_9499_7b32a40f1ac5.slice/cri-containerd-c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98.scope/memory.events\": no such file or directory"
Dec 13 13:58:38.254616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98-rootfs.mount: Deactivated successfully.
Dec 13 13:58:38.257369 containerd[1508]: time="2024-12-13T13:58:38.257266832Z" level=info msg="shim disconnected" id=c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98 namespace=k8s.io
Dec 13 13:58:38.257495 containerd[1508]: time="2024-12-13T13:58:38.257373808Z" level=warning msg="cleaning up after shim disconnected" id=c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98 namespace=k8s.io
Dec 13 13:58:38.257495 containerd[1508]: time="2024-12-13T13:58:38.257390639Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:58:39.117450 containerd[1508]: time="2024-12-13T13:58:39.116964118Z" level=info msg="CreateContainer within sandbox \"af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 13:58:39.145167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2341921028.mount: Deactivated successfully.
Dec 13 13:58:39.149873 containerd[1508]: time="2024-12-13T13:58:39.149660219Z" level=info msg="CreateContainer within sandbox \"af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5\""
Dec 13 13:58:39.153233 containerd[1508]: time="2024-12-13T13:58:39.153144847Z" level=info msg="StartContainer for \"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5\""
Dec 13 13:58:39.216651 systemd[1]: Started cri-containerd-fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5.scope - libcontainer container fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5.
Dec 13 13:58:39.275084 containerd[1508]: time="2024-12-13T13:58:39.274934004Z" level=info msg="StartContainer for \"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5\" returns successfully"
Dec 13 13:58:39.560171 kubelet[2758]: I1213 13:58:39.559956 2758 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 13:58:39.611272 kubelet[2758]: I1213 13:58:39.611219 2758 topology_manager.go:215] "Topology Admit Handler" podUID="479f1e54-8b2d-447c-adce-b9b48913f9f0" podNamespace="kube-system" podName="coredns-76f75df574-wtjjx"
Dec 13 13:58:39.619581 kubelet[2758]: I1213 13:58:39.617925 2758 topology_manager.go:215] "Topology Admit Handler" podUID="13e134aa-b1a1-4c6c-b63d-1a55052ae323" podNamespace="kube-system" podName="coredns-76f75df574-ktshc"
Dec 13 13:58:39.633088 systemd[1]: Created slice kubepods-burstable-pod479f1e54_8b2d_447c_adce_b9b48913f9f0.slice - libcontainer container kubepods-burstable-pod479f1e54_8b2d_447c_adce_b9b48913f9f0.slice.
Dec 13 13:58:39.647557 systemd[1]: Created slice kubepods-burstable-pod13e134aa_b1a1_4c6c_b63d_1a55052ae323.slice - libcontainer container kubepods-burstable-pod13e134aa_b1a1_4c6c_b63d_1a55052ae323.slice.
Dec 13 13:58:39.739509 kubelet[2758]: I1213 13:58:39.739001 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13e134aa-b1a1-4c6c-b63d-1a55052ae323-config-volume\") pod \"coredns-76f75df574-ktshc\" (UID: \"13e134aa-b1a1-4c6c-b63d-1a55052ae323\") " pod="kube-system/coredns-76f75df574-ktshc"
Dec 13 13:58:39.739509 kubelet[2758]: I1213 13:58:39.739326 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9mn6\" (UniqueName: \"kubernetes.io/projected/13e134aa-b1a1-4c6c-b63d-1a55052ae323-kube-api-access-s9mn6\") pod \"coredns-76f75df574-ktshc\" (UID: \"13e134aa-b1a1-4c6c-b63d-1a55052ae323\") " pod="kube-system/coredns-76f75df574-ktshc"
Dec 13 13:58:39.739509 kubelet[2758]: I1213 13:58:39.739410 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/479f1e54-8b2d-447c-adce-b9b48913f9f0-config-volume\") pod \"coredns-76f75df574-wtjjx\" (UID: \"479f1e54-8b2d-447c-adce-b9b48913f9f0\") " pod="kube-system/coredns-76f75df574-wtjjx"
Dec 13 13:58:39.739509 kubelet[2758]: I1213 13:58:39.739463 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6w76\" (UniqueName: \"kubernetes.io/projected/479f1e54-8b2d-447c-adce-b9b48913f9f0-kube-api-access-z6w76\") pod \"coredns-76f75df574-wtjjx\" (UID: \"479f1e54-8b2d-447c-adce-b9b48913f9f0\") " pod="kube-system/coredns-76f75df574-wtjjx"
Dec 13 13:58:39.943569 containerd[1508]: time="2024-12-13T13:58:39.943442021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wtjjx,Uid:479f1e54-8b2d-447c-adce-b9b48913f9f0,Namespace:kube-system,Attempt:0,}"
Dec 13 13:58:39.954667 containerd[1508]: time="2024-12-13T13:58:39.954031577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ktshc,Uid:13e134aa-b1a1-4c6c-b63d-1a55052ae323,Namespace:kube-system,Attempt:0,}"
Dec 13 13:58:40.178617 kubelet[2758]: I1213 13:58:40.178191 2758 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-p7w7t" podStartSLOduration=6.084990661 podStartE2EDuration="22.178040353s" podCreationTimestamp="2024-12-13 13:58:18 +0000 UTC" firstStartedPulling="2024-12-13 13:58:19.181094853 +0000 UTC m=+15.513029967" lastFinishedPulling="2024-12-13 13:58:35.27414455 +0000 UTC m=+31.606079659" observedRunningTime="2024-12-13 13:58:40.176138702 +0000 UTC m=+36.508073828" watchObservedRunningTime="2024-12-13 13:58:40.178040353 +0000 UTC m=+36.509975474"
Dec 13 13:58:41.982095 systemd-networkd[1417]: cilium_host: Link UP
Dec 13 13:58:41.983189 systemd-networkd[1417]: cilium_net: Link UP
Dec 13 13:58:41.983598 systemd-networkd[1417]: cilium_net: Gained carrier
Dec 13 13:58:41.983903 systemd-networkd[1417]: cilium_host: Gained carrier
Dec 13 13:58:41.984155 systemd-networkd[1417]: cilium_net: Gained IPv6LL
Dec 13 13:58:41.985645 systemd-networkd[1417]: cilium_host: Gained IPv6LL
Dec 13 13:58:42.154998 systemd-networkd[1417]: cilium_vxlan: Link UP
Dec 13 13:58:42.155013 systemd-networkd[1417]: cilium_vxlan: Gained carrier
Dec 13 13:58:42.748479 kernel: NET: Registered PF_ALG protocol family
Dec 13 13:58:43.180193 systemd-networkd[1417]: cilium_vxlan: Gained IPv6LL
Dec 13 13:58:43.828398 systemd-networkd[1417]: lxc_health: Link UP
Dec 13 13:58:43.837334 systemd-networkd[1417]: lxc_health: Gained carrier
Dec 13 13:58:44.065524 systemd-networkd[1417]: lxce1983724277b: Link UP
Dec 13 13:58:44.072044 kernel: eth0: renamed from tmpbee3d
Dec 13 13:58:44.082283 systemd-networkd[1417]: lxce1983724277b: Gained carrier
Dec 13 13:58:44.098238 systemd-networkd[1417]: lxcd91d0d8bc78d: Link UP
Dec 13 13:58:44.110231 kernel: eth0: renamed from tmp26d7e
Dec 13 13:58:44.120753 systemd-networkd[1417]: lxcd91d0d8bc78d: Gained carrier
Dec 13 13:58:45.099618 systemd-networkd[1417]: lxce1983724277b: Gained IPv6LL
Dec 13 13:58:45.675751 systemd-networkd[1417]: lxc_health: Gained IPv6LL
Dec 13 13:58:46.123582 systemd-networkd[1417]: lxcd91d0d8bc78d: Gained IPv6LL
Dec 13 13:58:50.132775 containerd[1508]: time="2024-12-13T13:58:50.131219614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:58:50.132775 containerd[1508]: time="2024-12-13T13:58:50.131380381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:58:50.132775 containerd[1508]: time="2024-12-13T13:58:50.131403522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:58:50.140391 containerd[1508]: time="2024-12-13T13:58:50.140131400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:58:50.216626 systemd[1]: Started cri-containerd-bee3db235c1e08a60863aef440a4cb35a5a5ab3377041fecaed5651b65c35641.scope - libcontainer container bee3db235c1e08a60863aef440a4cb35a5a5ab3377041fecaed5651b65c35641.
Dec 13 13:58:50.225492 containerd[1508]: time="2024-12-13T13:58:50.223599849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:58:50.225492 containerd[1508]: time="2024-12-13T13:58:50.223743369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:58:50.225492 containerd[1508]: time="2024-12-13T13:58:50.223770119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:58:50.225492 containerd[1508]: time="2024-12-13T13:58:50.223892888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:58:50.309511 systemd[1]: Started cri-containerd-26d7efdc5005bea78f17a51fc8865a19ac44099d8ba5aca10e61202b4fa9acaa.scope - libcontainer container 26d7efdc5005bea78f17a51fc8865a19ac44099d8ba5aca10e61202b4fa9acaa.
Dec 13 13:58:50.409583 containerd[1508]: time="2024-12-13T13:58:50.409198827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wtjjx,Uid:479f1e54-8b2d-447c-adce-b9b48913f9f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bee3db235c1e08a60863aef440a4cb35a5a5ab3377041fecaed5651b65c35641\""
Dec 13 13:58:50.419068 containerd[1508]: time="2024-12-13T13:58:50.418891280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ktshc,Uid:13e134aa-b1a1-4c6c-b63d-1a55052ae323,Namespace:kube-system,Attempt:0,} returns sandbox id \"26d7efdc5005bea78f17a51fc8865a19ac44099d8ba5aca10e61202b4fa9acaa\""
Dec 13 13:58:50.420876 containerd[1508]: time="2024-12-13T13:58:50.420814187Z" level=info msg="CreateContainer within sandbox \"bee3db235c1e08a60863aef440a4cb35a5a5ab3377041fecaed5651b65c35641\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 13:58:50.424306 containerd[1508]: time="2024-12-13T13:58:50.424205461Z" level=info msg="CreateContainer within sandbox \"26d7efdc5005bea78f17a51fc8865a19ac44099d8ba5aca10e61202b4fa9acaa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 13:58:50.448971 containerd[1508]: time="2024-12-13T13:58:50.448778696Z" level=info msg="CreateContainer within sandbox \"26d7efdc5005bea78f17a51fc8865a19ac44099d8ba5aca10e61202b4fa9acaa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b97a0078a0289e13431d93a10e8d65192c0bb1550b740d774d18f3ae5e3b551\""
Dec 13 13:58:50.450439 containerd[1508]: time="2024-12-13T13:58:50.449453607Z" level=info msg="StartContainer for \"2b97a0078a0289e13431d93a10e8d65192c0bb1550b740d774d18f3ae5e3b551\""
Dec 13 13:58:50.454735 containerd[1508]: time="2024-12-13T13:58:50.454695207Z" level=info msg="CreateContainer within sandbox \"bee3db235c1e08a60863aef440a4cb35a5a5ab3377041fecaed5651b65c35641\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1d1ceb6554bd36aaaf5cedc255f9f578465489c8fe29171371c33af933dfdf6e\""
Dec 13 13:58:50.455504 containerd[1508]: time="2024-12-13T13:58:50.455473067Z" level=info msg="StartContainer for \"1d1ceb6554bd36aaaf5cedc255f9f578465489c8fe29171371c33af933dfdf6e\""
Dec 13 13:58:50.508981 systemd[1]: Started cri-containerd-2b97a0078a0289e13431d93a10e8d65192c0bb1550b740d774d18f3ae5e3b551.scope - libcontainer container 2b97a0078a0289e13431d93a10e8d65192c0bb1550b740d774d18f3ae5e3b551.
Dec 13 13:58:50.525400 systemd[1]: Started cri-containerd-1d1ceb6554bd36aaaf5cedc255f9f578465489c8fe29171371c33af933dfdf6e.scope - libcontainer container 1d1ceb6554bd36aaaf5cedc255f9f578465489c8fe29171371c33af933dfdf6e.
Dec 13 13:58:50.578035 containerd[1508]: time="2024-12-13T13:58:50.577761547Z" level=info msg="StartContainer for \"2b97a0078a0289e13431d93a10e8d65192c0bb1550b740d774d18f3ae5e3b551\" returns successfully"
Dec 13 13:58:50.590815 containerd[1508]: time="2024-12-13T13:58:50.590765503Z" level=info msg="StartContainer for \"1d1ceb6554bd36aaaf5cedc255f9f578465489c8fe29171371c33af933dfdf6e\" returns successfully"
Dec 13 13:58:51.209249 kubelet[2758]: I1213 13:58:51.208013 2758 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ktshc" podStartSLOduration=33.207887844 podStartE2EDuration="33.207887844s" podCreationTimestamp="2024-12-13 13:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:58:51.206731606 +0000 UTC m=+47.538666757" watchObservedRunningTime="2024-12-13 13:58:51.207887844 +0000 UTC m=+47.539822964"
Dec 13 13:58:51.230434 kubelet[2758]: I1213 13:58:51.230392 2758 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wtjjx" podStartSLOduration=33.230338479 podStartE2EDuration="33.230338479s" podCreationTimestamp="2024-12-13 13:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:58:51.228654626 +0000 UTC m=+47.560589752" watchObservedRunningTime="2024-12-13 13:58:51.230338479 +0000 UTC m=+47.562273600"
Dec 13 13:58:58.130706 systemd[1]: Started sshd@7-10.244.15.30:22-218.92.0.236:41020.service - OpenSSH per-connection server daemon (218.92.0.236:41020).
Dec 13 13:59:00.806049 sshd-session[4134]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.236 user=root
Dec 13 13:59:02.667582 sshd[4132]: PAM: Permission denied for root from 218.92.0.236
Dec 13 13:59:03.191255 sshd-session[4135]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.236 user=root
Dec 13 13:59:05.328223 sshd[4132]: PAM: Permission denied for root from 218.92.0.236
Dec 13 13:59:06.378849 sshd-session[4138]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.236 user=root
Dec 13 13:59:08.064370 sshd[4132]: PAM: Permission denied for root from 218.92.0.236
Dec 13 13:59:08.325887 sshd[4132]: Received disconnect from 218.92.0.236 port 41020:11: [preauth]
Dec 13 13:59:08.325887 sshd[4132]: Disconnected from authenticating user root 218.92.0.236 port 41020 [preauth]
Dec 13 13:59:08.330442 systemd[1]: sshd@7-10.244.15.30:22-218.92.0.236:41020.service: Deactivated successfully.
Dec 13 13:59:08.622656 systemd[1]: Started sshd@8-10.244.15.30:22-218.92.0.236:26540.service - OpenSSH per-connection server daemon (218.92.0.236:26540).
Dec 13 13:59:15.439420 systemd[1]: Started sshd@9-10.244.15.30:22-139.178.68.195:36374.service - OpenSSH per-connection server daemon (139.178.68.195:36374).
Dec 13 13:59:16.355347 sshd[4145]: Accepted publickey for core from 139.178.68.195 port 36374 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:59:16.358606 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:59:16.369642 systemd-logind[1486]: New session 10 of user core.
Dec 13 13:59:16.381627 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 13:59:17.523734 sshd[4147]: Connection closed by 139.178.68.195 port 36374
Dec 13 13:59:17.524983 sshd-session[4145]: pam_unix(sshd:session): session closed for user core
Dec 13 13:59:17.533787 systemd[1]: sshd@9-10.244.15.30:22-139.178.68.195:36374.service: Deactivated successfully.
Dec 13 13:59:17.536835 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 13:59:17.539988 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit.
Dec 13 13:59:17.541570 systemd-logind[1486]: Removed session 10.
Dec 13 13:59:21.730269 sshd-session[4156]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.236 user=root
Dec 13 13:59:22.708619 systemd[1]: Started sshd@10-10.244.15.30:22-139.178.68.195:54384.service - OpenSSH per-connection server daemon (139.178.68.195:54384).
Dec 13 13:59:22.925056 sshd[4142]: PAM: Permission denied for root from 218.92.0.236
Dec 13 13:59:23.683913 sshd[4164]: Accepted publickey for core from 139.178.68.195 port 54384 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:59:23.686323 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:59:23.693480 systemd-logind[1486]: New session 11 of user core.
Dec 13 13:59:23.703898 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 13:59:24.404774 sshd[4166]: Connection closed by 139.178.68.195 port 54384
Dec 13 13:59:24.403821 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
Dec 13 13:59:24.408548 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit.
Dec 13 13:59:24.409170 systemd[1]: sshd@10-10.244.15.30:22-139.178.68.195:54384.service: Deactivated successfully.
Dec 13 13:59:24.412790 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 13:59:24.414500 systemd-logind[1486]: Removed session 11.
Dec 13 13:59:24.435507 sshd-session[4168]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.236 user=root
Dec 13 13:59:25.925600 sshd[4142]: PAM: Permission denied for root from 218.92.0.236
Dec 13 13:59:28.607232 sshd-session[4178]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.236 user=root
Dec 13 13:59:29.562645 systemd[1]: Started sshd@11-10.244.15.30:22-139.178.68.195:36466.service - OpenSSH per-connection server daemon (139.178.68.195:36466).
Dec 13 13:59:30.494361 sshd[4180]: Accepted publickey for core from 139.178.68.195 port 36466 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:59:30.496931 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:59:30.506488 systemd-logind[1486]: New session 12 of user core.
Dec 13 13:59:30.516567 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 13:59:30.980867 sshd[4142]: PAM: Permission denied for root from 218.92.0.236
Dec 13 13:59:31.232509 sshd[4182]: Connection closed by 139.178.68.195 port 36466
Dec 13 13:59:31.233714 sshd-session[4180]: pam_unix(sshd:session): session closed for user core
Dec 13 13:59:31.240194 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit.
Dec 13 13:59:31.241016 systemd[1]: sshd@11-10.244.15.30:22-139.178.68.195:36466.service: Deactivated successfully.
Dec 13 13:59:31.244394 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 13:59:31.245968 systemd-logind[1486]: Removed session 12.
Dec 13 13:59:31.256964 sshd[4142]: Received disconnect from 218.92.0.236 port 26540:11: [preauth]
Dec 13 13:59:31.256964 sshd[4142]: Disconnected from authenticating user root 218.92.0.236 port 26540 [preauth]
Dec 13 13:59:31.259979 systemd[1]: sshd@8-10.244.15.30:22-218.92.0.236:26540.service: Deactivated successfully.
Dec 13 13:59:31.621811 systemd[1]: Started sshd@12-10.244.15.30:22-218.92.0.236:34480.service - OpenSSH per-connection server daemon (218.92.0.236:34480).
Dec 13 13:59:36.023009 sshd-session[4197]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.236 user=root
Dec 13 13:59:36.393675 systemd[1]: Started sshd@13-10.244.15.30:22-139.178.68.195:35218.service - OpenSSH per-connection server daemon (139.178.68.195:35218).
Dec 13 13:59:37.305586 sshd[4199]: Accepted publickey for core from 139.178.68.195 port 35218 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:59:37.308020 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:59:37.316375 systemd-logind[1486]: New session 13 of user core.
Dec 13 13:59:37.323537 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 13:59:37.357261 sshd[4195]: PAM: Permission denied for root from 218.92.0.236
Dec 13 13:59:38.006483 sshd-session[4202]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.236 user=root
Dec 13 13:59:38.057417 sshd[4201]: Connection closed by 139.178.68.195 port 35218
Dec 13 13:59:38.058476 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
Dec 13 13:59:38.067861 systemd[1]: sshd@13-10.244.15.30:22-139.178.68.195:35218.service: Deactivated successfully.
Dec 13 13:59:38.070346 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 13:59:38.071475 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit.
Dec 13 13:59:38.073920 systemd-logind[1486]: Removed session 13.
Dec 13 13:59:38.217698 systemd[1]: Started sshd@14-10.244.15.30:22-139.178.68.195:35222.service - OpenSSH per-connection server daemon (139.178.68.195:35222).
Dec 13 13:59:39.132364 sshd[4214]: Accepted publickey for core from 139.178.68.195 port 35222 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:59:39.134396 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:59:39.144655 systemd-logind[1486]: New session 14 of user core.
Dec 13 13:59:39.153565 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 13:59:39.924887 sshd[4216]: Connection closed by 139.178.68.195 port 35222
Dec 13 13:59:39.924106 sshd-session[4214]: pam_unix(sshd:session): session closed for user core
Dec 13 13:59:39.931553 systemd[1]: sshd@14-10.244.15.30:22-139.178.68.195:35222.service: Deactivated successfully.
Dec 13 13:59:39.936066 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 13:59:39.937540 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit.
Dec 13 13:59:39.939240 systemd-logind[1486]: Removed session 14.
Dec 13 13:59:39.947713 sshd[4195]: PAM: Permission denied for root from 218.92.0.236
Dec 13 13:59:40.083693 systemd[1]: Started sshd@15-10.244.15.30:22-139.178.68.195:35232.service - OpenSSH per-connection server daemon (139.178.68.195:35232).
Dec 13 13:59:40.992285 sshd[4226]: Accepted publickey for core from 139.178.68.195 port 35232 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:59:40.995034 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:59:41.007455 systemd-logind[1486]: New session 15 of user core.
Dec 13 13:59:41.018543 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 13:59:41.718098 sshd[4229]: Connection closed by 139.178.68.195 port 35232
Dec 13 13:59:41.719070 sshd-session[4226]: pam_unix(sshd:session): session closed for user core
Dec 13 13:59:41.723632 systemd[1]: sshd@15-10.244.15.30:22-139.178.68.195:35232.service: Deactivated successfully.
Dec 13 13:59:41.727064 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 13:59:41.729475 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit.
Dec 13 13:59:41.730802 systemd-logind[1486]: Removed session 15.
Dec 13 13:59:42.703228 sshd-session[4228]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.236 user=root
Dec 13 13:59:44.908681 systemd[1]: Started sshd@16-10.244.15.30:22-218.92.0.223:61554.service - OpenSSH per-connection server daemon (218.92.0.223:61554).
Dec 13 13:59:45.056276 sshd[4195]: PAM: Permission denied for root from 218.92.0.236
Dec 13 13:59:45.382349 sshd[4195]: Received disconnect from 218.92.0.236 port 34480:11: [preauth]
Dec 13 13:59:45.382349 sshd[4195]: Disconnected from authenticating user root 218.92.0.236 port 34480 [preauth]
Dec 13 13:59:45.384519 systemd[1]: sshd@12-10.244.15.30:22-218.92.0.236:34480.service: Deactivated successfully.
Dec 13 13:59:46.879667 systemd[1]: Started sshd@17-10.244.15.30:22-139.178.68.195:36408.service - OpenSSH per-connection server daemon (139.178.68.195:36408).
Dec 13 13:59:47.783079 sshd[4245]: Accepted publickey for core from 139.178.68.195 port 36408 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:59:47.785194 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:59:47.792272 systemd-logind[1486]: New session 16 of user core.
Dec 13 13:59:47.801529 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 13:59:48.472952 sshd-session[4248]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root
Dec 13 13:59:48.497091 sshd[4247]: Connection closed by 139.178.68.195 port 36408
Dec 13 13:59:48.498080 sshd-session[4245]: pam_unix(sshd:session): session closed for user core
Dec 13 13:59:48.503646 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit.
Dec 13 13:59:48.505021 systemd[1]: sshd@17-10.244.15.30:22-139.178.68.195:36408.service: Deactivated successfully.
Dec 13 13:59:48.508458 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 13:59:48.510896 systemd-logind[1486]: Removed session 16.
Dec 13 13:59:50.258895 sshd[4240]: PAM: Permission denied for root from 218.92.0.223
Dec 13 13:59:50.802804 sshd-session[4261]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root
Dec 13 13:59:52.528753 sshd[4240]: PAM: Permission denied for root from 218.92.0.223
Dec 13 13:59:53.070878 sshd-session[4262]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root
Dec 13 13:59:53.661708 systemd[1]: Started sshd@18-10.244.15.30:22-139.178.68.195:36424.service - OpenSSH per-connection server daemon (139.178.68.195:36424).
Dec 13 13:59:54.564947 sshd[4264]: Accepted publickey for core from 139.178.68.195 port 36424 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:59:54.567071 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:59:54.574339 systemd-logind[1486]: New session 17 of user core.
Dec 13 13:59:54.582144 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 13:59:55.279476 sshd[4266]: Connection closed by 139.178.68.195 port 36424
Dec 13 13:59:55.280236 sshd-session[4264]: pam_unix(sshd:session): session closed for user core
Dec 13 13:59:55.285865 systemd[1]: sshd@18-10.244.15.30:22-139.178.68.195:36424.service: Deactivated successfully.
Dec 13 13:59:55.289304 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 13:59:55.291163 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit.
Dec 13 13:59:55.292881 systemd-logind[1486]: Removed session 17.
Dec 13 13:59:55.401937 sshd[4240]: PAM: Permission denied for root from 218.92.0.223
Dec 13 13:59:55.441680 systemd[1]: Started sshd@19-10.244.15.30:22-139.178.68.195:36440.service - OpenSSH per-connection server daemon (139.178.68.195:36440).
Dec 13 13:59:55.672611 sshd[4240]: Received disconnect from 218.92.0.223 port 61554:11: [preauth]
Dec 13 13:59:55.672611 sshd[4240]: Disconnected from authenticating user root 218.92.0.223 port 61554 [preauth]
Dec 13 13:59:55.674954 systemd[1]: sshd@16-10.244.15.30:22-218.92.0.223:61554.service: Deactivated successfully.
Dec 13 13:59:55.993682 systemd[1]: Started sshd@20-10.244.15.30:22-218.92.0.223:46276.service - OpenSSH per-connection server daemon (218.92.0.223:46276).
Dec 13 13:59:56.338498 sshd[4277]: Accepted publickey for core from 139.178.68.195 port 36440 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:59:56.340419 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:59:56.348689 systemd-logind[1486]: New session 18 of user core.
Dec 13 13:59:56.357680 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 13:59:57.412369 sshd[4284]: Connection closed by 139.178.68.195 port 36440
Dec 13 13:59:57.415919 sshd-session[4277]: pam_unix(sshd:session): session closed for user core
Dec 13 13:59:57.424628 systemd[1]: sshd@19-10.244.15.30:22-139.178.68.195:36440.service: Deactivated successfully.
Dec 13 13:59:57.427073 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 13:59:57.428447 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit.
Dec 13 13:59:57.430465 systemd-logind[1486]: Removed session 18.
Dec 13 13:59:57.576733 systemd[1]: Started sshd@21-10.244.15.30:22-139.178.68.195:56422.service - OpenSSH per-connection server daemon (139.178.68.195:56422).
Dec 13 13:59:58.073037 sshd-session[4295]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root
Dec 13 13:59:58.480048 sshd[4293]: Accepted publickey for core from 139.178.68.195 port 56422 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 13:59:58.482143 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:59:58.488905 systemd-logind[1486]: New session 19 of user core.
Dec 13 13:59:58.494476 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 14:00:00.094806 sshd[4282]: PAM: Permission denied for root from 218.92.0.223
Dec 13 14:00:00.671978 sshd-session[4304]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root
Dec 13 14:00:01.445408 sshd[4296]: Connection closed by 139.178.68.195 port 56422
Dec 13 14:00:01.446744 sshd-session[4293]: pam_unix(sshd:session): session closed for user core
Dec 13 14:00:01.453706 systemd[1]: sshd@21-10.244.15.30:22-139.178.68.195:56422.service: Deactivated successfully.
Dec 13 14:00:01.458460 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:00:01.459978 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:00:01.461609 systemd-logind[1486]: Removed session 19.
Dec 13 14:00:01.612711 systemd[1]: Started sshd@22-10.244.15.30:22-139.178.68.195:56436.service - OpenSSH per-connection server daemon (139.178.68.195:56436).
Dec 13 14:00:02.436752 sshd[4282]: PAM: Permission denied for root from 218.92.0.223
Dec 13 14:00:02.533344 sshd[4313]: Accepted publickey for core from 139.178.68.195 port 56436 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:00:02.534755 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:00:02.541592 systemd-logind[1486]: New session 20 of user core.
Dec 13 14:00:02.554598 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 14:00:03.012013 sshd-session[4316]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root
Dec 13 14:00:03.523324 sshd[4315]: Connection closed by 139.178.68.195 port 56436
Dec 13 14:00:03.524452 sshd-session[4313]: pam_unix(sshd:session): session closed for user core
Dec 13 14:00:03.529944 systemd[1]: sshd@22-10.244.15.30:22-139.178.68.195:56436.service: Deactivated successfully.
Dec 13 14:00:03.533640 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:00:03.534951 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:00:03.537899 systemd-logind[1486]: Removed session 20.
Dec 13 14:00:03.692670 systemd[1]: Started sshd@23-10.244.15.30:22-139.178.68.195:56444.service - OpenSSH per-connection server daemon (139.178.68.195:56444).
Dec 13 14:00:04.602055 sshd[4325]: Accepted publickey for core from 139.178.68.195 port 56444 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:00:04.604596 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:00:04.615445 systemd-logind[1486]: New session 21 of user core.
Dec 13 14:00:04.622509 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 14:00:04.718083 sshd[4282]: PAM: Permission denied for root from 218.92.0.223
Dec 13 14:00:05.006214 sshd[4282]: Received disconnect from 218.92.0.223 port 46276:11: [preauth]
Dec 13 14:00:05.006214 sshd[4282]: Disconnected from authenticating user root 218.92.0.223 port 46276 [preauth]
Dec 13 14:00:05.008752 systemd[1]: sshd@20-10.244.15.30:22-218.92.0.223:46276.service: Deactivated successfully.
Dec 13 14:00:05.283766 systemd[1]: Started sshd@24-10.244.15.30:22-218.92.0.223:37580.service - OpenSSH per-connection server daemon (218.92.0.223:37580).
Dec 13 14:00:05.319614 sshd[4329]: Connection closed by 139.178.68.195 port 56444
Dec 13 14:00:05.320968 sshd-session[4325]: pam_unix(sshd:session): session closed for user core
Dec 13 14:00:05.328065 systemd[1]: sshd@23-10.244.15.30:22-139.178.68.195:56444.service: Deactivated successfully.
Dec 13 14:00:05.332215 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit.
Dec 13 14:00:05.333838 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 14:00:05.335725 systemd-logind[1486]: Removed session 21.
Dec 13 14:00:07.175790 sshd-session[4344]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root
Dec 13 14:00:08.429370 sshd[4340]: PAM: Permission denied for root from 218.92.0.223
Dec 13 14:00:08.948228 sshd-session[4345]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root
Dec 13 14:00:10.491011 systemd[1]: Started sshd@25-10.244.15.30:22-139.178.68.195:34350.service - OpenSSH per-connection server daemon (139.178.68.195:34350).
Dec 13 14:00:11.146210 sshd[4340]: PAM: Permission denied for root from 218.92.0.223
Dec 13 14:00:11.403781 sshd[4347]: Accepted publickey for core from 139.178.68.195 port 34350 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:00:11.405859 sshd-session[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:00:11.414557 systemd-logind[1486]: New session 22 of user core.
Dec 13 14:00:11.423543 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 14:00:11.665453 sshd-session[4352]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root
Dec 13 14:00:12.140989 sshd[4353]: Connection closed by 139.178.68.195 port 34350
Dec 13 14:00:12.142006 sshd-session[4347]: pam_unix(sshd:session): session closed for user core
Dec 13 14:00:12.147479 systemd[1]: sshd@25-10.244.15.30:22-139.178.68.195:34350.service: Deactivated successfully.
Dec 13 14:00:12.150807 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 14:00:12.152699 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit.
Dec 13 14:00:12.155148 systemd-logind[1486]: Removed session 22.
Dec 13 14:00:12.939854 sshd[4340]: PAM: Permission denied for root from 218.92.0.223
Dec 13 14:00:13.200629 sshd[4340]: Received disconnect from 218.92.0.223 port 37580:11: [preauth]
Dec 13 14:00:13.200629 sshd[4340]: Disconnected from authenticating user root 218.92.0.223 port 37580 [preauth]
Dec 13 14:00:13.204379 systemd[1]: sshd@24-10.244.15.30:22-218.92.0.223:37580.service: Deactivated successfully.
Dec 13 14:00:17.300707 systemd[1]: Started sshd@26-10.244.15.30:22-139.178.68.195:46062.service - OpenSSH per-connection server daemon (139.178.68.195:46062).
Dec 13 14:00:18.211001 sshd[4366]: Accepted publickey for core from 139.178.68.195 port 46062 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:00:18.213092 sshd-session[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:00:18.220657 systemd-logind[1486]: New session 23 of user core.
Dec 13 14:00:18.233580 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 14:00:18.927480 sshd[4368]: Connection closed by 139.178.68.195 port 46062
Dec 13 14:00:18.928708 sshd-session[4366]: pam_unix(sshd:session): session closed for user core
Dec 13 14:00:18.934487 systemd[1]: sshd@26-10.244.15.30:22-139.178.68.195:46062.service: Deactivated successfully.
Dec 13 14:00:18.937076 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:00:18.938051 systemd-logind[1486]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:00:18.940143 systemd-logind[1486]: Removed session 23.
Dec 13 14:00:24.085718 systemd[1]: Started sshd@27-10.244.15.30:22-139.178.68.195:46078.service - OpenSSH per-connection server daemon (139.178.68.195:46078).
Dec 13 14:00:24.995025 sshd[4384]: Accepted publickey for core from 139.178.68.195 port 46078 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:00:24.997370 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:00:25.006686 systemd-logind[1486]: New session 24 of user core.
Dec 13 14:00:25.011596 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 14:00:25.714344 sshd[4386]: Connection closed by 139.178.68.195 port 46078
Dec 13 14:00:25.715798 sshd-session[4384]: pam_unix(sshd:session): session closed for user core
Dec 13 14:00:25.721110 systemd[1]: sshd@27-10.244.15.30:22-139.178.68.195:46078.service: Deactivated successfully.
Dec 13 14:00:25.725172 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 14:00:25.726564 systemd-logind[1486]: Session 24 logged out. Waiting for processes to exit.
Dec 13 14:00:25.728582 systemd-logind[1486]: Removed session 24.
Dec 13 14:00:25.874693 systemd[1]: Started sshd@28-10.244.15.30:22-139.178.68.195:46092.service - OpenSSH per-connection server daemon (139.178.68.195:46092).
Dec 13 14:00:26.810672 sshd[4397]: Accepted publickey for core from 139.178.68.195 port 46092 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:00:26.812716 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:00:26.819970 systemd-logind[1486]: New session 25 of user core.
Dec 13 14:00:26.824535 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 14:00:28.916916 containerd[1508]: time="2024-12-13T14:00:28.916648858Z" level=info msg="StopContainer for \"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead\" with timeout 30 (s)"
Dec 13 14:00:28.922822 containerd[1508]: time="2024-12-13T14:00:28.922756396Z" level=info msg="Stop container \"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead\" with signal terminated"
Dec 13 14:00:28.986051 systemd[1]: cri-containerd-9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead.scope: Deactivated successfully.
Dec 13 14:00:29.029919 containerd[1508]: time="2024-12-13T14:00:29.029513232Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:00:29.040800 containerd[1508]: time="2024-12-13T14:00:29.040459990Z" level=info msg="StopContainer for \"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5\" with timeout 2 (s)"
Dec 13 14:00:29.044909 containerd[1508]: time="2024-12-13T14:00:29.041091120Z" level=info msg="Stop container \"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5\" with signal terminated"
Dec 13 14:00:29.049508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead-rootfs.mount: Deactivated successfully.
Dec 13 14:00:29.057117 containerd[1508]: time="2024-12-13T14:00:29.056789096Z" level=info msg="shim disconnected" id=9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead namespace=k8s.io
Dec 13 14:00:29.057554 containerd[1508]: time="2024-12-13T14:00:29.057391863Z" level=warning msg="cleaning up after shim disconnected" id=9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead namespace=k8s.io
Dec 13 14:00:29.057554 containerd[1508]: time="2024-12-13T14:00:29.057424912Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:00:29.064349 systemd-networkd[1417]: lxc_health: Link DOWN
Dec 13 14:00:29.064363 systemd-networkd[1417]: lxc_health: Lost carrier
Dec 13 14:00:29.093003 containerd[1508]: time="2024-12-13T14:00:29.091909458Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:00:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 14:00:29.092429 systemd[1]: cri-containerd-fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5.scope: Deactivated successfully.
Dec 13 14:00:29.092822 systemd[1]: cri-containerd-fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5.scope: Consumed 10.593s CPU time.
Dec 13 14:00:29.097285 containerd[1508]: time="2024-12-13T14:00:29.097247560Z" level=info msg="StopContainer for \"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead\" returns successfully"
Dec 13 14:00:29.103776 containerd[1508]: time="2024-12-13T14:00:29.103731223Z" level=info msg="StopPodSandbox for \"777b1af8a1cb1c2512ab25effbb4436596ac09b251409d3b85ec49dd5c477b86\""
Dec 13 14:00:29.110817 containerd[1508]: time="2024-12-13T14:00:29.105900343Z" level=info msg="Container to stop \"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:00:29.117521 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-777b1af8a1cb1c2512ab25effbb4436596ac09b251409d3b85ec49dd5c477b86-shm.mount: Deactivated successfully.
Dec 13 14:00:29.139727 systemd[1]: cri-containerd-777b1af8a1cb1c2512ab25effbb4436596ac09b251409d3b85ec49dd5c477b86.scope: Deactivated successfully.
Dec 13 14:00:29.147181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5-rootfs.mount: Deactivated successfully.
Dec 13 14:00:29.157085 containerd[1508]: time="2024-12-13T14:00:29.156252584Z" level=info msg="shim disconnected" id=fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5 namespace=k8s.io
Dec 13 14:00:29.157085 containerd[1508]: time="2024-12-13T14:00:29.156632215Z" level=warning msg="cleaning up after shim disconnected" id=fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5 namespace=k8s.io
Dec 13 14:00:29.157085 containerd[1508]: time="2024-12-13T14:00:29.156835691Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:00:29.182035 kubelet[2758]: E1213 14:00:29.181824 2758 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:00:29.189529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-777b1af8a1cb1c2512ab25effbb4436596ac09b251409d3b85ec49dd5c477b86-rootfs.mount: Deactivated successfully.
Dec 13 14:00:29.195558 containerd[1508]: time="2024-12-13T14:00:29.195516205Z" level=info msg="StopContainer for \"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5\" returns successfully"
Dec 13 14:00:29.196175 containerd[1508]: time="2024-12-13T14:00:29.196140920Z" level=info msg="StopPodSandbox for \"af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d\""
Dec 13 14:00:29.196260 containerd[1508]: time="2024-12-13T14:00:29.196177145Z" level=info msg="Container to stop \"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:00:29.196260 containerd[1508]: time="2024-12-13T14:00:29.196220169Z" level=info msg="Container to stop \"50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:00:29.196413 containerd[1508]: time="2024-12-13T14:00:29.196262871Z" level=info msg="Container to stop \"baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:00:29.196413 containerd[1508]: time="2024-12-13T14:00:29.196284998Z" level=info msg="Container to stop \"d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:00:29.196413 containerd[1508]: time="2024-12-13T14:00:29.196335813Z" level=info msg="Container to stop \"c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:00:29.199551 containerd[1508]: time="2024-12-13T14:00:29.199504571Z" level=info msg="shim disconnected" id=777b1af8a1cb1c2512ab25effbb4436596ac09b251409d3b85ec49dd5c477b86 namespace=k8s.io
Dec 13 14:00:29.199641 containerd[1508]: time="2024-12-13T14:00:29.199554706Z" level=warning msg="cleaning up after shim disconnected" id=777b1af8a1cb1c2512ab25effbb4436596ac09b251409d3b85ec49dd5c477b86 namespace=k8s.io
Dec 13 14:00:29.199641 containerd[1508]: time="2024-12-13T14:00:29.199571048Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:00:29.210221 systemd[1]: cri-containerd-af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d.scope: Deactivated successfully.
Dec 13 14:00:29.224374 containerd[1508]: time="2024-12-13T14:00:29.224212913Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:00:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 14:00:29.227538 containerd[1508]: time="2024-12-13T14:00:29.227496014Z" level=info msg="TearDown network for sandbox \"777b1af8a1cb1c2512ab25effbb4436596ac09b251409d3b85ec49dd5c477b86\" successfully"
Dec 13 14:00:29.227538 containerd[1508]: time="2024-12-13T14:00:29.227530718Z" level=info msg="StopPodSandbox for \"777b1af8a1cb1c2512ab25effbb4436596ac09b251409d3b85ec49dd5c477b86\" returns successfully"
Dec 13 14:00:29.261565 containerd[1508]: time="2024-12-13T14:00:29.261140128Z" level=info msg="shim disconnected" id=af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d namespace=k8s.io
Dec 13 14:00:29.261846 containerd[1508]: time="2024-12-13T14:00:29.261760128Z" level=warning msg="cleaning up after shim disconnected" id=af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d namespace=k8s.io
Dec 13 14:00:29.261846 containerd[1508]: time="2024-12-13T14:00:29.261783966Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:00:29.285638 containerd[1508]: time="2024-12-13T14:00:29.285549514Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:00:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 14:00:29.287226 containerd[1508]: time="2024-12-13T14:00:29.287186050Z" level=info msg="TearDown network for sandbox \"af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d\" successfully"
Dec 13 14:00:29.287226 containerd[1508]: time="2024-12-13T14:00:29.287220976Z" level=info msg="StopPodSandbox for \"af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d\" returns successfully"
Dec 13 14:00:29.307068 kubelet[2758]: I1213 14:00:29.306969 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d8e0e9b-b144-4e04-9e2c-1198c1ae9000-cilium-config-path\") pod \"7d8e0e9b-b144-4e04-9e2c-1198c1ae9000\" (UID: \"7d8e0e9b-b144-4e04-9e2c-1198c1ae9000\") "
Dec 13 14:00:29.307068 kubelet[2758]: I1213 14:00:29.307053 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmg95\" (UniqueName: \"kubernetes.io/projected/7d8e0e9b-b144-4e04-9e2c-1198c1ae9000-kube-api-access-rmg95\") pod \"7d8e0e9b-b144-4e04-9e2c-1198c1ae9000\" (UID: \"7d8e0e9b-b144-4e04-9e2c-1198c1ae9000\") "
Dec 13 14:00:29.316510 kubelet[2758]: I1213 14:00:29.313062 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d8e0e9b-b144-4e04-9e2c-1198c1ae9000-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7d8e0e9b-b144-4e04-9e2c-1198c1ae9000" (UID: "7d8e0e9b-b144-4e04-9e2c-1198c1ae9000"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:00:29.337199 kubelet[2758]: I1213 14:00:29.337076 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d8e0e9b-b144-4e04-9e2c-1198c1ae9000-kube-api-access-rmg95" (OuterVolumeSpecName: "kube-api-access-rmg95") pod "7d8e0e9b-b144-4e04-9e2c-1198c1ae9000" (UID: "7d8e0e9b-b144-4e04-9e2c-1198c1ae9000"). InnerVolumeSpecName "kube-api-access-rmg95". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:00:29.408360 kubelet[2758]: I1213 14:00:29.407572 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cilium-config-path\") pod \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") "
Dec 13 14:00:29.408360 kubelet[2758]: I1213 14:00:29.407674 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-clustermesh-secrets\") pod \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") "
Dec 13 14:00:29.408360 kubelet[2758]: I1213 14:00:29.407718 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-host-proc-sys-kernel\") pod \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") "
Dec 13 14:00:29.408360 kubelet[2758]: I1213 14:00:29.407749 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-lib-modules\") pod \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") "
Dec 13 14:00:29.408360 kubelet[2758]: I1213 14:00:29.407786 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cilium-run\") pod \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") "
Dec 13 14:00:29.408360 kubelet[2758]: I1213 14:00:29.407815 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName:
\"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-hostproc\") pod \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " Dec 13 14:00:29.408887 kubelet[2758]: I1213 14:00:29.407845 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cni-path\") pod \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " Dec 13 14:00:29.408887 kubelet[2758]: I1213 14:00:29.407870 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-bpf-maps\") pod \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " Dec 13 14:00:29.408887 kubelet[2758]: I1213 14:00:29.407896 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-xtables-lock\") pod \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " Dec 13 14:00:29.408887 kubelet[2758]: I1213 14:00:29.407943 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-etc-cni-netd\") pod \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " Dec 13 14:00:29.408887 kubelet[2758]: I1213 14:00:29.407973 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cilium-cgroup\") pod \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " Dec 13 14:00:29.408887 kubelet[2758]: I1213 14:00:29.408007 2758 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jvc5z\" (UniqueName: \"kubernetes.io/projected/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-kube-api-access-jvc5z\") pod \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " Dec 13 14:00:29.409251 kubelet[2758]: I1213 14:00:29.408051 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-hubble-tls\") pod \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " Dec 13 14:00:29.409251 kubelet[2758]: I1213 14:00:29.408087 2758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-host-proc-sys-net\") pod \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\" (UID: \"9737dbab-2dd5-4c28-9499-7b32a40f1ac5\") " Dec 13 14:00:29.410399 kubelet[2758]: I1213 14:00:29.409387 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cni-path" (OuterVolumeSpecName: "cni-path") pod "9737dbab-2dd5-4c28-9499-7b32a40f1ac5" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:00:29.413195 kubelet[2758]: I1213 14:00:29.413148 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9737dbab-2dd5-4c28-9499-7b32a40f1ac5" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:00:29.414780 kubelet[2758]: I1213 14:00:29.414180 2758 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d8e0e9b-b144-4e04-9e2c-1198c1ae9000-cilium-config-path\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.414780 kubelet[2758]: I1213 14:00:29.414219 2758 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rmg95\" (UniqueName: \"kubernetes.io/projected/7d8e0e9b-b144-4e04-9e2c-1198c1ae9000-kube-api-access-rmg95\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.414780 kubelet[2758]: I1213 14:00:29.414268 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9737dbab-2dd5-4c28-9499-7b32a40f1ac5" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:00:29.414780 kubelet[2758]: I1213 14:00:29.414332 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9737dbab-2dd5-4c28-9499-7b32a40f1ac5" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:00:29.414780 kubelet[2758]: I1213 14:00:29.414365 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9737dbab-2dd5-4c28-9499-7b32a40f1ac5" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:00:29.415086 kubelet[2758]: I1213 14:00:29.414392 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9737dbab-2dd5-4c28-9499-7b32a40f1ac5" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:00:29.415086 kubelet[2758]: I1213 14:00:29.414419 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9737dbab-2dd5-4c28-9499-7b32a40f1ac5" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:00:29.417331 kubelet[2758]: I1213 14:00:29.417214 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9737dbab-2dd5-4c28-9499-7b32a40f1ac5" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:00:29.417423 kubelet[2758]: I1213 14:00:29.417372 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9737dbab-2dd5-4c28-9499-7b32a40f1ac5" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:00:29.417423 kubelet[2758]: I1213 14:00:29.417408 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9737dbab-2dd5-4c28-9499-7b32a40f1ac5" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:00:29.417553 kubelet[2758]: I1213 14:00:29.417436 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9737dbab-2dd5-4c28-9499-7b32a40f1ac5" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:00:29.417553 kubelet[2758]: I1213 14:00:29.417464 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-hostproc" (OuterVolumeSpecName: "hostproc") pod "9737dbab-2dd5-4c28-9499-7b32a40f1ac5" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:00:29.418473 kubelet[2758]: I1213 14:00:29.418432 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-kube-api-access-jvc5z" (OuterVolumeSpecName: "kube-api-access-jvc5z") pod "9737dbab-2dd5-4c28-9499-7b32a40f1ac5" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5"). InnerVolumeSpecName "kube-api-access-jvc5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:00:29.420427 kubelet[2758]: I1213 14:00:29.420394 2758 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9737dbab-2dd5-4c28-9499-7b32a40f1ac5" (UID: "9737dbab-2dd5-4c28-9499-7b32a40f1ac5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:00:29.446630 kubelet[2758]: I1213 14:00:29.445761 2758 scope.go:117] "RemoveContainer" containerID="9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead" Dec 13 14:00:29.449479 systemd[1]: Removed slice kubepods-besteffort-pod7d8e0e9b_b144_4e04_9e2c_1198c1ae9000.slice - libcontainer container kubepods-besteffort-pod7d8e0e9b_b144_4e04_9e2c_1198c1ae9000.slice. Dec 13 14:00:29.457519 systemd[1]: Removed slice kubepods-burstable-pod9737dbab_2dd5_4c28_9499_7b32a40f1ac5.slice - libcontainer container kubepods-burstable-pod9737dbab_2dd5_4c28_9499_7b32a40f1ac5.slice. Dec 13 14:00:29.458145 systemd[1]: kubepods-burstable-pod9737dbab_2dd5_4c28_9499_7b32a40f1ac5.slice: Consumed 10.723s CPU time. 
Dec 13 14:00:29.486182 containerd[1508]: time="2024-12-13T14:00:29.486045017Z" level=info msg="RemoveContainer for \"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead\"" Dec 13 14:00:29.495325 containerd[1508]: time="2024-12-13T14:00:29.494591343Z" level=info msg="RemoveContainer for \"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead\" returns successfully" Dec 13 14:00:29.495514 kubelet[2758]: I1213 14:00:29.494989 2758 scope.go:117] "RemoveContainer" containerID="9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead" Dec 13 14:00:29.495594 containerd[1508]: time="2024-12-13T14:00:29.495284160Z" level=error msg="ContainerStatus for \"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead\": not found" Dec 13 14:00:29.501121 kubelet[2758]: E1213 14:00:29.498844 2758 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead\": not found" containerID="9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead" Dec 13 14:00:29.513187 kubelet[2758]: I1213 14:00:29.513132 2758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead"} err="failed to get container status \"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead\": rpc error: code = NotFound desc = an error occurred when try to find container \"9181f3cb0b816870d0d71c6f1a4474c9106e5d42a195acf615cc6e23df806ead\": not found" Dec 13 14:00:29.513187 kubelet[2758]: I1213 14:00:29.513183 2758 scope.go:117] "RemoveContainer" containerID="fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5" Dec 13 
14:00:29.515783 kubelet[2758]: I1213 14:00:29.515269 2758 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-bpf-maps\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.515783 kubelet[2758]: I1213 14:00:29.515328 2758 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-xtables-lock\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.515783 kubelet[2758]: I1213 14:00:29.515348 2758 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-etc-cni-netd\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.515783 kubelet[2758]: I1213 14:00:29.515372 2758 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jvc5z\" (UniqueName: \"kubernetes.io/projected/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-kube-api-access-jvc5z\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.515783 kubelet[2758]: I1213 14:00:29.515399 2758 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cilium-cgroup\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.515783 kubelet[2758]: I1213 14:00:29.515421 2758 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-hubble-tls\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.515783 kubelet[2758]: I1213 14:00:29.515454 2758 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-host-proc-sys-net\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath 
\"\"" Dec 13 14:00:29.515783 kubelet[2758]: I1213 14:00:29.515475 2758 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-clustermesh-secrets\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.516248 kubelet[2758]: I1213 14:00:29.515494 2758 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cilium-config-path\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.516248 kubelet[2758]: I1213 14:00:29.515510 2758 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-lib-modules\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.516248 kubelet[2758]: I1213 14:00:29.515527 2758 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-host-proc-sys-kernel\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.516248 kubelet[2758]: I1213 14:00:29.515543 2758 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-hostproc\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.516248 kubelet[2758]: I1213 14:00:29.515560 2758 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cilium-run\") on node \"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.516248 kubelet[2758]: I1213 14:00:29.515576 2758 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9737dbab-2dd5-4c28-9499-7b32a40f1ac5-cni-path\") on node 
\"srv-3exgq.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:00:29.520314 containerd[1508]: time="2024-12-13T14:00:29.518490861Z" level=info msg="RemoveContainer for \"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5\"" Dec 13 14:00:29.523870 containerd[1508]: time="2024-12-13T14:00:29.523779902Z" level=info msg="RemoveContainer for \"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5\" returns successfully" Dec 13 14:00:29.528823 kubelet[2758]: I1213 14:00:29.528059 2758 scope.go:117] "RemoveContainer" containerID="c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98" Dec 13 14:00:29.532308 containerd[1508]: time="2024-12-13T14:00:29.530938705Z" level=info msg="RemoveContainer for \"c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98\"" Dec 13 14:00:29.535176 containerd[1508]: time="2024-12-13T14:00:29.535069017Z" level=info msg="RemoveContainer for \"c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98\" returns successfully" Dec 13 14:00:29.538462 kubelet[2758]: I1213 14:00:29.538429 2758 scope.go:117] "RemoveContainer" containerID="d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2" Dec 13 14:00:29.542477 containerd[1508]: time="2024-12-13T14:00:29.542427740Z" level=info msg="RemoveContainer for \"d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2\"" Dec 13 14:00:29.546014 containerd[1508]: time="2024-12-13T14:00:29.545980284Z" level=info msg="RemoveContainer for \"d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2\" returns successfully" Dec 13 14:00:29.546355 kubelet[2758]: I1213 14:00:29.546206 2758 scope.go:117] "RemoveContainer" containerID="baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348" Dec 13 14:00:29.549428 containerd[1508]: time="2024-12-13T14:00:29.548617978Z" level=info msg="RemoveContainer for \"baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348\"" Dec 13 14:00:29.553566 containerd[1508]: 
time="2024-12-13T14:00:29.553172196Z" level=info msg="RemoveContainer for \"baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348\" returns successfully" Dec 13 14:00:29.553744 kubelet[2758]: I1213 14:00:29.553472 2758 scope.go:117] "RemoveContainer" containerID="50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2" Dec 13 14:00:29.555022 containerd[1508]: time="2024-12-13T14:00:29.554970253Z" level=info msg="RemoveContainer for \"50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2\"" Dec 13 14:00:29.557940 containerd[1508]: time="2024-12-13T14:00:29.557908095Z" level=info msg="RemoveContainer for \"50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2\" returns successfully" Dec 13 14:00:29.558378 kubelet[2758]: I1213 14:00:29.558216 2758 scope.go:117] "RemoveContainer" containerID="fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5" Dec 13 14:00:29.558693 containerd[1508]: time="2024-12-13T14:00:29.558589237Z" level=error msg="ContainerStatus for \"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5\": not found" Dec 13 14:00:29.559140 kubelet[2758]: E1213 14:00:29.559017 2758 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5\": not found" containerID="fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5" Dec 13 14:00:29.559140 kubelet[2758]: I1213 14:00:29.559097 2758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5"} err="failed to get container status \"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"fc35a2c70f8ab3050196a55ba3ec56e022cdfd3b06d78ea35de635a765256fe5\": not found" Dec 13 14:00:29.559645 kubelet[2758]: I1213 14:00:29.559362 2758 scope.go:117] "RemoveContainer" containerID="c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98" Dec 13 14:00:29.560012 containerd[1508]: time="2024-12-13T14:00:29.559870523Z" level=error msg="ContainerStatus for \"c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98\": not found" Dec 13 14:00:29.560351 kubelet[2758]: E1213 14:00:29.560203 2758 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98\": not found" containerID="c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98" Dec 13 14:00:29.560351 kubelet[2758]: I1213 14:00:29.560262 2758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98"} err="failed to get container status \"c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98\": rpc error: code = NotFound desc = an error occurred when try to find container \"c74adc6d9e5e00d37387e377226bff4061aa83f5b141e5280646621e32a3ad98\": not found" Dec 13 14:00:29.560351 kubelet[2758]: I1213 14:00:29.560280 2758 scope.go:117] "RemoveContainer" containerID="d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2" Dec 13 14:00:29.561590 containerd[1508]: time="2024-12-13T14:00:29.560814286Z" level=error msg="ContainerStatus for \"d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try 
to find container \"d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2\": not found" Dec 13 14:00:29.561677 kubelet[2758]: E1213 14:00:29.560955 2758 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2\": not found" containerID="d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2" Dec 13 14:00:29.561677 kubelet[2758]: I1213 14:00:29.560989 2758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2"} err="failed to get container status \"d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"d15e67cc1e3e88c52e730f2cb837e07599ea9afca62365d69e9b487f6b4453a2\": not found" Dec 13 14:00:29.561677 kubelet[2758]: I1213 14:00:29.561014 2758 scope.go:117] "RemoveContainer" containerID="baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348" Dec 13 14:00:29.566413 containerd[1508]: time="2024-12-13T14:00:29.561205977Z" level=error msg="ContainerStatus for \"baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348\": not found" Dec 13 14:00:29.566413 containerd[1508]: time="2024-12-13T14:00:29.566343529Z" level=error msg="ContainerStatus for \"50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2\": not found" Dec 13 14:00:29.566537 kubelet[2758]: E1213 14:00:29.566000 2758 remote_runtime.go:432] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348\": not found" containerID="baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348" Dec 13 14:00:29.566537 kubelet[2758]: I1213 14:00:29.566066 2758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348"} err="failed to get container status \"baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348\": rpc error: code = NotFound desc = an error occurred when try to find container \"baac7c88a3c6bd80ac8f0bc42661c0e2af4e9e0755d4f574230c4284a855d348\": not found" Dec 13 14:00:29.566537 kubelet[2758]: I1213 14:00:29.566083 2758 scope.go:117] "RemoveContainer" containerID="50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2" Dec 13 14:00:29.566897 kubelet[2758]: E1213 14:00:29.566815 2758 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2\": not found" containerID="50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2" Dec 13 14:00:29.566897 kubelet[2758]: I1213 14:00:29.566857 2758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2"} err="failed to get container status \"50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"50988bd240f7aa63d724b4c09d0442e410e3ca44ec49bffda212986d092385e2\": not found" Dec 13 14:00:29.872452 kubelet[2758]: I1213 14:00:29.870845 2758 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7d8e0e9b-b144-4e04-9e2c-1198c1ae9000" 
path="/var/lib/kubelet/pods/7d8e0e9b-b144-4e04-9e2c-1198c1ae9000/volumes" Dec 13 14:00:29.872452 kubelet[2758]: I1213 14:00:29.871765 2758 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9737dbab-2dd5-4c28-9499-7b32a40f1ac5" path="/var/lib/kubelet/pods/9737dbab-2dd5-4c28-9499-7b32a40f1ac5/volumes" Dec 13 14:00:29.998168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d-rootfs.mount: Deactivated successfully. Dec 13 14:00:29.999063 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af68ad367de1b2acd9721fdb519c71fffd5951c262110b51aa0168a3c63cd85d-shm.mount: Deactivated successfully. Dec 13 14:00:29.999205 systemd[1]: var-lib-kubelet-pods-9737dbab\x2d2dd5\x2d4c28\x2d9499\x2d7b32a40f1ac5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djvc5z.mount: Deactivated successfully. Dec 13 14:00:29.999390 systemd[1]: var-lib-kubelet-pods-7d8e0e9b\x2db144\x2d4e04\x2d9e2c\x2d1198c1ae9000-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drmg95.mount: Deactivated successfully. Dec 13 14:00:29.999927 systemd[1]: var-lib-kubelet-pods-9737dbab\x2d2dd5\x2d4c28\x2d9499\x2d7b32a40f1ac5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:00:30.000075 systemd[1]: var-lib-kubelet-pods-9737dbab\x2d2dd5\x2d4c28\x2d9499\x2d7b32a40f1ac5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:00:30.906537 sshd[4399]: Connection closed by 139.178.68.195 port 46092 Dec 13 14:00:30.908989 sshd-session[4397]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:30.913762 systemd-logind[1486]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:00:30.915211 systemd[1]: sshd@28-10.244.15.30:22-139.178.68.195:46092.service: Deactivated successfully. Dec 13 14:00:30.918649 systemd[1]: session-25.scope: Deactivated successfully. 
Dec 13 14:00:30.921011 systemd-logind[1486]: Removed session 25.
Dec 13 14:00:31.065667 systemd[1]: Started sshd@29-10.244.15.30:22-139.178.68.195:33304.service - OpenSSH per-connection server daemon (139.178.68.195:33304).
Dec 13 14:00:31.984338 sshd[4561]: Accepted publickey for core from 139.178.68.195 port 33304 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:00:31.986657 sshd-session[4561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:00:31.995456 systemd-logind[1486]: New session 26 of user core.
Dec 13 14:00:32.004591 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 14:00:33.357827 kubelet[2758]: I1213 14:00:33.357678 2758 topology_manager.go:215] "Topology Admit Handler" podUID="1dd8be69-97b2-49f6-8b33-dcb94675bb8a" podNamespace="kube-system" podName="cilium-hmqlj"
Dec 13 14:00:33.359049 kubelet[2758]: E1213 14:00:33.357912 2758 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9737dbab-2dd5-4c28-9499-7b32a40f1ac5" containerName="mount-cgroup"
Dec 13 14:00:33.359049 kubelet[2758]: E1213 14:00:33.357944 2758 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9737dbab-2dd5-4c28-9499-7b32a40f1ac5" containerName="apply-sysctl-overwrites"
Dec 13 14:00:33.359049 kubelet[2758]: E1213 14:00:33.357958 2758 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9737dbab-2dd5-4c28-9499-7b32a40f1ac5" containerName="mount-bpf-fs"
Dec 13 14:00:33.359049 kubelet[2758]: E1213 14:00:33.357983 2758 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9737dbab-2dd5-4c28-9499-7b32a40f1ac5" containerName="cilium-agent"
Dec 13 14:00:33.359049 kubelet[2758]: E1213 14:00:33.357997 2758 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7d8e0e9b-b144-4e04-9e2c-1198c1ae9000" containerName="cilium-operator"
Dec 13 14:00:33.359049 kubelet[2758]: E1213 14:00:33.358023 2758 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9737dbab-2dd5-4c28-9499-7b32a40f1ac5" containerName="clean-cilium-state"
Dec 13 14:00:33.359049 kubelet[2758]: I1213 14:00:33.358123 2758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d8e0e9b-b144-4e04-9e2c-1198c1ae9000" containerName="cilium-operator"
Dec 13 14:00:33.359049 kubelet[2758]: I1213 14:00:33.358141 2758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9737dbab-2dd5-4c28-9499-7b32a40f1ac5" containerName="cilium-agent"
Dec 13 14:00:33.394533 systemd[1]: Created slice kubepods-burstable-pod1dd8be69_97b2_49f6_8b33_dcb94675bb8a.slice - libcontainer container kubepods-burstable-pod1dd8be69_97b2_49f6_8b33_dcb94675bb8a.slice.
Dec 13 14:00:33.443506 kubelet[2758]: I1213 14:00:33.443436 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-cilium-ipsec-secrets\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.443506 kubelet[2758]: I1213 14:00:33.443510 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-hubble-tls\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.443977 kubelet[2758]: I1213 14:00:33.443548 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-host-proc-sys-kernel\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.443977 kubelet[2758]: I1213 14:00:33.443585 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-clustermesh-secrets\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.443977 kubelet[2758]: I1213 14:00:33.443618 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-etc-cni-netd\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.443977 kubelet[2758]: I1213 14:00:33.443700 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-xtables-lock\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.443977 kubelet[2758]: I1213 14:00:33.443737 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-host-proc-sys-net\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.443977 kubelet[2758]: I1213 14:00:33.443807 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-cilium-cgroup\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.445411 kubelet[2758]: I1213 14:00:33.443867 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89c5p\" (UniqueName: \"kubernetes.io/projected/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-kube-api-access-89c5p\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.445411 kubelet[2758]: I1213 14:00:33.443965 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-bpf-maps\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.445411 kubelet[2758]: I1213 14:00:33.444002 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-hostproc\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.445411 kubelet[2758]: I1213 14:00:33.444034 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-lib-modules\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.445411 kubelet[2758]: I1213 14:00:33.444064 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-cilium-config-path\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.445411 kubelet[2758]: I1213 14:00:33.444117 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-cni-path\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.445754 kubelet[2758]: I1213 14:00:33.444156 2758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1dd8be69-97b2-49f6-8b33-dcb94675bb8a-cilium-run\") pod \"cilium-hmqlj\" (UID: \"1dd8be69-97b2-49f6-8b33-dcb94675bb8a\") " pod="kube-system/cilium-hmqlj"
Dec 13 14:00:33.452102 sshd[4563]: Connection closed by 139.178.68.195 port 33304
Dec 13 14:00:33.451828 sshd-session[4561]: pam_unix(sshd:session): session closed for user core
Dec 13 14:00:33.463882 systemd-logind[1486]: Session 26 logged out. Waiting for processes to exit.
Dec 13 14:00:33.465773 systemd[1]: sshd@29-10.244.15.30:22-139.178.68.195:33304.service: Deactivated successfully.
Dec 13 14:00:33.471940 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 14:00:33.474402 systemd-logind[1486]: Removed session 26.
Dec 13 14:00:33.612762 systemd[1]: Started sshd@30-10.244.15.30:22-139.178.68.195:33312.service - OpenSSH per-connection server daemon (139.178.68.195:33312).
Dec 13 14:00:33.703422 containerd[1508]: time="2024-12-13T14:00:33.703269764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hmqlj,Uid:1dd8be69-97b2-49f6-8b33-dcb94675bb8a,Namespace:kube-system,Attempt:0,}"
Dec 13 14:00:33.746359 containerd[1508]: time="2024-12-13T14:00:33.744969791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:00:33.746359 containerd[1508]: time="2024-12-13T14:00:33.745142159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:00:33.746359 containerd[1508]: time="2024-12-13T14:00:33.745167802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:00:33.746359 containerd[1508]: time="2024-12-13T14:00:33.745395988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:00:33.786554 systemd[1]: Started cri-containerd-1d5dee47ac632d8539032117c8656fbd3b2b4dad3fa601f9b58ed903c4b86eaa.scope - libcontainer container 1d5dee47ac632d8539032117c8656fbd3b2b4dad3fa601f9b58ed903c4b86eaa.
Dec 13 14:00:33.831368 containerd[1508]: time="2024-12-13T14:00:33.831242216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hmqlj,Uid:1dd8be69-97b2-49f6-8b33-dcb94675bb8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d5dee47ac632d8539032117c8656fbd3b2b4dad3fa601f9b58ed903c4b86eaa\""
Dec 13 14:00:33.839045 containerd[1508]: time="2024-12-13T14:00:33.838977341Z" level=info msg="CreateContainer within sandbox \"1d5dee47ac632d8539032117c8656fbd3b2b4dad3fa601f9b58ed903c4b86eaa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:00:33.888084 containerd[1508]: time="2024-12-13T14:00:33.887907580Z" level=info msg="CreateContainer within sandbox \"1d5dee47ac632d8539032117c8656fbd3b2b4dad3fa601f9b58ed903c4b86eaa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0b0704a5a5ef15048a9cb1b536cddd4a4a059ab2f24093aa2ea4b94eadfedde0\""
Dec 13 14:00:33.889578 containerd[1508]: time="2024-12-13T14:00:33.889531224Z" level=info msg="StartContainer for \"0b0704a5a5ef15048a9cb1b536cddd4a4a059ab2f24093aa2ea4b94eadfedde0\""
Dec 13 14:00:33.934684 systemd[1]: Started cri-containerd-0b0704a5a5ef15048a9cb1b536cddd4a4a059ab2f24093aa2ea4b94eadfedde0.scope - libcontainer container 0b0704a5a5ef15048a9cb1b536cddd4a4a059ab2f24093aa2ea4b94eadfedde0.
Dec 13 14:00:33.985721 containerd[1508]: time="2024-12-13T14:00:33.985643879Z" level=info msg="StartContainer for \"0b0704a5a5ef15048a9cb1b536cddd4a4a059ab2f24093aa2ea4b94eadfedde0\" returns successfully"
Dec 13 14:00:34.005045 systemd[1]: cri-containerd-0b0704a5a5ef15048a9cb1b536cddd4a4a059ab2f24093aa2ea4b94eadfedde0.scope: Deactivated successfully.
Dec 13 14:00:34.059451 containerd[1508]: time="2024-12-13T14:00:34.059211034Z" level=info msg="shim disconnected" id=0b0704a5a5ef15048a9cb1b536cddd4a4a059ab2f24093aa2ea4b94eadfedde0 namespace=k8s.io
Dec 13 14:00:34.060035 containerd[1508]: time="2024-12-13T14:00:34.059449391Z" level=warning msg="cleaning up after shim disconnected" id=0b0704a5a5ef15048a9cb1b536cddd4a4a059ab2f24093aa2ea4b94eadfedde0 namespace=k8s.io
Dec 13 14:00:34.060035 containerd[1508]: time="2024-12-13T14:00:34.059579251Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:00:34.184218 kubelet[2758]: E1213 14:00:34.183989 2758 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:00:34.477809 containerd[1508]: time="2024-12-13T14:00:34.477364193Z" level=info msg="CreateContainer within sandbox \"1d5dee47ac632d8539032117c8656fbd3b2b4dad3fa601f9b58ed903c4b86eaa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:00:34.500244 containerd[1508]: time="2024-12-13T14:00:34.500002159Z" level=info msg="CreateContainer within sandbox \"1d5dee47ac632d8539032117c8656fbd3b2b4dad3fa601f9b58ed903c4b86eaa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"693b8e9cd0b5a3e0818915e7c367685a6f6cd57c88928f42532a25f3f6ee0e55\""
Dec 13 14:00:34.505101 containerd[1508]: time="2024-12-13T14:00:34.501138706Z" level=info msg="StartContainer for \"693b8e9cd0b5a3e0818915e7c367685a6f6cd57c88928f42532a25f3f6ee0e55\""
Dec 13 14:00:34.521792 sshd[4576]: Accepted publickey for core from 139.178.68.195 port 33312 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:00:34.525223 sshd-session[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:00:34.533944 systemd-logind[1486]: New session 27 of user core.
Dec 13 14:00:34.542798 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 14:00:34.556564 systemd[1]: Started cri-containerd-693b8e9cd0b5a3e0818915e7c367685a6f6cd57c88928f42532a25f3f6ee0e55.scope - libcontainer container 693b8e9cd0b5a3e0818915e7c367685a6f6cd57c88928f42532a25f3f6ee0e55.
Dec 13 14:00:34.608701 containerd[1508]: time="2024-12-13T14:00:34.608600700Z" level=info msg="StartContainer for \"693b8e9cd0b5a3e0818915e7c367685a6f6cd57c88928f42532a25f3f6ee0e55\" returns successfully"
Dec 13 14:00:34.622434 systemd[1]: cri-containerd-693b8e9cd0b5a3e0818915e7c367685a6f6cd57c88928f42532a25f3f6ee0e55.scope: Deactivated successfully.
Dec 13 14:00:34.651227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-693b8e9cd0b5a3e0818915e7c367685a6f6cd57c88928f42532a25f3f6ee0e55-rootfs.mount: Deactivated successfully.
Dec 13 14:00:34.658521 containerd[1508]: time="2024-12-13T14:00:34.658398214Z" level=info msg="shim disconnected" id=693b8e9cd0b5a3e0818915e7c367685a6f6cd57c88928f42532a25f3f6ee0e55 namespace=k8s.io
Dec 13 14:00:34.658700 containerd[1508]: time="2024-12-13T14:00:34.658508427Z" level=warning msg="cleaning up after shim disconnected" id=693b8e9cd0b5a3e0818915e7c367685a6f6cd57c88928f42532a25f3f6ee0e55 namespace=k8s.io
Dec 13 14:00:34.658700 containerd[1508]: time="2024-12-13T14:00:34.658543791Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:00:34.677818 containerd[1508]: time="2024-12-13T14:00:34.677738524Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:00:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 14:00:35.138151 sshd[4701]: Connection closed by 139.178.68.195 port 33312
Dec 13 14:00:35.140831 sshd-session[4576]: pam_unix(sshd:session): session closed for user core
Dec 13 14:00:35.147487 systemd[1]: sshd@30-10.244.15.30:22-139.178.68.195:33312.service: Deactivated successfully.
Dec 13 14:00:35.152030 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 14:00:35.153539 systemd-logind[1486]: Session 27 logged out. Waiting for processes to exit.
Dec 13 14:00:35.155192 systemd-logind[1486]: Removed session 27.
Dec 13 14:00:35.296606 systemd[1]: Started sshd@31-10.244.15.30:22-139.178.68.195:33326.service - OpenSSH per-connection server daemon (139.178.68.195:33326).
Dec 13 14:00:35.481798 containerd[1508]: time="2024-12-13T14:00:35.481097654Z" level=info msg="CreateContainer within sandbox \"1d5dee47ac632d8539032117c8656fbd3b2b4dad3fa601f9b58ed903c4b86eaa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:00:35.510430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2518039379.mount: Deactivated successfully.
Dec 13 14:00:35.540344 containerd[1508]: time="2024-12-13T14:00:35.540027037Z" level=info msg="CreateContainer within sandbox \"1d5dee47ac632d8539032117c8656fbd3b2b4dad3fa601f9b58ed903c4b86eaa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d052f2f2da6c2fbfb836ad023429fe00cd1df3fcf24fb4006e068e99a54e5bca\""
Dec 13 14:00:35.542364 containerd[1508]: time="2024-12-13T14:00:35.541853708Z" level=info msg="StartContainer for \"d052f2f2da6c2fbfb836ad023429fe00cd1df3fcf24fb4006e068e99a54e5bca\""
Dec 13 14:00:35.606261 systemd[1]: run-containerd-runc-k8s.io-d052f2f2da6c2fbfb836ad023429fe00cd1df3fcf24fb4006e068e99a54e5bca-runc.dt4aiS.mount: Deactivated successfully.
Dec 13 14:00:35.618519 systemd[1]: Started cri-containerd-d052f2f2da6c2fbfb836ad023429fe00cd1df3fcf24fb4006e068e99a54e5bca.scope - libcontainer container d052f2f2da6c2fbfb836ad023429fe00cd1df3fcf24fb4006e068e99a54e5bca.
Dec 13 14:00:35.677810 containerd[1508]: time="2024-12-13T14:00:35.676823725Z" level=info msg="StartContainer for \"d052f2f2da6c2fbfb836ad023429fe00cd1df3fcf24fb4006e068e99a54e5bca\" returns successfully"
Dec 13 14:00:35.691546 systemd[1]: cri-containerd-d052f2f2da6c2fbfb836ad023429fe00cd1df3fcf24fb4006e068e99a54e5bca.scope: Deactivated successfully.
Dec 13 14:00:35.730474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d052f2f2da6c2fbfb836ad023429fe00cd1df3fcf24fb4006e068e99a54e5bca-rootfs.mount: Deactivated successfully.
Dec 13 14:00:35.737562 containerd[1508]: time="2024-12-13T14:00:35.736535611Z" level=info msg="shim disconnected" id=d052f2f2da6c2fbfb836ad023429fe00cd1df3fcf24fb4006e068e99a54e5bca namespace=k8s.io
Dec 13 14:00:35.737562 containerd[1508]: time="2024-12-13T14:00:35.736692525Z" level=warning msg="cleaning up after shim disconnected" id=d052f2f2da6c2fbfb836ad023429fe00cd1df3fcf24fb4006e068e99a54e5bca namespace=k8s.io
Dec 13 14:00:35.737562 containerd[1508]: time="2024-12-13T14:00:35.736726548Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:00:36.208165 sshd[4750]: Accepted publickey for core from 139.178.68.195 port 33326 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A
Dec 13 14:00:36.210943 sshd-session[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:00:36.219970 systemd-logind[1486]: New session 28 of user core.
Dec 13 14:00:36.231646 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 14:00:36.496280 containerd[1508]: time="2024-12-13T14:00:36.491601192Z" level=info msg="CreateContainer within sandbox \"1d5dee47ac632d8539032117c8656fbd3b2b4dad3fa601f9b58ed903c4b86eaa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:00:36.513318 containerd[1508]: time="2024-12-13T14:00:36.511872546Z" level=info msg="CreateContainer within sandbox \"1d5dee47ac632d8539032117c8656fbd3b2b4dad3fa601f9b58ed903c4b86eaa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1dc28117902b5fb109788f7b189bebe4a2650478609a7671ca59b773c9a60b75\""
Dec 13 14:00:36.515721 containerd[1508]: time="2024-12-13T14:00:36.514204891Z" level=info msg="StartContainer for \"1dc28117902b5fb109788f7b189bebe4a2650478609a7671ca59b773c9a60b75\""
Dec 13 14:00:36.563712 systemd[1]: Started cri-containerd-1dc28117902b5fb109788f7b189bebe4a2650478609a7671ca59b773c9a60b75.scope - libcontainer container 1dc28117902b5fb109788f7b189bebe4a2650478609a7671ca59b773c9a60b75.
Dec 13 14:00:36.582029 kubelet[2758]: I1213 14:00:36.581532 2758 setters.go:568] "Node became not ready" node="srv-3exgq.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:00:36Z","lastTransitionTime":"2024-12-13T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:00:36.643254 systemd[1]: cri-containerd-1dc28117902b5fb109788f7b189bebe4a2650478609a7671ca59b773c9a60b75.scope: Deactivated successfully.
Dec 13 14:00:36.647020 containerd[1508]: time="2024-12-13T14:00:36.643220137Z" level=info msg="StartContainer for \"1dc28117902b5fb109788f7b189bebe4a2650478609a7671ca59b773c9a60b75\" returns successfully"
Dec 13 14:00:36.681785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dc28117902b5fb109788f7b189bebe4a2650478609a7671ca59b773c9a60b75-rootfs.mount: Deactivated successfully.
Dec 13 14:00:36.687816 containerd[1508]: time="2024-12-13T14:00:36.687652529Z" level=info msg="shim disconnected" id=1dc28117902b5fb109788f7b189bebe4a2650478609a7671ca59b773c9a60b75 namespace=k8s.io
Dec 13 14:00:36.687816 containerd[1508]: time="2024-12-13T14:00:36.687807679Z" level=warning msg="cleaning up after shim disconnected" id=1dc28117902b5fb109788f7b189bebe4a2650478609a7671ca59b773c9a60b75 namespace=k8s.io
Dec 13 14:00:36.688073 containerd[1508]: time="2024-12-13T14:00:36.687830499Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 14:00:37.506454 containerd[1508]: time="2024-12-13T14:00:37.506230225Z" level=info msg="CreateContainer within sandbox \"1d5dee47ac632d8539032117c8656fbd3b2b4dad3fa601f9b58ed903c4b86eaa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:00:37.530478 containerd[1508]: time="2024-12-13T14:00:37.530200351Z" level=info msg="CreateContainer within sandbox \"1d5dee47ac632d8539032117c8656fbd3b2b4dad3fa601f9b58ed903c4b86eaa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"40fdfbef244dab7d97a3a0e24e7dd77fbe2e1c1a848307c7070719956a1ea7bd\""
Dec 13 14:00:37.532845 containerd[1508]: time="2024-12-13T14:00:37.531528368Z" level=info msg="StartContainer for \"40fdfbef244dab7d97a3a0e24e7dd77fbe2e1c1a848307c7070719956a1ea7bd\""
Dec 13 14:00:37.620622 systemd[1]: run-containerd-runc-k8s.io-40fdfbef244dab7d97a3a0e24e7dd77fbe2e1c1a848307c7070719956a1ea7bd-runc.A8XESm.mount: Deactivated successfully.
Dec 13 14:00:37.638519 systemd[1]: Started cri-containerd-40fdfbef244dab7d97a3a0e24e7dd77fbe2e1c1a848307c7070719956a1ea7bd.scope - libcontainer container 40fdfbef244dab7d97a3a0e24e7dd77fbe2e1c1a848307c7070719956a1ea7bd.
Dec 13 14:00:37.769379 containerd[1508]: time="2024-12-13T14:00:37.768863079Z" level=info msg="StartContainer for \"40fdfbef244dab7d97a3a0e24e7dd77fbe2e1c1a848307c7070719956a1ea7bd\" returns successfully"
Dec 13 14:00:38.542076 kubelet[2758]: I1213 14:00:38.541943 2758 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-hmqlj" podStartSLOduration=5.541641519 podStartE2EDuration="5.541641519s" podCreationTimestamp="2024-12-13 14:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:00:38.539815547 +0000 UTC m=+154.871750685" watchObservedRunningTime="2024-12-13 14:00:38.541641519 +0000 UTC m=+154.873576644"
Dec 13 14:00:38.645361 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:00:42.605435 systemd-networkd[1417]: lxc_health: Link UP
Dec 13 14:00:42.614123 systemd-networkd[1417]: lxc_health: Gained carrier
Dec 13 14:00:43.689482 systemd[1]: run-containerd-runc-k8s.io-40fdfbef244dab7d97a3a0e24e7dd77fbe2e1c1a848307c7070719956a1ea7bd-runc.y7TA80.mount: Deactivated successfully.
Dec 13 14:00:43.819588 systemd-networkd[1417]: lxc_health: Gained IPv6LL
Dec 13 14:00:46.082654 systemd[1]: run-containerd-runc-k8s.io-40fdfbef244dab7d97a3a0e24e7dd77fbe2e1c1a848307c7070719956a1ea7bd-runc.YQZiRg.mount: Deactivated successfully.
Dec 13 14:00:48.347027 systemd[1]: run-containerd-runc-k8s.io-40fdfbef244dab7d97a3a0e24e7dd77fbe2e1c1a848307c7070719956a1ea7bd-runc.IDLjJt.mount: Deactivated successfully.
Dec 13 14:00:48.584859 sshd[4807]: Connection closed by 139.178.68.195 port 33326
Dec 13 14:00:48.586831 sshd-session[4750]: pam_unix(sshd:session): session closed for user core
Dec 13 14:00:48.603567 systemd[1]: sshd@31-10.244.15.30:22-139.178.68.195:33326.service: Deactivated successfully.
Dec 13 14:00:48.608126 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 14:00:48.612006 systemd-logind[1486]: Session 28 logged out. Waiting for processes to exit.
Dec 13 14:00:48.615061 systemd-logind[1486]: Removed session 28.